## Schumann Resonance Over Time Schumann resonances are global electromagnetic resonances, excited by lightning. The Schumann's resonance does not fluctuate much, but has 8 different frequencies ranging from 0. Three-dimensional (3D) cine (time-resolved) phase-contrast cardiovascular magnetic resonance (CMR) with three-directional velocity-encoding (4D Flow CMR) is a technique that permits visualization and evaluation of the pulsatile blood flows in the chambers of the heart and great thoracic vessels over the cardiac cycle in a single acquisition. Inner power & higher self awakening - Schumann Resonance Music - Schumann Frequency (7. 2 out of 5 stars 9 48. The exact frequency also varies over time. Earth's electrical resonance, also known as Schumann Resonance, is the result of the make up of Earth as a spherical resonator. As we are living on a planet with a specific dimension of the cavity Earth-ionosphere, there is a resonance frequency (named Schumann resonance) through which we absorb the vital energies of the Earth. For an example, they point to the human brain's electromagnetic waves that are synchronized to the Schumann frequency. Time series data consist of measurements of a variable over time. This may include normal tissue and glands, as well as areas of benign breast changes (e. Classic Sound and Feel Each Key Series saxophone is hand-finished to enhance its design, emphasizing consistent air resistance, uniformly balanced keys, and lively resonance throughout its entire range. SCHUMANN FREQUENCIES. Chronic venous insufficiency (CVI) is diagnosed using the following: Duplex ultrasound: Duplex ultrasound is a type of ultrasound for assessing blood flow and structure of the leg veins. They must put the patient at ease to get usable images. Since 1980 it has risen to over 12Hz. The composer Johannes Brahms was in love with Clara Schumann – but unfortunately she was married to the composer Robert Schumann, one of Brahms' best friends. Schumann resonance has been a natural and constant frequency of planet Earth, pulsating exactly at 7. The Schumann resonance spectrum is in the ELF band with a frequency of around 8 Hz (this value is much higher, 7. It is said that the Schumann Resonance is a breathing phenomenon of the Earth that is lasting from old time of the Earth creation and is giving a positive effect to the human brain. Schumann resonances 〰️ have served as timing signal for all biological life on earth 🌎 via lightning ⚡ from the beginning of time. Around 70% of women married before age 46, and the standardized marriage rates remained relatively stable during most of our study period. The Feet of Silence system consists of a cylindrically formed base, suspension attachments, the suspension itself and an inner -body with a damping layer for mechanical coupling to the components. The scientific validity and efficacy of the hGH biomarkers approach has been documented in multiple scientific publications for over a decade. The present study is based on simultaneous measurements of the atmospheric electric potential gradient (PG) and Schumann resonances at Nagycenk station (Hungary) from 1993 to 1996. Today’s maximum is Power 98 as previously reported. In the time domain, the value will always be in real numbers. The Schumann resonances occurring over an eight-hour period can be clearly seen at approximately 7. scientist Earle Williams (1992) constructed a powerful argument that links Schumann resonances to convection and ultimately to widespread tropical and/or global temperature. 
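As a rough illustration of where those mode frequencies come from, here is a minimal back-of-the-envelope sketch (my own illustrative calculation, not part of any monitoring system described above) that evaluates the ideal lossless Earth-ionosphere cavity formula f_n = (c / 2πa)·√(n(n+1)) and compares it with the approximate observed values quoted elsewhere in this text (7.8, 14, 20, 26, 33, 39, 45 Hz); the real peaks sit lower than the ideal estimate because the actual cavity is lossy.

```python
# Minimal sketch: ideal Earth-ionosphere cavity modes vs. the approximate
# observed Schumann resonance frequencies quoted in this text. Illustrative only.
import math

C = 299_792_458.0   # speed of light, m/s
A = 6.371e6         # mean Earth radius, m (assumed value)

def ideal_mode(n: int) -> float:
    """Lossless spherical-cavity estimate: f_n = (c / (2*pi*a)) * sqrt(n*(n+1))."""
    return C / (2 * math.pi * A) * math.sqrt(n * (n + 1))

observed = [7.8, 14, 20, 26, 33, 39, 45]  # Hz, as listed in the text

for n, f_obs in enumerate(observed, start=1):
    print(f"mode {n}: ideal ~{ideal_mode(n):5.1f} Hz, observed ~{f_obs:4.1f} Hz")
```

Because the mode spacing is fixed by the Earth's circumference, the resonance frequencies themselves are stable; what varies from day to day is mainly their amplitude and a fraction of a hertz of frequency, as the monitoring passages in this text note.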
This chart shows a plot of the Schumann Resonance frequency as monitored over time. It’s the measurement of 7. Herbert König demonstrated a connection between Schumann Resonance and brain rhythms. These two terms can apply to any motion that repeats over and over again – so we can talk about the frequency of a sound. 3D Psychoacoustic Soundscapes. ‘Yellowstone’ season 3 returned June 21 and picked up in the fallout of Tate’s kidnapping. These include increasing how strong your muscles are by doing things such as lifting weights. CSD mode may be more useful for such measurements as the later part of the impulse response can be noisy, obscuring the behaviour in the later slices. The Schumann resonance does not fluctuate much, but has 8 different frequencies ranging from 0. Increased alpha activity can be seen later in the waveforms, starting at around the time the blood-pressure wave reaches the brain. Earth's magnetic field has weakened by 15 per cent over the last 200 years. 83Hz, sound and light (red/green. 83 Hz - Schumann Resonance. But over time, people develop poor breathing habits. Scandinavian Journal of Rheumatology: Vol. Magnetic resonance technologists are part of the larger 2011 National Occupational Classification 3215: Medical radiation technologists. The Schumann frequency is very close to the alpha wave frequency used for relaxation and meditation. The changes of amplitude are called the amplitude envelope, as we’ll discuss in a later section. • Resonance knob to emphasize or suppress portions of the signal above or below the defined cutoff frequency. Well, it seems like that, and in this article, I will show you what I see when looking at data. Magnetic Resonance. In equation m is the. However, over several hours, GPS is more stable, and so the stability can be improved by phase-locking the PRS10 to GPS with a long time constant. On top of that, there's evidence that the Earth's average frequencies are drifting over time too. We also have not seen any evidence reported by other monitoring. My research focuses on the unfolding of human behavior over short timescales (e. One tag concept is based on the low-Q fundamental mode of dielectric resonators (DR) which exhibits peak scattering at its resonance frequency. CURRENT SPINAL RESEARCH: The Cutting Edge Treatment for the Spine. Today, access to information and increased world awareness also influence our sense of time. The lightning flashes from these storms, about 50. GAIAS BREATH - Schumann Frequency Music 7. The rotator cuff is a group of muscles and tendons that hold the shoulder joint in place and allow you to move your arm and shoulder. Let us assume that a certain quantity I(t) changes over time as I(t) = I₀ cos(ωt). A periodic force is applied to one end of the spring at 2 hertz, causing resonance. Over many years of monitoring the earth's magnetic field with the Global Coherence Monitoring System, which uses multiple recording stations strategically located around the earth, we have not observed any evidence for the claim that the frequencies of the Schumann resonances are changing beyond the normal diurnal variation. 2K by immersing it in liquid helium. The circuit vibrates and may produce a standing wave, depending on the frequency of the driver, the wavelength of the oscillating wave and the geometry of the circuit. The originality of his work pushed at emotional, structural and. In 2014, it was considered anomalous for the frequency to have risen from its usual 7. , fibroadenomas) and disease (breast cancer). 
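For readers who want to reproduce that kind of monitoring chart themselves, here is a minimal, hypothetical sketch of how the fundamental peak near 7.83 Hz can be tracked over time from an ELF magnetometer record using a spectrogram. The sample rate, noise level, and record length are all invented for illustration; real station data would replace the synthetic signal.

```python
# Minimal sketch: track the fundamental Schumann peak over time in a
# synthetic ELF record. All signal parameters here are assumed/illustrative.
import numpy as np
from scipy.signal import spectrogram

fs = 100.0                            # sample rate in Hz (assumed)
t = np.arange(0, 3600.0, 1 / fs)      # one hour of synthetic data
x = np.sin(2 * np.pi * 7.83 * t) + 2.0 * np.random.randn(t.size)

# Long FFT windows (~82 s) give ~0.012 Hz resolution around the ~7.8 Hz peak.
f, tt, Sxx = spectrogram(x, fs=fs, nperseg=8192, noverlap=4096)

band = (f > 5) & (f < 11)             # search only around the fundamental mode
peak_freq = f[band][np.argmax(Sxx[band, :], axis=0)]

print("median fundamental frequency: %.2f Hz" % np.median(peak_freq))
```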
So, if the fundamental Schumann Resonance is truly rising (and it isn't), why does this Earth-human coupling phenomenon, which occurs reliably and demonstrably every time the 7 Hz to 8 Hz range of frequencies (i. Magnetic resonance imaging (MRI)–based measurements of the brain have been proposed as aids in the diagnosis of Alzheimer disease (AD) and other types of dementia. Math skills. When we move away from this biologically attuned resonance field, it is highly likely to result in disorganization of our electrophysiology. This is a big deal and yet there is no publicly provided information on how this spike is impacting the earth’s geomagnetic field and how this radically effects the human body. the black, skipped, missing part showing on the schumann chart is a snapshot of this quantum time-line shift! this time-line shift was imprinted on the schumann resonance chart 1-15-20 and 1-16-20 for 35 hours straight and was reflected in the energy grid as a long-duration, blacked-out column!. Förster resonance energy transfer is named after the German scientist Theodor Förster. Diagnostic services are primarily used to help health care providers detect a problem, and diagnose disease or injury. 83 hz, but since the 1960's, the Schumann Resonance has been steadily on the rise. Re: Schumann Resonance Revisited I think there is a whole lot of territory to explore in the "inner universe". geospace magnetosphere - cut planes, density (updated image) geospace magnetosphere - cut planes, velocity. " A team of physicists from the Moscow Institute of Physics and Technology (MIPT), however, have come. -- theta wave --5 Hz to 6 Hz (20 minute cycle time)-- mu wave --8. Scheuermann's kyphosis develops over time during periods of bone growth (such as puberty). Magnetic resonance imaging characteristics of ischemic brain infarction over time in a canine stroke model: Choi,Sooyoung et al. com You can find the daily Schumann Resonance here. The relatively low signal-to-noise of MRS measurements has shaped the types of questions that it has been used to address. State-imposed internet blackouts. It also talks to preparations over these last few years, that many of us Sensitives have found hard to understand. Epub 2017 Dec 19. · A character played by an actor. For the first time in recorded history, the Schumann Resonance has reached frequencies of 36+ and is heading for maximum spike levels approaching 50 Hz. The thing to understand is, the "official normal" Schumann Resonance frequency for the Earth is something like 7. Laser Interstitial Thermal Therapy (LITT) is considered experimental, investigational or unproven for all indications. 83 Hz) by Time for you. a period of one time constant (t=¿ = 1) the output has decayed to y(¿) = e¡1y(0) or 36. The fMRI smackdown cometh Over the last few months, the soul searching over the shortcomings of fMRI brain scanning has escaped the backrooms of imaging labs and has hit the mainstream. Over many years of monitoring the earth’s magnetic field with the Global Coherence Monitoring System, which uses multiple recording stations strategically located around the earth, we have not observed any evidence for the claim that the frequencies of the Schumann resonances are changing beyond the normal diurnal variation. A time series of images is acquired wherein the fluctuation in intensity at a voxel over time indirectly reflects the dynamics of the local neural activity. The yearly mean for June 2002 through May 2003 is 7. 
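One way to test the "rising fundamental" claim quantitatively is to fit a linear trend to daily mean frequency values; a slope indistinguishable from zero, compared with the roughly ±0.5 Hz diurnal variation mentioned elsewhere in this text, supports the monitoring stations' conclusion. The sketch below uses synthetic stand-in data purely to show the calculation, not real measurements.

```python
# Minimal sketch: look for a long-term drift in daily mean fundamental
# frequency. The data are synthetic stand-ins, not real station records.
import numpy as np

rng = np.random.default_rng(0)
days = np.arange(3650)                                        # ten years of daily means
daily_mean_hz = 7.83 + 0.2 * rng.standard_normal(days.size)   # scatter, no built-in trend

slope, intercept = np.polyfit(days, daily_mean_hz, deg=1)
print(f"fitted drift: {slope * 365:.4f} Hz per year")         # ~0 for these data
```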
Earth's electrical resonance, also known as Schumann Resonance, is the result of the make up of Earth as a spherical resonator. The lightning flashes from these storms, about 50. decay time for energy Compare ----- time to oscillate one radian Q: Write down an expression for this ratio. And time is not our only notion of acceleration. The Dominus Cervix Energetica products are energized with and radiate pure light. \r\r528 Hz is the love frequency and is used as the undercurrent binaural drone at the speed of the Schumann resonance for this meditation. In 1952, a physicist named Winfried Otto Schumann proposed the existence of natural electromagnetic waves surrounding our planet. A phasor is a vector whose length represents the amplitude I 0 (see the diagram ). Schumann Resonance Variances. This is a big deal and yet there is no publicly provided information on how this spike is impacting the earth’s geomagnetic field and how this radically effects the human body. 83 Hz to 36+ Hz is a big deal. Second, they contain a great deal of information. GAIAS BREATH - Schumann Frequency Music 7. A German physicist, Winfried Otto Schumann, documented the Schumann Resonance in 1952. create resonance, how travel to ISIS is arranged and occurs, entry into the group including training and indoctrination, positive and negative experiences inside the group, and changes over time regarding commitment to the group and the militant jihadist ideology espoused by. The authors’ aim was to examine the regional anatomy of brain activation by cognitive tasks commonly used in hypoglycemia research and to assess the effect of acute hypoglycemia on these in healthy volunteers. Understanding the Schumann Resonance and its Effect on You! I look at the Schumann Resonance as the Earth's heartbeat. The originality of his work pushed at emotional, structural and. Thousands of products are available to collect from store or if your order's over £20 we'll deliver for free. 22 The method was then further refined with. The Schumann Resonance is the frequency of the electromagnetic field of the earth. Schumann resonances are global electromagnetic resonances: It is believed by the MSScience community - "to be generated and excited by lightning discharges in the cavity formed by the Earth's surface and the ionosphere. , 1980, Nickolaenko, 1997). Cardiovascular magnetic resonance imaging (CMR) has emerged as the gold standard to assess heart function. Source: Diamond Light World I have been fascinated to watch the virus-like quality of the current meme flooding new age circles which states that the Schuman Resonance is increasing. These resonances, or discharges, occur between the cavity formed by the Earth's. Journal of Veterinary Science(2018), 19 (1):137. " A team of physicists from the Moscow Institute of Physics and Technology (MIPT), however, have come. We show that due to the distribution of lightning. MRI is an imaging technique designed to visualise internal structures of the body using magnetic and electromagnetic fields which induce a resonance effect of hydrogen atoms. Like any tone produced by a musical instrument, the Schumann Resonance features a strong fundamental tone along with a number of subtler overtones. This was a fundamental discovery in physics. ru] From time to time these strange white anomalies turn up on the chart, drowning out all the other data for a while. 83 Hertz(Hz). 
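Since the phasor remark above has been separated from its diagram, a short worked example may help: a quantity I(t) = I₀·cos(ωt + φ) can be represented by the complex number I₀·e^{iφ}, and the time signal is recovered as the real part of the rotating phasor. The values below are purely illustrative.

```python
# Minimal sketch: a phasor I0*exp(i*phi) encodes amplitude and phase; the real
# part of the rotating phasor reproduces I(t) = I0*cos(w*t + phi).
import numpy as np

I0, phi, omega = 2.0, np.pi / 6, 2 * np.pi * 50.0   # illustrative values
phasor = I0 * np.exp(1j * phi)

t = np.linspace(0, 0.04, 1000)
from_phasor = np.real(phasor * np.exp(1j * omega * t))
direct = I0 * np.cos(omega * t + phi)

print("max difference:", np.max(np.abs(from_phasor - direct)))  # ~1e-16
```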
Cinema Studies at NYU offers historical and theoretical frameworks to understand the resonances of moving image culture, especially in the fraught global context of the 21st century. The images are very reproducible, enabling physicians to reliably compare findings over time with repeated scans. Now is not the time to be pointing fingers at who is responsible for what and who is doing what behind the scenes. Saturn's rings span a great distance with the inner D ring approximately 6,700 kilometers from Saturn's cloud tops to the fringes of the E ring, 480,000 kilometers out. 83 hertz or within shallow spikes of this depending on how many storms are on the planet at the time. But the Earth resonance frequency is not fixed in time. A German physicist, Winfried Otto Schumann, documented the Schumann Resonance in 1952. intensity of a sound decreases over time. The paper describes the development of passive, chipless tags for a novel indoor self-localization system operating at high mm-wave frequencies. As we are living on a planet with a specific dimension of the cavity Earth-ionosphere, there is a resonance frequency (named Schumann resonance) through which we absorb the vital energies of the Earth. It is supposed to be steady at 7. White Resonance Born in 2004 Influenced by Dj's and producers such as Fluke , Underworld, Chemical Brothers and Others. 3) Since 2014 the Schumann resonance has been rising. On January 31, 2017, for the first time in recorded history, the Schumann resonance reached frequencies of 36+ Hz. This method was compared with 2H measurement in carbons 5 and 2 using gas chromatography–mass spectrometry (hexamethylenetetramine [HMT]) and with in vivo 13C magnetic. Schumann resonance is a standing wave of electromagnetic field, which occurs when a space between the surface of the Earth and the ionosphere makes a resonant cavity for electromagnetic waves. Live schumann resonance chart. Change over time So far, you’ve made changes to sound by dragging the synth’s controls by hand. 1999) in the gastric lumen and to obtain three-dimensional images of the intragastric. 9 Brahms writing to Clara Schumann. fMRI [25–27] is a method for non-invasively measuring changes in brain activity over time (see for a recent review). (Stillness in the Storm Editor) Iona Miller wrote the following extensive research paper discussing the infamous Schumann Resonance. Evidence for his assertions comes from the Schumann Resonance. Over any time interval of duration 1/γ the value of E (t) changes by a factor of e −1 (from e −γt to e −γ (t+1/γ) = e −1 e −γt). Dyck, MD, and Jennifer A. In light of all of the above observations, we are proposing that the Schumann resonance may be the substrate for a radar-type extrasensory perception mechanism common to all living beings: like water bouncing off of rocks and other submerged objects, this non-specific frequency is absorbed and re-radiated in unique interference patterns by all. T1- and T2-weighted (T1W, T2W) imaging and fluid-attenuated inversion recovery (FLAIR) sequence MRI were. The bottom plot is the same as before. STAR is the flagship line for TAMA drums. It “was the first time since before the Civil War that the South was not solidly Democratic,” Goldfield says. The first and most important issue is that the massive non native EMF’s now in our “living zone” is blocking the Schumann resonance from being properly sensed by all living things. Schumann Resonance And The Time Speeding Up Phenomenon by Gregg Prescott July 24, 2016 in5d. 
Around 70% of women married before age 46, and the standardized marriage rates remained relatively stable during most of our study period. It is the mechanism by which virtually all sinusoidal waves and vibrations are generated. 83 Hz to 36+ Hz is a big deal. Berlin, Germany – 24 June 2020 – Berlin Cures Holding AG, a biotech company developing aptamers for autoimmune diseases, today announces that it is evaluating BC 007, a β1-adrenoceptor. 83 Hz is actually a natural electromagnetic resonance in the ionosphere produced by lighting storms and solar radiation. 1-3—Linda Correll Roesner has called attention to the composer's mosaic-like assembly of fragmentary ideas, suggesting a kind of. But don’t just write it down, make it your own. (Stillness in the Storm Editor) Iona Miller wrote the following extensive research paper discussing the infamous Schumann Resonance. Overview Cavernous angiomas belong to a group of intracranial vascular malformations that are developmental malformations of the vascular bed. In the last few years we would frequently see spikes to 20, 30 or 40. Local time is expressed in hours of Tomsk summer standard time (TSST). Known for his prolific output, records show that Schumann suffered from extreme auditory hallucinations during the later part of his life, claiming that he would hear entire symphonies in his head. Magnetic resonance imaging (MRI) to But bed rest, splints, bracing, or traction for long periods of time is not also be used to treat osteoarthritis. A German physicist, Winfried Otto Schumann, documented the Schumann Resonance in 1952. 1989), motility (Issa et al. Pay over time with KLARNA 24 Month Warranty 100% Secure Checkout The Schumann Resonance: Why We Need Earth’s Healing Vibrations. The surface plasmon resonance peak shifts from 561 to 572 nm, while the apparent color changes from red to black, which is partly related to the change in. ) The length of superconducting wire in the magnet is typically several miles. The most Schumann families were found in the USA in 1920. In the CNS, myelin is synthesized by oligodendrocytes, while in the PNS, myelin is synthesized by Schwann cells. 29-2035 Magnetic Resonance Imaging Technologists. Eight right-handed volunteers performed a set of cognitive tasks—finger tapping (FT), simple reaction time (SRT), and four-choice reaction time (4CRT)—twice during blood oxygen. It also talks to preparations over these last few years, that many of us Sensitives have found hard to understand. A meme is an idea or disinformation that spreads contagiously like a virus. There has been a lot of confusion in recent times (myself being included in this confusion) with the Geomagnetic activity and that of the Schumann's resonance. Over many years of monitoring the earth's magnetic field with the Global Coherence Monitoring System, which uses multiple recording stations strategically located around the earth, we have not observed any evidence for the claim that the frequencies of the Schumann resonances are changing beyond the normal diurnal variation. Nevertheless, because the Schumann resonance frequencies are defined by the dimensions of the Earth, many New Age proponents and alternative medicine advocates have come to regard 7. 83 Hz) by Time for you. 1999) in the gastric lumen and to obtain three-dimensional images of the intragastric. decay time for energy Compare ----- time to oscillate one radian Q: Write down an expression for this ratio. 
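The fragment at the end of the passage above ("decay time for energy … time to oscillate one radian") is the textbook definition of the quality factor. Written out under the usual assumption that the stored energy decays as E(t) = E₀e^{-γt} (so the energy decay time is 1/γ) and that one radian of oscillation takes 1/ω₀:

$$Q \;=\; \frac{\text{decay time for energy}}{\text{time to oscillate one radian}} \;=\; \frac{1/\gamma}{1/\omega_0} \;=\; \frac{\omega_0}{\gamma}.$$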
For thousands of years the Schumann Resonance or pulse (heartbeat) of Earth has been 7. If you do need treatment, options include antibiotics, sclerotherapy, surgery, laser therapy, and compression therapy. Netflix dramatic comedy Dead to Me doubled down on darkness in its second season. We analyze signals, mathematical functions, or perhaps scientific data, as measured in sequential time samples. 83 hertz which is a very low frequency. The biggest difference is that MRIs (magnetic resonance imaging) use radio waves and CT (computed tomography) scans use X-rays. Lightning produces electromagnetic fields and waves in all frequency ranges. Schumann and H. The term morphogenesis is a process in which the natural system produces and regulates the configuration of a material in space and over time. Let's discuss How Schumann resonance frequency affect our brain. Historians disagree over whether the pair ever acted on their feelings – but this quotation is pretty unequivocal…. legal status. The MRI scan itself doesn't hurt. SR is stable; it is NOT rising. The top shows the strength of the force that the gas disc exerts on the planets. Over many years of monitoring the earth’s magnetic field with the Global Coherence Monitoring System, which uses multiple recording stations strategically located around the earth, we have not observed any evidence for the claim that the frequencies of the Schumann resonances are changing beyond the normal diurnal variation. Not only do wireless waves affect the Schumann resonance which is what everything on earth evolved with, and at what our alpha wave rhythms resonate, but on top of that, 5G frequencies affect thought patterns and the mind. We investigated whether the accuracy and reproducibility of real-time 3-dimensional echocardiography (RUDE) would make this modality more feasible for serial follow-up of LV measurements. The project involves development, validation, and application of new medical image analysis algorithm where cortical thickness is measured from serial brain magnetic resonance imaging (MRI) with improved efficiency and greater sensitivity. It is time you realize that the pulse of the Earth, the Schumann resonance, is coded for in your brain and it is linked to every gene in your body. Schumann resonances are global electromagnetic resonances: It is believed by the MSScience community - "to be generated and excited by lightning discharges in the cavity formed by the Earth's surface and the ionosphere. TSST = UTC + 7 hours. Like any tone produced by a musical instrument, the Schumann Resonance features a strong fundamental tone along with a number of subtler overtones. 33% in the conventional arm, RR: 1. Schumann resonance transients which propagate around the globe can potentially generate a correlated background in widely separated gravitational-wave detectors. scientist Earle Williams (1992) constructed a powerful argument that links Schumann resonances to convection and ultimately to widespread tropical and/or global temperature. MRI scan – magnetic resonance imaging What is an MRI scan? An MRI (Magnetic Resonance Imaging) scanner uses magnetic fields, radio waves and a computer to take pictures of the inside of your body. First some historical context to SR research is given, followed by some theoretical. 
[Figure: published permeability (µ' and µ'') of Fair-Rite Material 61 versus frequency in MHz, from Payne, "Self-Capacitance of Toroidal Inductors with Ferrite Cores".] The equation above assumes that there is a standing wave on the conductor so that it resonates in the same way as a transmission line. The regular annual. The image above shows the Schumann Resonance of about 7. It is programmed, encouraged by advertisers, and is a driving ego force that keeps people in an insanity loop. Using chaos as a tool, scientists discover new method of making 3D-heterostructures Date: June 23, 2020 Source: DOE/Ames Laboratory Summary: Scientists have developed a new approach for generating. Over many years of monitoring the earth's magnetic field with the Global Coherence Monitoring System, which uses multiple recording stations strategically located around the earth, we have not observed any evidence for the claim that the frequencies of the Schumann resonances are changing beyond the normal diurnal variation. In another post, a while back, I said that I had failed to note anything significant about any particular frequency, mentioning the generally cited Schumann Resonance frequency of 7. Through the harmonic physics of atomic resonance and damping, Nature has engineered DNA with its own eggshell container to protect the geometric resonance of life over time. The body map: the Spiral. 528 Hz is the love frequency and is used as the undercurrent binaural drone at the speed of the Schumann resonance for this meditation. Undeniably, there is a quickening going on and while many people feel this in their everyday lives, we have been looking for proof of this phenomenon. Many of his best-known piano pieces were written for his wife, the pianist Clara Schumann. This is a big deal and yet there is no publicly provided information on how this spike is impacting the earth’s geomagnetic field and how this radically affects the human body. This resonance is 7. Welcome To ARRT. 83 Hz is actually a natural electromagnetic resonance in the ionosphere produced by lightning storms and solar radiation. 22 The method was then further refined with. But these must be scrupulously defined, in line with the values promoted and followed over time, embraced until the end, in an assured and proud way, aligned with the brand’s positioning. Classic Sound and Feel Each Key Series saxophone is hand-finished to enhance its design, emphasizing consistent air resistance, uniformly balanced keys, and lively resonance throughout its entire range. Anthony Kim1 MD, FAANS, Alex Ring BS, Toni Jin2 MD, Robert Isenhart, MSC, Alex. , risk factors) that treatment would prevent the development of a speech, language, communication, or feeding and swallowing disorder; reduce the degree of. Förster resonance energy transfer is named after the German scientist Theodor Förster. You'll see below this is no longer true. (Some losses do occur over time due to the vanishingly small resistance of the coil. The Schumann Resonances are a set of spectrum peaks in the ELF portion of the Earth's electromagnetic field spectrum. In the last few years we would frequently see spikes to 20, 30 or 40. The produced magnetic field density depends on the applied voltage and the design parameters of the inductor such as: configuration of the heater yoke, winding arrangements and number of turns, air gap between the heater and strip surface, and pole cross-section. 
We investigated whether the accuracy and reproducibility of real-time 3-dimensional echocardiography (RT3DE) would make this modality more feasible for serial follow-up of LV measurements. over time will. To keep loss-resistance low over time, use decent quality connectors, with teflon centers - not cheap plastic that completely melt away when a soldering iron is in the same room. At 4 pm we had another very strong peak that came close to 70 Hz again. The Schumann resonances are fairly broad, unlike man-made signals, which are normally nice and sharp. The Schumann Resonance increasing they say. The xylophone has a brittle, metallic sound, while the marimba is somewhat more mellow or wooden to the listener. The Schumann Resonance measures the number of lightning discharges on planet Earth over a given time frame. GCI conducts groundbreaking research on the. The Floyd protests carry particular resonance in the banlieues, high-rise estates where friction between the police and residents, many of them of immigrant origin, frames daily life. Schumann Resonance. The geological time scale measures time on a scale involving four main units: An epoch is the smallest unit of time on the scale and encompasses a period of millions of years Chronologically, epochs are clumped together into larger units called periods. Neuroscientists have long held that the brains of children thin down over time. Earth's magnetic field, which protects the planet from huge blasts of deadly solar radiation, has been weakening over the past six months, according to data collected by a European Space Agency. Shop for Schumann: Complete Piano Trios/Complete String Quartets/ from WHSmith. 8, 14, 20, 26, 33, 39 and 45 Hz, with a daily variation of around ±0. The surgery may need to be repeated later if the. Phase contrast angiography relies on dephasing the moving spins submitted to a bipolar gradient. Schumann resonance has been a natural and constant frequency of planet Earth, pulsating exactly at 7. 94/s is the resonance frequency. FFT resolution 10,5 mHz, scroll time 40 seconds. I keep saying that illness is a frequency. 21 The first study that reported natural abundance liver glycogen content in humans used a single 10‐cm 13 C‐only coil and a 2. Annual and semiannual variations detected previously in the relative amplitudes of Schumann resonances (SR) in the first three modes are confirmed by the extended data series applied here. Analogs to FRET are: BRET , Bioluminescence Resonance Energy Transfer, where the donor emitts light through luminescence (typically Luciferase ). Also the cars, passing 25 m away from the sensor, leave their trace in the area below 10 Hz. This review is aimed at the reader generally unfamiliar with the Schumann Resonances. Current Schumann Resonance, Current Schumann Resonance 2019 2020, Current Spiritual Energies, Daily Ascension Energy Update, Earth Ascension 2019 2020,. 6 Application-Forced Spring Mass Systems and Resonance In this section we introduce an external force that acts on the mass of the spring in addition to the other forces that we have been considering. — When the coronavirus pandemic triggered shutdowns and stay-at-home orders in March, Daniel Byrd remembers having one immediate thought, one big regret. Danielle LaPorte For the past seven or so years we’ve operated my company on this strategy: that visibility lifts the ship, that exposure translates into dollars. Herbert König demonstrated a connection between Schumann Resonance and brain rhythms. 
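To make the forced spring-mass discussion above concrete, here is a minimal numerical sketch that drives m·x'' + c·x' + k·x = F₀·cos(ωt) at several frequencies and shows the steady-state amplitude peaking when the drive is near the natural frequency. The mass, damping, stiffness, and drive values are invented for illustration, not taken from this text.

```python
# Minimal sketch: steady-state amplitude of a driven, damped spring-mass
# system m*x'' + c*x' + k*x = F0*cos(w*t). Parameter values are assumed.
import numpy as np
from scipy.integrate import solve_ivp

m, c, k, F0 = 1.0, 0.2, 4.0, 1.0           # natural frequency ~ sqrt(k/m) = 2 rad/s

def rhs(t, y, w):
    x, v = y
    return [v, (F0 * np.cos(w * t) - c * v - k * x) / m]

for w in (1.0, 2.0, 3.0):                  # drive below, at, and above resonance
    sol = solve_ivp(rhs, (0.0, 200.0), [0.0, 0.0], args=(w,), max_step=0.01)
    steady = sol.y[0][sol.t > 150.0]       # discard the initial transient
    print(f"drive {w:.1f} rad/s -> steady-state amplitude ~ {steady.max():.2f}")
```

The amplitude at resonance is limited only by the damping term (F₀/(cω) for this choice of parameters), which is why the middle case comes out roughly ten times larger than the off-resonance cases.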
gov brings you the latest news, images and videos from America's space agency, pioneering the future in space exploration, scientific discovery and aeronautics research. The Schumann Resonance measures the number of lightning discharges on planet Earth over a given time frame. After a period of one time constant (t = τ) the output has decayed to y(τ) = e⁻¹·y(0), or 36.8% of its initial value. pH means hydrogen potential (amount of H+ protons), which indicates the acidity, neutrality, or alkalinity of an aqueous solution. It was considered an anomaly when in 2014 this frequency rose from its usual 7. ) The length of superconducting wire in the magnet is typically several miles. 8 Hz, 27 Hz, 34 Hz and 39Hz. MRI is an imaging technique designed to visualise internal structures of the body using magnetic and electromagnetic fields which induce a resonance effect of hydrogen atoms. First, let's review what an MR scanner actually measures - 'magnetic resonance. Whereas MRI is used to image various parts of the body - bones and joints, soft tissues, muscles, internal organs, and blood vessels, MRA is specifically intended to show the arteries and veins. In the United States, falls are the leading cause of accidental death and the 7th leading cause of death in people age 65 or over. Filter sweeps take on a very different character with different resonance settings. 8Hz and its harmonics at 14, 20, 26Hz, etc. 83 and 8 Hz. Since there is a concentration of lightning activity during the afternoon in Southeast Asia, Africa and America there are Schumann Resonance amplitude peaks at 10, 16 and 22 UT (universal time), with activity over America around 22 UT being dominant. Physical stamina. Schumann resonance is a standing wave of electromagnetic field, which occurs when a space between the surface of the Earth and the ionosphere makes a resonant cavity for electromagnetic waves. Functional magnetic resonance imaging, or fMRI, is a technique for measuring brain activity. Your doctor may use Ultrasound or Body MRI to diagnose and evaluate your condition. The average in vivo release rate of LNG is approximately 14. Figure 4 - Dependence of frequencies of the Schumann resonance in hertz on the local time. Rabi working in the Pupin Physics Laboratory in New York City, observed the quantum phenomenon dubbed nuclear magnetic resonance (NMR). These resonances, or discharges, occur between the cavity formed by the Earth's. Lightning produces electromagnetic fields and waves in all frequency ranges. When reflecting on the Schumann Resonant frequencies over the last 12 months (2016-2017), there has been an upward trend in the resonances, with many stations around the world recording readings in the 40s & even 50 Hz frequencies. Researchers examine how Parkinson’s disease alters brain activity over time Tracking neural changes could help researchers test therapies that slow disease progression. 83Hz over a very long period of evolution. • Resonance knob to emphasize or suppress portions of the signal above or below the defined cutoff frequency. This will ultimately explain the number’s importance. A series of comparisons between the waves observed in the SAR and those in the numerical simulations are shown in Fig. The Schumann wave has also been described as the heartbeat of the earth. 
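Written out as a worked equation (standard first-order exponential decay, consistent with the time-constant figures quoted earlier in this passage):

$$y(t) = y(0)\,e^{-t/\tau}, \qquad y(\tau) = e^{-1}\,y(0) \approx 0.368\,y(0), \qquad y(2\tau) = e^{-2}\,y(0) \approx 0.135\,y(0).$$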
In the Speech Production and Articulation kNowledge group at USC, we use real-time Magnetic Resonance Imaging and other technologies to investigate the dynamics of speech production -- how the vocal tract is shaped over time -- to inform our knowledge of phonological structure and its cognitive representation. Magnetic resonance technologists work in large urban hospitals and clinics. Resonance occurs widely in nature, and is exploited in many man made devices. The Earth will be stopped, and in 2 or 3 days it will start turning again in the opposite direction. Scheuermann's kyphosis develops over time during periods of bone growth (such as puberty). 83 Hz - Schumann Resonance. Earth's magnetic field has weakened by 15 per cent over the last 200 years. Dependences of the amplitudes of Schumann resonance on local time. This is a big deal and yet there is no publicly provided information on how this spike is impacting the earth’s geomagnetic field and how this radically effects the human body. #schumann #resonance #frequency. Diagnostic services can also be used to determine. 83 and 8 Hz. ” under different circumstances, but given other issues. The Schumann Resonances are generated by lightning strikes, vary throughout the day and over time, change unpredictably and influenced by solar flares. 1, case (d)). Second, they contain a great deal of information. Robert Schumann (8 June 1810 - 29 July 1856) is widely regarded as one of the greatest composers of the Romantic era. These resonances, or discharges, occur between the cavity formed by the Earth's surface and the ionosphere. 83 Hz - Schumann Resonance. , fibroadenomas) and disease (breast cancer). This graph ends at 3am of 5/6 (EST, USA), so there is no reporting past that time period. The Schumann Resonance Spectrogram Chart. 8 to 36+ hertz. SOCIETY FOR BRAIN MAPPING AND THERAPEUTICS ABSTRACT for AUTISM TITLE: Noninvasive EEG-EKG guided trans-magnetic stimulation at natural resonance frequency in children with autism: randomized double-blinded pilot study K. The official description from Wikipedia is- "The Schumann resonances (SR) are a set of spectrum peaks in the extremely low frequency (ELF) portion of the Earth's electromagnetic field spectrum. Improve your Health, Increase your Quality of Life, get Protection from EMF, and get Grounded. The models used provide overall frequency variations of * 1. For example, suppose that the mass of a spring/mass system is being pushed (or. To keep loss-resistance low over time, use decent quality connectors, with teflon centers - not cheap plastic that completely melt away when a soldering iron is in the same room. The current frequency of the Earth's resonance is estimated at over 11 Hz. This study describes magnetic resonance imaging (MRI) results and changes in lateral ventricular size over time in a canine ischemic stroke model. RESONANCES Catalogue + CD Les Musées de la Ville de Paris ISBN-2-87900-875-1 ISSN 1272-2103 12€ If you're in Paris now or coming here in the foreseeable future, a visit to Russian émigré sculptor Ossip Zadkine's former loft on the rue d'Assas near the Jardin du Luxembourg is certainly worth considering. This energy can increase or decrease at times, and many think it affects our consciousness. The Schumann Resonance measures the number of lightning discharges on planet Earth over a given time frame. 8 and 1 Hz, respectively. The spectrum figures clearly show the various Schumann resonances and how the resonances change over time. 
These resonances are inherently stable over time and space. We performed 2DE and RT3DE and cardiac magnetic resonance imaging (MRI) in 50 patients with previous infarction and varying degrees of LV function (44 men; 61. The Schumann resonances (SR) are a set of spectrum peaks in the extremely low frequency (ELF) portion of the Earth’s electromagnetic field spectrum. The most Schumann families were found in the USA in 1920. In comparison to single-photon emission CT or positron emission tomography, the established methods for clinical evaluation of myocardial perfusion, MR MPI offers superior spatial resolution with the potential to visualize sub-endocardial defects, and does. 83 Hertz(Hz). Schumann Resonance figure that many people hear about is a number made popular by researcher Robert Beck whose work on ELF signals, Earth resonances, and their effect on brain wave frequencies was presented at a U. 3 Hz, and higher. k u c i n g p i n k Body {background-color: #000000; cursor. Classical music fans know the names Mendelssohn and Schumann. When we move away from this biologically attuned resonance field, it is highly likely to result in disorganization of our electrophysiology. The official description from Wikipedia is: "The Schumann resonances (SR) are a set of spectrum peaks in the extremely low frequency (ELF) portion of the Earth's electromagnetic field spectrum. In this case, the system oscillates as it slowly returns to equilibrium and the amplitude decreases over time. Pay over time with KLARNA 24 Month Warranty 100% Secure Checkout The Schumann Resonance: Why We Need Earth’s Healing Vibrations. 83 hz, but since the 1960's, the Schumann Resonance has been steadily on the rise. tones rise and fall in frequency and amplitude in imitation of the frequency and amplitude variations of vocal resonances over the course of an utterance. Schumann resonances are global electromagnetic resonances, generated and excited by lightning discharges in the cavity formed by the Earth's surface and the ionosphere. The paper describes the development of passive, chipless tags for a novel indoor self-localization system operating at high mm-wave frequencies. Create a Network. Her works for the most part do not sound much like Robert's except in the most general way; at times she harks back to Mendelssohn's gentle lyricism, and her absorption of Chopin, as compared with that of her husband's, is oriented less toward daring smudges of harmonic color and more toward pianism. The official description from Wikipedia is… "The Schumann resonances (SR) are a set of spectrum peaks in the extremely low frequency (ELF) portion of the Earth's electromagnetic field spectrum. GCI conducts groundbreaking research on the. The former San Antonio mayor and secretary of housing and urban development failed to get traction in the 2020 Democratic primary, but his campaign was. Diagnosing multiple sclerosis is anything but easy. August 2012: (A) Sagittal enhanced scan shows hypothalamus and infundibulum enlarged significantly (long arrow), with no significant pituitary stalk shift; the pituitary is swollen and the meninx is thickened and significantly enlarged, presenting the “dural tail sign” (short arrow); and (B) Coronal. 83 and 8 Hz. In the CNS, myelin is synthesized by oligodendrocytes, while in the PNS, myelin is synthesized by Schwann cells. It was considered an anomaly when in 2014 this frequency rose from its usual 7. 
We are used to viewing our planet as a scrumptious gem of molten rock and water for humanity to devour at the expense of our Earth’s health and wellbeing. The biggest difference is that MRIs (magnetic resonance imaging) use radio waves and CT (computed tomography) scans use X-rays. Observed over time, 10 pT bursts appear over a 1 pT background [12], at a rate of ≈0.5 Hz. Annual and semiannual variations detected previously in the relative amplitudes of Schumann resonances (SR) in the first three modes are confirmed by the extended data series applied here. Cardiovascular magnetic resonance imaging (CMR) has emerged as the gold standard to assess heart function. has increased dramatically while the occurrence of illnesses has remained constant. We performed 2DE and RT3DE and cardiac magnetic resonance imaging (MRI) in 50 patients with previous infarction and varying degrees of LV function (44 men. It is important to measure Schumann resonance and compare the level inside and outside the mine. The bi-analyte SERS (BiASERS) technique largely improves the accuracy and reliability of SM SERS detection over the mono-analyte approach by avoiding the use of ultra-low-concentration samples and the resulting inaccuracy and very limited events. On 1/31/2017, for the first time in recorded history, the Schumann Resonance has reached frequencies of 36+. , the base Schumann Resonance) is introduced to human biology, continue to emerge in controlled scientific experiments? The Schumann Resonance Spectrogram Chart The Schumann Resonance Chart displays data from the magnetic field detector to monitor the resonances occurring in the plasma waves constantly circling the earth in the ionosphere. Tracking brain changes in people with Parkinson’s: A new study has found that neural activity in certain brain areas declines over time in individuals with Parkinson’s. “I wish we had. An example of progressive FA decrease along the pyramidal tract below the primary lesion over time is shown in fig 1. All life has adapted over time to live in the electromagnetic frequencies of the Schumann Resonance, somewhere between 7. Note however, that unlike the natural speech signal, sinewave speech does not. It is supposed to be steady at 7. Physical stamina. Monitoring the intensity and frequencies of the lightning-induced ELF SCHUMANN RESONANCE could help monitor changes in the Earth's climate over time. A mammogram image has a black background and shows the breast in variations of gray and white. TSST = UTC + 7 hours. There is one thread to this […]. People have changed over time, growing ever more distant and isolated from others – while at the same time finding new ways and technologies that let individuals connect and feel with others. 8 and 1 Hz, respectively. This energy, circling as a wave between the ionosphere and the earth, bumps into itself amplifying frequencies and turning them into resonant waves. Schumann's successor, Dr. Air Force’s Communications/Navigation Outage Forecast System (C/NOFS) satellite. It’s the measurement of 7. The official description from Wikipedia is- “The Schumann resonances (SR) are a set of spectrum peaks in the extremely low frequency (ELF) portion of the Earth’s electromagnetic field spectrum. Through the harmonic physics of atomic resonance and damping, Nature has engineered DNA with its own eggshell container to protect the geometric resonance of life over time. 
The paper describes the development of passive, chipless tags for a novel indoor self-localization system operating at high mm-wave frequencies. Either as. The Schumann family name was found in the USA, the UK, Canada, and Scotland between 1871 and 1920. When the Earth stops its rotation and the resonant frequency reaches 13 cycles we will be at a zero point magnetic field. In 1880 there were 122 Schumann families living in New York. Dizziness and vertigo ([Table 1][1]) are common clinical complaints. If we push a child on a swing once during each period, and we always push in the same direction, we will increase the amplitude of the swing, until the positive work we do on the swing equals the negative work done on the swing by the frictional forces. Over many years of monitoring the earth's magnetic field with the Global Coherence Monitoring System, which uses multiple recording stations strategically located around the earth, we have not observed any evidence for the claim that the frequencies of the Schumann resonances are changing beyond the normal diurnal variation. Anthony Kim1 MD, FAANS, Alex Ring BS, Toni Jin2 MD, Robert Isenhart, MSC, Alex. ' A strong magnetic field is placed across the tissue along the direction of the bore of the magnet. Well, it seems like that, and in this article, I will show you what I see when looking at data. Introduction. The present study is based on simultaneous measurements of the atmospheric electric potential gradient (PG) and Schumann resonances at Nagycenk station (Hungary) from 1993 to 1996. We feel the Ascension Energies will keep increasing over time and will keep you informed as Guided. In general, molecules with multiple resonance structures will be more stable than one with. ) The power lost in the resistor is the RMS voltage squared divided by the resistance ("v-squared over R"). Note however, that unlike the natural speech signal, sinewave speech does not. So, if the fundamental Schumann Resonance is truly rising (and it isn't), why does this Earth-human coupling phenomenon, which occurs reliably and demonstrably every time the 7 Hz to 8 Hz range of frequencies (i. Your breathing rate influences your HRV and Resonance score, which is clearly displayed in real-time. They become "shallow breathers," using only a small portion of their lungs' capacity. Real-time Schumann Resonance. Joel Rosenberg, a U. Tracy, MD Abstract Chronic inflammatory demyelinating polyradiculoneuropathy (CIDP) is probably the best recognized progressive immune-mediated peripheral neuropathy. 83 Hz - Schumann Resonance. 8, 14, 20, 26, 33, 39, and 45 Hz. At the same time there is an increase in the resonant frequency of the Earth (Schumann Resonance). A meme is an idea or disinformation that spreads contagiously like a virus. The scientist R, Wever from the highly acclaimed Max Planck Institute for behavioural physiology in Germany did extensive research on the effects of the Schumann resonance on the human brain in the sixties which has been corroborated with many other scientific experiments since. § 106 and 17 U. This guidance speaks to the Lightworkers, Guides, Teachers, Creators and Soul-led, and asks of us all not to back away, but to SHINE. For an example, they point to the human brain's electromagnetic waves that are synchronized to the Schumann frequency. METHODS AND MATERIALS Two thousand thirty-seven MRI scans from 58 women divided into young, middle-aged, and older groups were screened. The local time is expressed in the hours of Tomsk Summer Daylight Time. 
We analyze signals, mathematical functions, or perhaps scientific data, as measured in sequential time samples. ) The power lost in the resistor is the RMS voltage squared divided by the resistance ("v-squared over R"). radian's time. This in turn caused hair follicles to shrink. Inner power & higher self awakening - Schumann Resonance Music - Schumann Frequency (7. Magnetic resonance spectroscopy (MRS) can be used in vivo to quantify neurometabolite concentration and provide evidence for the involvement of different neurotransmitter systems, e. Neuroscientists have long held that the brains of children thin down over time. Classical music fans know the names Mendelssohn and Schumann. These were correlated with demographic, clinical, and radiological data to better identify the disease risk features. Improve your Health, Increase your Quality of Life, get Protection from EMF, and get Grounded. Today’s maximum is Power 98 as previously reported. Around 70% of women married before age 46, and the standardized marriage rates remained relatively stable during most of our study period. With R ≠ 0 [ edit ] When R ≠ 0 and the circuit operates in resonance. It is these resonance properties of this global spherical capacitor or resonator) that Schumann predicted over 40 years ago. Single-molecule (SM) detection in surface-enhanced Raman spectroscopy (SERS) demonstrates ultra-high sensitivity and boosts wide applications. Time series data consist of measurements of a variable over time. The Schumann Resonance – The Earth's Heartbeat. When we move away from this biologically attuned resonance field, it is highly likely to result in disorganization of our electrophysiology. At the time, they had no idea that the date for their "micro-wedding" would come just over a week after protests sprung up around the country in the wake of George Floyd's death. December 30th 2015 the Schumann Resonance spiked to over 50 Hz. May interview patient, explain MRI procedures, and position patient on examining table. Learn more about his life and works in this article. Marciak-Kozłowska, J. $\gamma^2 = 4\omega_0^2$ is the Critically Damped case. The Schumann Resonance is a global electromagnetic resonance named by physicist Winfried Otto Schumann who predicted it mathematically in 1952. Introduction. Humans, animals and plants have become accustomed to this frequency of 7. the piece works as a six-minute celebration every time you watch it. 83 Hz - Schumann Resonance. 2 out of 5 stars 948. It's increasingly hard to find a web page dedicated to the sales of alternative medicine products or New Age spirituality that does not cite the Schumann resonances as proof that some product or service is rooted in science. Understanding the Schumann Resonance and its Effect on You! I look at the Schumann Resonance as the Earth’s heartbeat. Mattis’s comments have a special resonance abroad -- until his resignation in 2019 he traveled widely to persuade allies that America’s institutions were strong enough to withstand Trump. Electronic Fuel Injection Works and system AMNIMARJESLOW GOVERNMENT 91220017 LOR FUEL LIC CLEAR EMISSIONS 02096010014 INJECT LJBUSAF XWAM ## \$# How a fuel injection system works For the engine to run smoothly and efficiently it needs to be provided with the right quantity of fuel /air mixture according to its wide range of demands. Shop for Schumann: Complete Piano Trios/Complete String Quartets/ from WHSmith. 
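For context on the critically damped criterion quoted above, the standard classification for an oscillator of the form $\ddot{x} + \gamma\dot{x} + \omega_0^2 x = 0$ (the convention implied by the quoted formula) is:

$$\gamma^2 < 4\omega_0^2 \;\;\text{(underdamped: decaying oscillation)}, \qquad \gamma^2 = 4\omega_0^2 \;\;\text{(critically damped)}, \qquad \gamma^2 > 4\omega_0^2 \;\;\text{(overdamped: no oscillation)}.$$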
Single-molecule (SM) detection in surface-enhanced Raman spectroscopy (SERS) demonstrates ultra-high sensitivity and boosts wide applications. Explore the latest in brain science including dementia, traumatic brain injury, cerebellar stroke, restless arms, herpes pain, & more. Eight right-handed volunteers performed a set of cognitive tasks—finger tapping (FT), simple reaction time (SRT), and four-choice reaction time (4CRT)—twice during blood oxygen. Spectrograms of the data (i. Filter sweeps take on a very different character with different resonance settings. 2K by immersing it in liquid helium. Historians disagree over whether the pair ever acted on their feelings – but this quotation is pretty unequivocal…. This BMV CD contains three powerful 15 minute brainwave entrainment sessions that are specifically engineered to tune your brainwaves to the SCHUMANN RESONANCE (7. The top chart shows the daily average (mean) values of the fundamental (first-order) Schumann resonance frequency from October 2001 through May 2009. Resonance structures are used when a single Lewis structure cannot fully describe the bonding; the combination of possible resonance structures is defined as a resonance hybrid, which represents the overall delocalization of electrons within the molecule. A meme is an idea or disinformation that spreads contagiously like a virus. The opposite then must be true that as our speed decreases, time speeds up. 8), or 69 ms. In general, molecules with multiple resonance structures will be more stable than one with. Increased alpha activity can be seen later in the waveforms, starting at around the time the blood-pressure wave reaches the brain. At 4 pm we had another very strong peak that came close to 70 Hz again. The scientist R. Wever from the highly acclaimed Max Planck Institute for Behavioural Physiology in Germany did extensive research on the effects of the Schumann resonance on the human brain in the sixties which has been corroborated with many other scientific experiments since. The pH is measured on a logarithmic scale, which means that decreasing the pH by 1 point makes the solution ten times more acidic. The Floyd protests carry particular resonance in the banlieues, high-rise estates where friction between the police and residents, many of them of immigrant origin, frames daily life. Myelin is a multilamellar membrane structure surrounding axons in both the CNS and PNS that facilitates nerve conduction. WARNING: Remember Folks, this information is for educational purposes only and should never be used to make a decision regarding proposed medical treatment intervention(s). 83 Hertz (Hz). He determined that the frequency of these electromagnetic waves is very low, ranging from 7. 83 Hz or the electromagnetic frequency of our planet, to be exact. The Earth will be stopped, and in 2 or 3 days it will start turning again in the opposite direction. This takes time. The resonance shape gets little modified occasionally, mostly due to perturbations by the planet Jupiter, but it always restores briefly. The surgery may need to be repeated later if the. So, if the fundamental Schumann Resonance is truly rising (and it isn't), why does this Earth-human coupling phenomenon, which occurs reliably and demonstrably every time the 7 Hz to 8 Hz range of frequencies (i. After one time constant the output has decayed to 36.8% of its initial value; after two time constants the response is y(2τ) = 0.135·y(0). It also talks to preparations over these last few years, that many of us Sensitives have found hard to understand. 
Observed over time, 10 pT bursts ap-pear over a 1 pT background [12], at a rate of ˇ0:5 Hz. · A character played by an actor. It “was the first time since before the Civil War that the South was not solidly Democratic,” Goldfield says. Diffusion tensor imaging (DTI) and proton magnetic resonance spectroscopy (MRS) are potentially sensitive and quantitative methods of detection in. Over many years of monitoring the earth’s magnetic field with the Global Coherence Monitoring System, which uses multiple recording stations strategically located around the earth, we have not observed any evidence for the claim that the frequencies of the Schumann resonances are changing beyond the normal diurnal variation. 528 Hz love frequency based meditation; with a Schumann resonance, binaural entrainment vibration rate of 7. Over time, they found, the stem cells accumulated genetic errors that led them to not be able to rejuvenate any longer. Evidence for his assertions comes from the Schumann Resonance. It’s hard to keep abreast of every bad actor and natural disaster impacting the internet, but O. ” under different circumstances, but given other issues. Over many years of monitoring the earth's magnetic field with the Global Coherence Monitoring System, which uses multiple recording stations strategically located around the earth, we have not observed any evidence for the claim that the frequencies of the Schumann resonances are changing beyond the normal diurnal variation. Schumann Resonances By Matt Castle • 04 December 2018 Three times a day, every day⁠—roughly 9am, 2pm, and 8pm Coordinated Universal Time⁠—an extremely low frequency electromagnetic pulse races around the Earth, reverberating between the lower edge of the ionosphere and the planetary surface. Dizziness and vertigo ([Table 1][1]) are common clinical complaints. 1T NMR spectrometer. In another post, a while back, I said that I had failed to note anything significant about any particular frequency, mentioning the generally cited Schumann Resonance frequency of 7. But instead of creating images of organs and tissues like MRI, fMRI looks at blood flow in the brain to detect areas of activity. December 30th 2015 the Schumann Resonance spiked to over 50 Hz. with Schumann Resonances, with an entrainment of the subject's EEG by the healer's resonance standing waves, and with eventual phase coupling seen between the healer and subject-paired EEGs. The bottom plot is the same as before. Neither has changed significantly over time therefore the Schumann Resonance hasn't changed significantly over time. Precise measurements of root system architecture traits are an important requirement for plant phenotyping. It is important to measure Schumann resonance and compare the level inside and outside the mine. The most Schumann families were found in the USA in 1920. An example of progressive FA decrease along the pyramidal tract below the primary lesion over time is shown in fig 1. These include increasing how strong your muscles are by doing things such as lifting weights. 1882 - Nikola Tesla discovered the Rotating Magnetic Field in Budapest, Hungary. • Strength stage, to give a subtle change to the filter strength. When we move away from this biologically attuned resonance field, it is highly likely to result in disorganization of our electrophysiology. This resonator is the result of the conductivity of Earth, the insulative properties of the lower atmosphere and a highly charged and conductive. 
If it is not causing a problem, you and your doctor may decide to watch it over time. Schumann Resonance Today Peaks: 4/30 17:00 UTC – The amplitude at 70 Hz lasted until 11 am UTC, after which a relative calm period of about 4 hours followed, until to 3 pm UTC. 1T NMR spectrometer. Cocobolo wood accents on touch pieces and braces enhance the warm presence and unique feel of the Key Series saxophones. Hence, the resonance stabilizes planet orbits, because if one planet gets a little late to the meet-point, it is attracted by the other planet. The possibility that the electrical components of the time-varying electrical potentials produced by the brain may occasionally overlap and become synchronous with ultra-low frequency (ULF) electromagnetic. For thousands of years, the Schumann resonance biologic heartbeat of Earth has been 7. The official description from Wikipedia is- "The Schumann resonances (SR) are a set of spectrum peaks in the extremely low frequency (ELF) portion of the Earth's electromagnetic field spectrum. Hammering on the wooden keys causes the impact to resonate through the tubes. The global electrical circuit is established by the naturally occurring presence of a thin veneer of insulating air (our atmosphere) sandwiched between the conductive Earth and the conductive mesosphere/ionosphere (e. The scientist R, Wever from the highly acclaimed Max Planck Institute for behavioural physiology in Germany did extensive research on the effects of the Schumann resonance on the human brain in the sixties which has been corroborated with many other scientific experiments since. com You can find the daily Schumann Resonance here. 83 Hz to 36+ Hz is a big deal. The excess was just over the top. The first principle refers to an existing resonance frequency which connects us to the vitalizing and beneficial energies of the Earth. Its frequency range is based on the work of Winifried Otto Schumann (1877-1974), a German scientist whose research is applied in the evaluation of the effect of climate change and global warming. You can see how Schumann families moved over time by selecting different census years. Then, you surf the waves of life elegantly and experience beauty everywhere. #schumann #resonance #frequency. The objective of this study was to further explore the cartilage volume changes in knee osteoarthritis (OA) over time using quantitative magnetic resonance imaging (qMRI). Ankermüller, a physician, immediately made the connection between the Schumann resonances and the alpha rhythm of brain waves. As a result, Resonance’s Best Cities rankings don’t just consider cities as places to live, work or visit—they take a more holistic approach using a wide range of factors that show positive correlations with attracting employment, investment and/or visitors—ranging from the number of culinary experiences, museums, and sights and landmarks each city offers, to the number of Fortune 500. The branding landscape is fast-paced and continually changing, yet is a stimulating and exciting environment in which to work. It is characterized by a symmetrical, motor-. A phasor is a vector whose length represents the amplitude I 0 (see the diagram ). Local time is expressed in hours of Tomsk summer standard time (TSST). MrMBB333 Space Weather Earthquake Forecasting, Earths Magnetic Field, Ionosphere, Ultra Violet Light, UV, #MrMBB333. Nothing’s perfect so over time every oscillator loses energy. The Earth will be stopped, and in 2 or 3 days it will start turning again in the opposite direction. 
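To make the time-domain-to-frequency-domain step mentioned above concrete, here is a minimal, self-contained sketch (my own illustration, not taken from any of the sources quoted here). It builds a synthetic signal containing a 7.83 Hz component plus noise and reads the peak frequency off its FFT power spectrum; the sampling rate and noise level are made-up values.

```python
# Synthetic example only: estimate the dominant ELF frequency with an FFT.
import numpy as np

fs = 100.0                                   # sampling rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)                 # one minute of samples
rng = np.random.default_rng(1)
signal = np.sin(2 * np.pi * 7.83 * t) + 0.5 * rng.standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(signal)) ** 2  # power spectrum
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
print(f"peak at {freqs[spectrum.argmax()]:.2f} Hz")   # close to 7.83 Hz
```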
Gerbner and Gross assert: "Television is a medium of the socialization of most people into standardized roles and behaviors. STAR is the flagship line for TAMA drums. Resonance occurs widely in nature, and is exploited in many man made devices. 38 and 97; and the string quartets, op. We have begun a new cycle back to the 5th dimension-the frequency of Unconditional Love and Light…. CURRENT SPINAL RESEARCH: The Cutting Edge Treatment for the Spine. According to a real time graph from Heart Math there has been a spike in the earth's geomagnetic field, the data only goes back 1 year, but I managed to find data going back to April 2016, meaning it could actually go back further than 15 months. The Schumann Resonance is like the Earth's heartbeat. However, since 1980 this. 1007/s00247-017-4047-y. The Schumann's resonance does not fluctuate much, but has 8 different frequencies ranging from 0. LILETTA can be replaced at the time of removal with a new LILETTA if continued contraceptive protection is desired. This is a big deal and yet there is no publicly provided information on how this spike is impacting the earth’s geomagnetic field and how this radically effects the human body. This time-matching seems to support the suggestion of a significant influence of the day-night ionosphere asymmetry on Schumann resonance amplitudes. over time will. The time domain signals shown in Figure 1 are converted to the frequency domain with the Fourier transform. Compulsivity is a cross. ) The power lost in the resistor is the RMS voltage squared divided by the resistance ("v-squared over R"). Power is the sum of the power in all frequencies detected by the site magnetometer from 0. This protection, caused by the invention's ability to enhance grounding or resonance to the Schumann Resonance, can be magnified due to the scalar enhancement which results if one uses two or more of the AC units at one time, Thus, a user of the invention can protect himself or herself with the pendant version of the invention, and/or all. Speech-Language Pathology Medical Review Guidelines 5 communicate and/or swallow effectively is reduced or impaired or when there is reason to believe (e. Tracy, MD Abstract Chronic inflammatory demyelinating polyradiculoneuropathy (CIDP) is probably the best recognized progressive immune-mediated peripheral neuropathy. We have the fundamental frequency (7. Each row of images is from a different site location, new sites will be added as they become operational. Over many years of monitoring the earth's magnetic field with the Global Coherence Monitoring System, which uses multiple recording stations strategically located around the earth, we have not observed any evidence for the claim that the frequencies of the Schumann resonances are changing beyond the normal diurnal variation. 83 Hz to somewhere in the 15-25 Hz levels—so a jump from 7. Sixth Edition, last update July 25, 2007.
Oscillations

# How many minimum observations of the displacement (at various time instants) are required to be made to quantify a simple harmonic motion (to form the exact equation describing the displacement as a function of time): $\begin{array}{ll}(a)\;1 & (b)\;3 \\ (c)\;2 & (d)\;6\end{array}$

There are generally 3 unknowns in the equation describing SHM: the amplitude, the frequency, and the phase constant. Thus, if three observations of the displacement are made at three time instants, three simultaneous equations can be set up and solved to describe the motion completely. Note: sometimes the initial displacement and the initial velocity can both be zero, which will result in 4 unknowns.
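As an illustration (my addition, not part of the original question or answer), here is a minimal Python sketch that sets up the three simultaneous equations $x_k = A\sin(\omega t_k + \phi)$ for three hypothetical observations and solves them numerically:

```python
# Hypothetical data and starting guess; solves the 3 equations for (A, omega, phi).
import numpy as np
from scipy.optimize import fsolve

A_true, w_true, phi_true = 2.0, 3.0, 0.5           # hypothetical "true" motion
times = np.array([0.1, 0.4, 0.9])                  # three time instants
obs = A_true * np.sin(w_true * times + phi_true)   # three observed displacements

def residuals(params):
    A, w, phi = params
    return A * np.sin(w * times + phi) - obs       # one equation per observation

A, w, phi = fsolve(residuals, x0=[1.0, 2.0, 0.0])
# With this starting guess the solver should land near (2.0, 3.0, 0.5); note the
# fit is not unique in general (e.g. A -> -A together with a shifted phase).
print(A, w, phi)
```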
Astrochemical ices are known to undergo morphological changes, from amorphous to crystalline, upon warming the ice from lower (10 K) to higher temperatures. Phase changes are mostly identified by the observation of significant changes in the InfraRed (IR) spectrum, where the IR bands that are broad in the amorphous phase are narrower and split when the ice turns crystalline. To-date all the molecules that are studied under astrochemical conditions are observed to follow such a behaviour without significant attenuation in the IR wavelength. However, in this paper we report a new observation when propargyl ether ($$C_3H_3OC_3H_3$$) is warmed from the amorphous phase, at 10 K, through the phase transition temperature of 170 K, the crystalline ice being found to strongly attenuate IR photons at the mid-IR wavelengths.
# Quiz 4: Linearization

## 7 thoughts on “Quiz 4: Linearization”

1. In response to the following question: “I have a question about part 4 of quiz 4, namely what exactly do you mean by controller gain in this case? At first I thought maybe it was the coefficient on the NTC resistor, but I doubt that’s the case.”: Agreed, $\kappa$ would not change. Assume you have a P-controller with gain $k_p$. If the sensor gain increases, we need to reduce $k_p$ by the same amount to keep $L(s)$ the same. This can be seen more generally. For example, if we consider a PI-controller $H(s)=k_p + k_I/s$, this can be rewritten to feature a general controller gain $k$ as $$H(s)=k \cdot \left( 1 + \frac{1}{\tau_I s} \right)$$ where $k$ is now the overall controller gain and $\tau_I$ the integrator time constant. In this alternative form, the controller zero at $z=-1/\tau_I$ becomes independent of the gain. Here, $k$ would be reduced by the same amount that $k_s$ increases.

2. Hi Dr. Haidekker, for part 2 of Quiz 4, do we consider the approximated equation in Equation 2 for calculating $k_s$?

   • Yes, always the approximation. The first derivative of $$\frac{1}{a+e^{-kt}}$$ is doable, but for the purposes of this homework unnecessary.

3. Can anyone help with part 2? I am confused on where to even start. Is voltage the input and temperature the output?

   • We are looking at the sensor only, with only a vague idea about the control loop itself (see above). Part 2 follows the example in the book, Section 8.1, notably Eqns 8.1 and 8.2 — however, you need to take Eq. 2 from the quiz instead of Eq. 8.1. One thing that helps me a lot is to plot it. Plot the $V_O$ over $T$ curve. Sketch the operating point. Sketch the tangent and draw it out to where it intercepts the $V_O$-axis. Then see if the equation you get matches this line.

4. Dr. Haidekker, for part 4 of the quiz, do we need to include the summation point from the intercept in our loop when finding the ratio or factor for the controller gain?

   • No. The intercept is an additive constant (i.e., a signal). Conversely, $k_s$ is a system coefficient. Only $k_s$ is of interest in this question.
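For anyone who wants to see the linearisation step numerically, here is a small sketch (my addition, not part of the quiz). Since Eq. 2 of the quiz is not reproduced here, it uses the exact sensor curve $V_O(T)=1/(a+e^{-kT})$ mentioned in the thread, with made-up constants and operating point, to compute the sensor gain $k_s$ and the tangent-line intercept:

```python
# Hypothetical constants; linearise V_O(T) around T0 to get k_s and the intercept.
import sympy as sp

T = sp.symbols('T')
a, k = 0.5, 0.03                          # hypothetical sensor constants
V_O = 1 / (a + sp.exp(-k * T))            # exact curve mentioned in the thread

T0 = 40                                   # hypothetical operating point
k_s = sp.diff(V_O, T).subs(T, T0)         # sensor gain at the operating point
intercept = V_O.subs(T, T0) - k_s * T0    # where the tangent meets the V_O-axis

print(float(k_s), float(intercept))       # tangent line: V_O ~ k_s*T + intercept
```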
# Deeparnab Chakrabarty According to our database1, Deeparnab Chakrabarty authored at least 116 papers between 2005 and 2019. Collaborative distances: • Dijkstra number2 of three. • Erdős number3 of two. Book In proceedings Article PhD thesis Other ## Bibliography 2019 Generalized Center Problems with Outliers. ACM Trans. Algorithms, 2019 Simpler and Better Algorithms for Minimum-Norm Load Balancing. CoRR, 2019 Fair Algorithms for Clustering. CoRR, 2019 Approximation algorithms for minimum norm and ordered optimization problems. Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, 2019 Adaptive Boolean Monotonicity Testing in Total Influence Time. Proceedings of the 10th Innovations in Theoretical Computer Science Conference, 2019 Simpler and Better Algorithms for Minimum-Norm Load Balancing. Proceedings of the 27th Annual European Symposium on Algorithms, 2019 2018 SIAM J. Comput., 2018 Adaptive Boolean Monotonicity Testing in Total Influence Time. Electronic Colloquium on Computational Complexity (ECCC), 2018 Domain Reduction for Monotonicity Testing: A $o(d)$ Tester for Boolean Functions on Hypergrids. Electronic Colloquium on Computational Complexity (ECCC), 2018 Approximation Algorithms for Minimum Norm and Ordered Optimization Problems. CoRR, 2018 Domain Reduction for Monotonicity Testing: A o(d) Tester for Boolean Functions on Hypergrids. CoRR, 2018 Generalized Center Problems with Outliers. CoRR, 2018 Adaptive Boolean Monotonicity Testing in Total Influence Time. CoRR, 2018 Better and Simpler Error Analysis of the Sinkhorn-Knopp Algorithm for Matrix Scaling. CoRR, 2018 Better and Simpler Error Analysis of the Sinkhorn-Knopp Algorithm for Matrix Scaling. Proceedings of the 1st Symposium on Simplicity in Algorithms, 2018 A o(d) · polylog n Monotonicity Tester for Boolean Functions over the Hypergrid [n]d. Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, 2018 Dynamic Algorithms for Graph Coloring. Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, 2018 Interpolating between k-Median and k-Center: Approximation Algorithms for Ordered k-Median. Proceedings of the 45th International Colloquium on Automata, Languages, and Programming, 2018 Generalized Center Problems with Outliers. Proceedings of the 45th International Colloquium on Automata, Languages, and Programming, 2018 2017 Property Testing on Product Distributions: Optimal Testers for Bounded Derivative Properties. ACM Trans. Algorithms, 2017 A o(d) · polylog n Monotonicity Tester for Boolean Functions over the Hypergrid [n]d. Electronic Colloquium on Computational Complexity (ECCC), 2017 A Lower Bound for Nonadaptive, One-Sided Error Testing of Unateness of Boolean Functions over the Hypercube. Electronic Colloquium on Computational Complexity (ECCC), 2017 Optimal Unateness Testers for Real-Valued Functions: Adaptivity Helps. Electronic Colloquium on Computational Complexity (ECCC), 2017 Interpolating between k-Median and k-Center: Approximation Algorithms for Ordered k-Median. CoRR, 2017 Dynamic Algorithms for Graph Coloring. CoRR, 2017 A $o(d) \cdot \text{polylog}~n$ Monotonicity Tester for Boolean Functions over the Hypergrid [n]d. CoRR, 2017 A Lower Bound for Nonadaptive, One-Sided Error Testing of Unateness of Boolean Functions over the Hypercube. CoRR, 2017 Optimal Unateness Testers for Real-Valued Functions: Adaptivity Helps. 
CoRR, 2017 Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, 2017 The Heterogeneous Capacitated k-Center Problem. Proceedings of the Integer Programming and Combinatorial Optimization, 2017 Deterministic Fully Dynamic Approximate Vertex Cover and Fractional Matching in O(1) Amortized Update Time. Proceedings of the Integer Programming and Combinatorial Optimization, 2017 Optimal Unateness Testers for Real-Valued Functions: Adaptivity Helps. Proceedings of the 44th International Colloquium on Automata, Languages, and Programming, 2017 2016 Monotonicity Testing. Encyclopedia of Algorithms, 2016 Max-Min Allocation. Encyclopedia of Algorithms, 2016 Special Issue: APPROX-RANDOM 2014: Guest Editors' Foreword. Theory of Computing, 2016 An o(n) Monotonicity Tester for Boolean Functions over the Hypercube. SIAM J. Comput., 2016 Facility Location with Client Latencies: LP-Based Techniques for Minimum-Latency Problems. Math. Oper. Res., 2016 Detecting Character Dependencies in Stochastic Models of Evolution. Journal of Computational Biology, 2016 A Õ(n) Non-Adaptive Tester for Unateness. Electronic Colloquium on Computational Complexity (ECCC), 2016 A $\widetilde{O}(n)$ Non-Adaptive Tester for Unateness. CoRR, 2016 Graph Balancing with Two Edge Types. CoRR, 2016 CoRR, 2016 The Heterogeneous Capacitated $k$-Center Problem. CoRR, 2016 The Non-Uniform k-Center Problem. CoRR, 2016 Deterministic Fully Dynamic Approximate Vertex Cover and Fractional Matching in $O(1)$ Amortized Update Time. CoRR, 2016 IQ-Hopping: distributed oblivious channel selection for wireless networks. Proceedings of the 17th ACM International Symposium on Mobile Ad Hoc Networking and Computing, 2016 The Non-Uniform k-Center Problem. Proceedings of the 43rd International Colloquium on Automata, Languages, and Programming, 2016 2015 Recognizing Coverage Functions. SIAM J. Discrete Math., 2015 CoRR, 2015 Approximability of Capacitated Network Design. Algorithmica, 2015 On (1, )-Restricted Assignment Makespan Minimization. Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms, 2015 Property Testing on Product Distributions: Optimal Testers for Bounded Derivative Properties. Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms, 2015 Proceedings of the IEEE 56th Annual Symposium on Foundations of Computer Science, 2015 2014 An Optimal Lower Bound for Monotonicity Testing over Hypergrids. Theory of Computing, 2014 Submodularity Helps in Nash and Nonsymmetric Bargaining Games. SIAM J. Discrete Math., 2014 Property Testing on Product Distributions: Optimal Testers for Bounded Derivative Properties. Electronic Colloquium on Computational Complexity (ECCC), 2014 On $(1, ε)$-Restricted Assignment Makespan Minimization. CoRR, 2014 Property Testing on Product Distributions: Optimal Testers for Bounded Derivative Properties. CoRR, 2014 Provable Submodular Minimization using Wolfe's Algorithm. CoRR, 2014 Provable Submodular Minimization using Wolfe's Algorithm. Proceedings of the Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, 2014 Welfare maximization and truthfulness in mechanism design with ordinal preferences. Proceedings of the Innovations in Theoretical Computer Science, 2014 2013 Hypergraphic LP Relaxations for Steiner Trees. SIAM J. Discrete Math., 2013 An optimal lower bound for monotonicity testing over hypergrids. 
Electronic Colloquium on Computational Complexity (ECCC), 2013 A o(n) monotonicity tester for Boolean functions over the hypercube. Electronic Colloquium on Computational Complexity (ECCC), 2013 An optimal lower bound for monotonicity testing over hypergrids CoRR, 2013 A o(n) monotonicity tester for Boolean functions over the hypercube CoRR, 2013 Welfare Maximization and Truthfulness in Mechanism Design with Ordinal Preferences. CoRR, 2013 Optimal bounds for monotonicity and lipschitz testing over hypercubes and hypergrids. Proceedings of the Symposium on Theory of Computing Conference, 2013 A o(n) monotonicity tester for boolean functions over the hypercube. Proceedings of the Symposium on Theory of Computing Conference, 2013 Budget smoothing for internet ad auctions: a game theoretic approach. Proceedings of the fourteenth ACM Conference on Electronic Commerce, 2013 An Optimal Lower Bound for Monotonicity Testing over Hypergrids. Proceedings of the Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, 2013 Capacitated Network Design on Undirected Graphs. Proceedings of the Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, 2013 2012 Review of design of approximation algorithms, by David P. Williamson and David B. Shmoys. SIGACT News, 2012 Optimal bounds for monotonicity and Lipschitz testing over the hypercube. Electronic Colloquium on Computational Complexity (ECCC), 2012 Testing Coverage Functions CoRR, 2012 Optimal bounds for monotonicity and Lipschitz testing over the hypercube CoRR, 2012 Approximability of the Firefighter Problem - Computing Cuts over Time. Algorithmica, 2012 Testing Coverage Functions. Proceedings of the Automata, Languages, and Programming - 39th International Colloquium, 2012 2011 New geometry-inspired relaxations and algorithms for the metric Steiner tree problem. Math. Program., 2011 Variance on the Leaves of a Tree Markov Random Field: Detecting Character Dependencies in Phylogenies CoRR, 2011 Social Welfare in One-sided Matching Markets without Money CoRR, 2011 Approximability of Sparse Integer Programs. Algorithmica, 2011 Facility Location with Client Latencies: Linear Programming Based Techniques for Minimum Latency Problems. Proceedings of the Integer Programming and Combinatoral Optimization, 2011 Approximability of Capacitated Network Design. Proceedings of the Integer Programming and Combinatoral Optimization, 2011 Social Welfare in One-Sided Matching Markets without Money. Proceedings of the Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, 2011 Optimal Lower Bounds for Universal and Differentially Private Steiner Trees and TSPs. Proceedings of the Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, 2011 2010 Design is as Easy as Optimization. SIAM J. Discrete Math., 2010 Rationality and Strongly Polynomial Solvability of Eisenberg--Gale Markets with Two Agents. SIAM J. Discrete Math., 2010 On the Approximability of Budgeted Allocations and Improved Lower Bounds for Submodular Welfare Maximization and GAP. SIAM J. Comput., 2010 Integrality gap of the hypergraphic relaxation of Steiner trees: A short proof of a 1.55 upper bound. Oper. Res. Lett., 2010 G-parking functions, acyclic orientations and spanning trees. 
Discrete Mathematics, 2010 Optimal Lower Bounds for Universal and Differentially Private Steiner Tree and TSP CoRR, 2010 Approximability of Capacitated Network Design CoRR, 2010 Facility Location with Client Latencies: Linear-Programming based Techniques for Minimum-Latency Problems CoRR, 2010 Integrality Gap of the Hypergraphic Relaxation of Steiner Trees: a short proof of a 1.55 upper bound CoRR, 2010 On Column-restricted and Priority Covering Integer Programs CoRR, 2010 Hypergraphic LP Relaxations for Steiner Trees. Proceedings of the Integer Programming and Combinatorial Optimization, 2010 On Column-Restricted and Priority Covering Integer Programs. Proceedings of the Integer Programming and Combinatorial Optimization, 2010 2009 On competitiveness in uniform utility allocation markets. Oper. Res. Lett., 2009 The Effect of Malice on the Social Optimum in Linear Load Balancing Games CoRR, 2009 Hypergraphic LP Relaxations for Steiner Trees CoRR, 2009 On Allocating Goods to Maximize Fairness CoRR, 2009 Approximation Algorithms for the Firefighter Problem: Cuts over Time and Submodularity. Proceedings of the Algorithms and Computation, 20th International Symposium, 2009 Algorithms for Message Ferrying on Mobile ad hoc Networks. Proceedings of the IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science, 2009 On Allocating Goods to Maximize Fairness. Proceedings of the 50th Annual IEEE Symposium on Foundations of Computer Science, 2009 2008 Algorithmic aspects of connectivity, allocation and design problems. PhD thesis, 2008 Budget Constrained Bidding in Keyword Auctions and Online Knapsack Problems. Proceedings of the Internet and Network Economics, 4th International Workshop, 2008 Efficiency, Fairness and Competitiveness in Nash Bargaining Games. Proceedings of the Internet and Network Economics, 4th International Workshop, 2008 New Geometry-Inspired Relaxations and Algorithms for the Metric Steiner Tree Problem. Proceedings of the Integer Programming and Combinatorial Optimization, 2008 On the Approximability of Budgeted Allocations and Improved Lower Bounds for Submodular Welfare Maximization and GAP. Proceedings of the 49th Annual IEEE Symposium on Foundations of Computer Science, 2008 2007 Proceedings of the Internet and Network Economics, Third International Workshop, 2007 On Competitiveness in Uniform Utility Allocation Markets. Proceedings of the Internet and Network Economics, Third International Workshop, 2007 2006 Eisenberg-Gale Markets: Rationality, Strongly Polynomial Solvability, and Competition Monotonicity. Electronic Colloquium on Computational Complexity (ECCC), 2006 New Results on Rationality and Strongly Polynomial Time Solvability in Eisenberg-Gale Markets. Proceedings of the Internet and Network Economics, Second International Workshop, 2006 Design Is as Easy as Optimization. Proceedings of the Automata, Languages and Programming, 33rd International Colloquium, 2006 2005 Fairness and optimality in congestion games. Proceedings of the Proceedings 6th ACM Conference on Electronic Commerce (EC-2005), 2005
# Difference between revisions of "Isoperimetric Inequalities"

### Isoperimetric Inequality

If a figure in the plane has area $A$ and perimeter $P$ then $\frac{4\pi A}{P^2} \le 1$, with equality if and only if the figure is a circle. This means that given a perimeter $P$ for a plane figure, the circle has the largest area. Conversely, of all plane figures with area $A$, the circle has the least perimeter.
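A quick numerical check (my addition, not part of the original article) of the ratio $4\pi A/P^2$ for a few familiar shapes:

```python
# The isoperimetric ratio equals 1 for the circle and is smaller for other shapes.
import math

shapes = {
    "circle (r=1)":                  (math.pi * 1**2, 2 * math.pi * 1),
    "square (side=1)":               (1.0, 4.0),
    "equilateral triangle (side=1)": (math.sqrt(3) / 4, 3.0),
}

for name, (A, P) in shapes.items():
    print(f"{name:32s} 4*pi*A/P^2 = {4 * math.pi * A / P**2:.4f}")
# circle -> 1.0000, square -> 0.7854, triangle -> 0.6046
```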
Thread: "Rare" Primes View Single Post 2008-08-29, 14:24   #55 R.D. Silverman Nov 2003 22·5·373 Posts Quote: Originally Posted by retina f(x)=29-x^2 ? Although not explicitly stated, I believe that the domain is N. Now, f(x) is prime for x = 0, 4 and no other. If you accept the more general definition of prime (i.e. not restricted to just N) then f(x) will be prime i.o. (although a proof is lacking). If we allow x \in R, then f(x) is indeed prime the required number of times.
# How to modify DNA evolution model to fit actual data? Bioinformatics Asked by Anthony Guterres on December 8, 2020 I’m working on understanding the evolution of a gene in a phylogenetic tree. I know the rates of evolution of each organism (from the tree). I am taking a random DNA sequence with my gene, evolving it along the tree with the said rates and comparing this to the actual gene in these various organisms. I am ‘evolving’ the gene if randomly at a probability determined by the rate at each step in the tree. The gene from my simulation is different from what is actually in these organisms. How do I modify my model, so that my simulation gives the same results? Mutation is not a random process because there are many mutational biases, e.g. the transition vs. transversion ratio. Ideally use a proper evolutionary model. Maybe try a tool that incorporates substitution and indel models. I'm not up-to-date on the best packages, but e.g. Answered by Chris_Rands on December 8, 2020 I decided that this was too involved for a comment, so I'm making it an answer. It is still quite difficult to answer the question given the absence of methods explanation; but I think that there's some misunderstanding of phylogenetic modeling going on here, and I can at least address that. I'm going to try to break it down into its components, starting with the basic statistical idea of simulation and then going back to phylogenies. Statistical reasoning Let's say that I flip 10 coins (2 outcomes: head (H) and tails (T)), and I observe the sequence HTHTTTHHTT. I then fit a maximum likelihood model, and find that the best model for the data has $$Pr(H) = 0.4$$, $$Pr(T) = 0.6$$, and that these two outcomes are mutually exhaustive: $$Pr(H) + Pr(T) = 1.0$$ I can then use a binomial model to re-simulate data based on these parameters. I will observe various sequences, for example: HTTTTTTHHT TTTTHHTHHH HTHHTHHTHH... However, based purely on the parameters of the model, I don't expect every new simulation to recapitulate the exact same HTHTTTHHTT sequence, because I have reduced its information to some rate parameters for the simulation. Each of those new sequences will yield model parameters ($$Pr(H), Pr(T)$$) that are similar to the training data, but they will not necessarily be particularly similar to the training data in sequence. Phylogenetic modeling Phylogenetic modeling works in the same way, such that you reduce the observations at the tips of the tree into some model with some parameters. Most of the time you are estimating e.g. a rate matrix of parameters for transition between states for a set of residues (in simplest case for DNA 4 nucleotide states and for proteins, 20 amino acid states usually). If you are working with gene sequences you are probably using the 4 nucleotide states or some derivation thereof. Simulations usually work by starting with some fixed ancestral state at the root (which can take various states), and then evolving forward from there along each branch of the tree using the rate matrix in a continuous-time Markov chain to "replay the tape of life", by assuming that the model you've fit somewhat reflects truth. While opinions differ somewhat on what to expect from "replaying the tape of life" (see that linked review for more), basically no one expects to get the same results as the input data, or even 90% similar to the input data. That would indicate that evolution is highly deterministic, which just isn't reliably true. 
Of course there are situations where such determinism happens to be the case, but they are relatively rare, and I don't think that sequence evolution is considered to be very deterministic at all. It's a little different in that the tree defines a correlation structure of the tips, which constrains the simulation results to be somewhat more similar to the training data than we expect from coin-flipping. But there's still no reason to expect particularly similar results on the tips.

But you can also model discrete characters on a tree, which can be even simpler. For example, I used the R APE package to simulate the following character states (A or B for each tip) on a simulated tree of 10 tips with the following commands:

tr = rtree(10)
c_tr = rTraitDisc(tr)

I can then build a model for this character and this tree using the following:

rates = ace(c_tr, tr, type="discrete")

and use it to simulate new sets of states:

# A (index 1) is estimated to be ancestral state with 99% confidence
new_sim = rTraitDisc(tr, rate=rates$rates, states=c("A", "B"), root.value=1)
plot(tr)
tiplabels(new_sim, adj = c(1,0))

Look! Not a single observation of state B! Let's try it again:

new_sim2 = rTraitDisc(tr, rate=rates$rates, states=c("A", "B"), root.value=1)
plot(tr)

Ok, this is more similar, but there's still another state B on the tips.

Conclusion

Evolution is not very deterministic. The correlation structure of the tree makes the outcomes less random than independent trials as in coin-flipping, but it is not at all expected that simulation results from phylogenetic models will match the training observations well. The best that can be said is that each simulated dataset can hopefully be reduced to the same model parameters as the input data, but that's a very different statement from yielding the exact same outcomes. If what you are looking for from simulation is to yield the exact same sequence states at the tips, then you are unlikely to get that from phylogenetic simulation. You might occasionally get pretty close, but that is not at all the expectation.

Answered by Maximilian Press on December 8, 2020
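As a supplement to the first answer's point about mutational biases such as the transition/transversion ratio, here is a minimal sketch (not from either answer) of evolving a sequence along a single branch under a Kimura two-parameter substitution model; the rate, kappa, and branch length are made-up values:

```python
# K80 model: transitions (A<->G, C<->T) are biased over transversions.
import numpy as np
from scipy.linalg import expm

bases = "ACGT"
kappa, mu, t = 2.0, 1.0, 0.1          # ts/tv bias, overall rate, branch length
rng = np.random.default_rng(0)

# Instantaneous rate matrix Q (rows sum to zero); transitions get rate mu*kappa.
Q = np.full((4, 4), mu)
for i, x in enumerate(bases):
    for j, y in enumerate(bases):
        if {x, y} in ({"A", "G"}, {"C", "T"}):
            Q[i, j] *= kappa
np.fill_diagonal(Q, 0.0)
np.fill_diagonal(Q, -Q.sum(axis=1))

P = expm(Q * t)                        # substitution probabilities over the branch

ancestor = rng.integers(0, 4, size=30)
descendant = np.array([rng.choice(4, p=P[b]) for b in ancestor])
print("".join(bases[b] for b in ancestor))
print("".join(bases[b] for b in descendant))
```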
× # A huge limit. Let $$n\geq1$$ be an integer, calculate the following limit $\lim_{x\to0}\frac{\ln\sqrt[n]{\frac{\sin2^nx}{\sin x}}-\ln2}{\ln\sqrt[n]{(e+\sin^2x)(e+\sin^22x)...(e+\sin^2nx)}-1}$ Source : I've encountered this limit a couple years ago, I think that it is from some journal but I'm not sure Note by Haroun Meghaichi 3 years ago Sort by: After a little manipulation the given limit can be written as: $$\lim\limits_{x\to 0} \displaystyle\frac{\ln (\cos x\times\cos 2x\times\cos 4x\times...\times\cos 2^{n-1}x)}{\ln \left( 1+\dfrac{\sin^{2} x}{e}\right)\left( 1+\dfrac{\sin^{2} 2x}{e}\right)\left( 1+\dfrac{\sin^{2} 3x}{e}\right)...\left( 1+\dfrac{\sin^{2} nx}{e}\right)}$$ $$=\lim\limits_{x\to 0} \displaystyle\frac{\dfrac{\ln (1+\cos x-1)}{\cos x-1}\times\dfrac{\cos x-1}{x^{2}}+ \dfrac{\ln (1+\cos 2x-1)}{\cos 2x-1}\times\dfrac{\cos 2x-1}{(2x)^{2}}\times 2^{2}+...+\dfrac{\ln (1+\cos 2^{n-1}x-1)}{\cos 2^{n-1}x-1}\times\dfrac{\cos 2^{n-1}x-1}{(2^{n-1}x)^{2}}\times 2^{2n-2}}{\dfrac{\ln \left( 1+\dfrac{\sin^{2} x}{e}\right)}{\dfrac{\sin^{2} x}{e}}\times\dfrac{\dfrac{\sin^{2} x}{e}}{x^{2}}+\dfrac{\ln \left( 1+\dfrac{\sin^{2} 2x}{e}\right)}{\dfrac{\sin^{2} 2x}{e}}\times\dfrac{\dfrac{\sin^{2} 2x}{e}}{(2x)^{2}}\times 2^{2}+...+\dfrac{\ln \left( 1+\dfrac{\sin^{2} nx}{e}\right)}{\dfrac{\sin^{2} nx}{e}}\times\dfrac{\dfrac{\sin^{2} nx}{e}}{(nx)^{2}}\times n^{2}}$$ $$=\displaystyle\frac{\dfrac{-1}{2}(2^{0}+2^{2}+2^{4}+...2^{2n-2})}{\dfrac{1}{e}(1^{2}+2^{2}+3^{2}+...+n^{2})}$$ $$=\dfrac{(1-2^{2n})e}{n(n+1)(2n+1)}$$ · 3 years ago As the lowest power of $$x$$ in expansion of denominator is $$2$$, we can put $$\ln(1 + \cos 2^rx - 1) \approx \cos 2^rx - 1 \approx - \dfrac{1}{2} 2^r x^2$$. Hence, we avoid lengthy seeming expression :) · 3 years ago Using series expansion, I get the answer as $$\dfrac{(1 - 2 ^{2n}) e}{n(n+1)(2n+1)}$$ . Is it correct? · 3 years ago I believe that you should get $$e$$ instead of $$e^2$$. · 3 years ago Yes, I meant $$e$$, edited. · 3 years ago
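A quick numerical sanity check of the result (my addition, not part of the thread), evaluating the ratio at a small $x$ for $n=2$ and comparing with $\frac{(1-2^{2n})e}{n(n+1)(2n+1)} = -e/2$:

```python
# Evaluate the original expression near x = 0 for n = 2 and compare.
import math

n, x = 2, 1e-4
num = math.log((math.sin(2**n * x) / math.sin(x)) ** (1 / n)) - math.log(2)
den = math.log(math.prod(math.e + math.sin(k * x) ** 2
                         for k in range(1, n + 1)) ** (1 / n)) - 1
print(num / den)                                                  # ~ -1.359
print((1 - 2 ** (2 * n)) * math.e / (n * (n + 1) * (2 * n + 1)))  # -e/2
```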
# A More Symmetric Exponentiation

Exponentiation is distributive over multiplication, but it isn't commutative or associative like addition and multiplication are. Is there a binary operation that is distributive over multiplication, and also commutative and/or associative? In order to find one such operation, I assumed that there is an identity element. An easier question than the one above is: Is the identity element 0, 1, or neither? If you find an operation that works, can you then find a commutative and/or associative operation that is distributive over that?

Note by Halvor Bratland 2 months, 1 week ago

Sort by:

Well, here's one: $a*b = \begin{cases} 1&\text{if } ab \ne 0 \\ 0&\text{if } ab = 0 \end{cases}.$ Is that the kind of thing you had in mind? - 3 weeks, 2 days ago

Yeah, that definitely works. And it’s pretty easy to find a similar operation that’s distributive over that one. The one I had in mind was a^(log_sqrt(2)(b)), which gives a wider range of outputs, but rarely integer ones. Yours also works better with negative numbers. - 3 weeks, 2 days ago

Ah ok. If I change your sqrt(2) to an e, I get something symmetric-looking like $$e^{\ln(a)\ln(b)},$$ which is pretty nice. But it doesn't work on negative numbers. I guess maybe if you put absolute values on the $$a$$ and $$b$$? Does that work? And you can probably even fill it in at 0 by setting $$0 * b = 0.$$ I haven't checked all the details. - 3 weeks, 2 days ago

I chose the square root of two because it continues the pattern of 2+2 = (2)(2) = 4, and the tangents at that point increasing by a factor of 2. - 3 weeks, 2 days ago
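As a quick numerical check (my addition, not part of the thread), the operation suggested in the last comments, $a*b = e^{\ln(a)\ln(b)}$, is commutative, associative, distributive over multiplication, and has identity $e$ for positive inputs:

```python
# Verify the algebraic properties of star(a, b) = exp(ln(a) * ln(b)) numerically.
import math

def star(a, b):
    return math.exp(math.log(a) * math.log(b))

a, b, c = 2.0, 5.0, 7.0
print(math.isclose(star(a, b), star(b, a)))                    # commutative
print(math.isclose(star(star(a, b), c), star(a, star(b, c))))  # associative
print(math.isclose(star(a, b * c), star(a, b) * star(a, c)))   # distributes over *
print(math.isclose(star(a, math.e), a))                        # identity is e
```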
796. Rotate String

Description

Given two strings s and goal, return true if and only if s can become goal after some number of shifts on s.

A shift on s consists of moving the leftmost character of s to the rightmost position.

• For example, if s = "abcde", then it will be "bcdea" after one shift.

Example 1:

Input: s = "abcde", goal = "cdeab"
Output: true

Example 2:

Input: s = "abcde", goal = "abced"
Output: false

Constraints:

• 1 <= s.length, goal.length <= 100
• s and goal consist of lowercase English letters.
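One common approach (a sketch of mine, not part of the problem statement; the Solution-class signature follows the usual LeetCode style): goal is a rotation of s exactly when the two strings have the same length and goal appears as a substring of s + s.

```python
# Rotation check via doubled string: every rotation of s is a substring of s + s.
class Solution:
    def rotateString(self, s: str, goal: str) -> bool:
        return len(s) == len(goal) and goal in s + s

print(Solution().rotateString("abcde", "cdeab"))  # True
print(Solution().rotateString("abcde", "abced"))  # False
```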
# BiRank

#### Definition

This method targets the problem of ranking the vertices of a bipartite graph, based on the graph’s link structure as well as prior information about the vertices (termed the query vector). $BiRank$ iteratively assigns scores to vertices and finally converges to a unique stationary ranking. In contrast to the traditional random walk-based methods, $BiRank$ iterates towards optimizing a regularization function, which smooths the graph under the guidance of the query vector. A bipartite graph $G = (U \cup P,E)$ is given with its weight matrix $W$. Query vectors $u^0$ and $p^0$ encode the prior belief concerning the vertices in $U$ and $P$, respectively, with respect to the ranking criterion. To rank vertices based on the graph structure, seminal algorithms like $PageRank$ and $HITS$ have been proposed. Motivated by their design, the intuition for bipartite graph ranking is that the scores of vertices should follow a smoothness convention, namely that a vertex (from one side) should be ranked high if it is connected to higher-ranked vertices (from the other side). This rule defines a mutually-reinforcing relationship, which is naturally implemented as an iterative process that refines each vertex’s score as the sum of the contributions from its connected vertices: $$p_j=\sum_{i=1}^{|U|} w_{ij}\, u_i; \qquad u_i=\sum_{j=1}^{|P|} w_{ij}\, p_j$$ As it is an additive update rule, normalization is necessary to ensure convergence and stability. This method adopts the symmetric normalization scheme, which is inspired by semi-supervised learning on graphs. The idea is to smooth an edge weight by the degrees of its two connected vertices simultaneously: $$p_j=\sum_{i=1}^{|U|} \frac{w_{ij}}{\sqrt{d_i}\sqrt{d_j}}\, u_i; \qquad u_i=\sum_{j=1}^{|P|} \frac{w_{ij}}{\sqrt{d_i}\sqrt{d_j}}\, p_j$$ where $d_i$ and $d_j$ are the weighted degrees of vertices $u_i$ and $p_j$, respectively. The use of symmetric normalization is a key characteristic of $BiRank$, allowing edges connected to a high-degree vertex to be suppressed through normalization, lessening the contribution of high-degree vertices. This has the beneficial effect of toning down the dependence of top rankings on high-degree vertices, a known defect of the random walk-based diffusion methods. This gives rise to better quality results. To account for the query vectors $p^0$ and $u^0$ that encode the prior belief on the importance of the vertices, one can either opt for 1) incorporating the graph ranking results for combination in post-processing (a.k.a. late fusion), or 2) factoring the query vector directly into the ranking process. The first way, post-processing, yields a ranking that is a compromise between two rankings; for scenarios where the query vector defines a full ranking of vertices, this ensemble approach might be suitable. However, when the query vector only provides partial information, this method fails to identify an optimal ranking.
$BiRank$ opts for the second way, factoring the query vector directly into the ranking process, which has the advantage of using the query vector to guide the ranking: $$p_j= \alpha \sum_{i=1}^{|U|} \frac{w_{ij}}{\sqrt{d_i}\sqrt{d_j}}\, u_i +(1- \alpha)\, p_j^0$$ $$u_i= \beta \sum_{j=1}^{|P|} \frac{w_{ij}}{\sqrt{d_i}\sqrt{d_j}}\, p_j +(1- \beta)\, u_i^0$$ where $\alpha$ and $\beta$ are hyper-parameters to weight the importance of the graph structure and the prior query vector, to be set in $[0, 1]$. To keep notation simple, we can also express the iteration in its equivalent matrix form: $$p = \alpha S^T u + (1-\alpha) p^0;$$ $$u = \beta S p + (1-\beta) u^0;$$ where $S = D_u^{-1/2} W D_p^{-1/2}$ is the symmetric normalization of the weight matrix $W$. This set of update rules is called the $BiRank$ iteration, which forms the core of the iterative $BiRank$ algorithm. $BiRank$ ranks vertices by accounting for both the graph structure and prior knowledge. $BiRank$ is theoretically guaranteed to converge to a stationary solution, and can be explained by both a regularization view and a Bayesian view.

#### References

• He, X., Gao, M., Kan, M.Y. and Wang, D., 2016. BiRank: Towards ranking on bipartite graphs. IEEE Transactions on Knowledge and Data Engineering, 29(1), pp.57-71. DOI: 10.1109/TKDE.2016.2611584
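For concreteness, here is a minimal NumPy sketch (my addition, not from the paper) of the BiRank iteration above, run on a tiny made-up weight matrix with uniform query vectors:

```python
# Toy BiRank iteration: symmetric normalisation plus query-vector blending.
import numpy as np

W = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0]])            # |U| x |P| edge weights (toy example)
alpha = beta = 0.85

d_u = W.sum(axis=1)                        # weighted degrees of U vertices
d_p = W.sum(axis=0)                        # weighted degrees of P vertices
S = W / np.sqrt(np.outer(d_u, d_p))        # S = D_u^{-1/2} W D_p^{-1/2}

u0 = np.full(W.shape[0], 1 / W.shape[0])   # prior belief on U (uniform here)
p0 = np.full(W.shape[1], 1 / W.shape[1])   # prior belief on P (uniform here)

u, p = u0.copy(), p0.copy()
for _ in range(100):                       # iterate towards the stationary ranking
    p = alpha * S.T @ u + (1 - alpha) * p0
    u = beta * S @ p + (1 - beta) * u0

print("u =", u)
print("p =", p)
```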
Besides simple random sampling, what other sampling methods can be used by data scientists and statisticians?

Often a simple random sample is not feasible, or at least not practical, so researchers will do their best to use other sampling methods that are likely to result in a representative sample. A few different sampling methods that may be used successfully are listed here, with a code sketch after the list:

1. $\textbf{stratified sampling:}$ subjects are categorized by similar traits, then sample subjects are randomly selected from each category in numbers that are proportional to their numbers in the population. Example: 4 girls are randomly selected and then 6 boys are randomly selected from a population that is 40\% female. Stratified sampling guarantees a representative sample relative to the categories that are used.

2. $\textbf{cluster sampling:}$ there is already a natural categorization of subjects, usually by location. A sample of $\textbf{categories}$ is selected randomly, and $\textbf{every subject}$ in each of the selected categories is part of the sample. Example: randomly select 10 elementary schools in Georgia, then sample every teacher at each of those schools. Cluster sampling is done for the sake of saving time and/or money.

3. $\textbf{systematic sampling:}$ sample every $n$th subject. Example: sample every $1000$th m\&m to weigh and measure. Systematic sampling is common in manufacturing.

by Gold Status (31,693 points)
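To make the three schemes concrete, here is a small Python sketch (my addition, not part of the original answer) using a made-up population; the 40% female share and the school labels are hypothetical:

```python
# Illustrate stratified, cluster, and systematic sampling on a toy population.
import random

random.seed(0)
population = [{"id": i, "sex": "F" if i % 10 < 4 else "M", "school": i % 20}
              for i in range(1000)]          # 40% female, 20 schools (hypothetical)

# 1. Stratified: sample within each sex in proportion to its share.
girls = [p for p in population if p["sex"] == "F"]
boys  = [p for p in population if p["sex"] == "M"]
stratified = random.sample(girls, 4) + random.sample(boys, 6)

# 2. Cluster: randomly pick 2 schools and take every subject in them.
chosen_schools = random.sample(range(20), 2)
cluster = [p for p in population if p["school"] in chosen_schools]

# 3. Systematic: take every 100th subject.
systematic = population[::100]

print(len(stratified), len(cluster), len(systematic))  # 10, 100, 10
```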
## Dunder methods | Pydon't 🐍

This is an introduction to dunder methods in Python, to help you understand what they are and what they are for. (If you are new here and have no idea what a Pydon't is, you may want to read the Pydon't Manifesto.)

# Introduction

Python is a language that has a rich set of built-in functions and operators that work really well with the built-in types. For example, the operator + works on numbers, as addition, but it also works on strings, lists, and tuples, as concatenation:

>>> 1 + 2.3
3.3
>>> [1, 2, 3] + [4, 5, 6]
[1, 2, 3, 4, 5, 6]

But what is it that defines that + is addition for numbers (integers and floats) and concatenation for lists, tuples, and strings? What if I wanted + to work on other types? Can I do that? The short answer is “yes”, and that happens through dunder methods, the object of study in this Pydon't.

In this Pydon't, you will:

• understand what dunder methods are;
• learn why they are called like that;
• see various useful dunder methods;
• learn about what dunder methods correspond to what built-ins;
• write your own dunder methods for example classes; and
• realise that dunder methods are like any other method you have written before.

# What are dunder methods?

In Python, dunder methods are methods that allow instances of a class to interact with the built-in functions and operators of the language. The word “dunder” comes from “double underscore”, because the names of dunder methods start and end with two underscores, for example __str__ or __add__. Typically, dunder methods are not invoked directly by the programmer, making it look like they are called by magic. That is why dunder methods are also referred to as “magic methods” sometimes.¹

Dunder methods are not called magically, though. They are just called implicitly by the language, at specific times that are well-defined, and that depend on the dunder method in question.

## The dunder method everyone knows

If you have defined classes in Python, you are bound to have crossed paths with a dunder method: __init__. The dunder method __init__ is responsible for initialising your instance of the class, which is why it is in there that you usually set a bunch of attributes related to arguments the class received. For example, if you were creating an instance of a class Square, you would create the attribute for the side length in __init__:

class Square:
    def __init__(self, side_length):
        """__init__ is the dunder method that INITialises the instance.

        To create a square, we need to know the length of its side,
        so that will be passed as an argument later, e.g. with Square(1).
        To make sure the instance knows its own side length,
        we save it with self.side_length = side_length.
        """
        print("Inside init!")
        self.side_length = side_length

sq = Square(1)
# Inside init!

If you run the code above, you will see the message “Inside init!” being printed, and yet, you did not call the method __init__ directly! The dunder method __init__ was called implicitly by the language when you created your instance of a square.

## Why do dunder methods start and end with two underscores?

The two underscores in the beginning and end of the name of a dunder method do not have any special significance. In other words, the fact that the method name starts and ends with two underscores, in and of itself, does nothing special. The two underscores are there just to prevent name collision with other methods implemented by unsuspecting programmers.
Think of it this way: Python has a built-in called sum. You can define sum to be something else, but then you lose access to the built-in that sums things, right?

>>> sum(range(10))
45
>>> sum = 45
>>> sum(range(10))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'int' object is not callable

Often, you see beginners using sum as a variable name because they do not know sum is actually a built-in function. If the built-in was named __sum__ instead of sum, it would be much more difficult for you to override it by mistake, right? But it would also make it much less convenient to use sum...

However, for magic methods, we do not need their names to be super convenient to type, because you almost never type the name of a magic method. Therefore, Python decided that the magic methods would have names that start and end with two underscores, to make it less likely that someone would override one of those methods by accident!

All in all, dunder methods are just like any other method you have implemented, with the small exception that dunder methods can be called implicitly by the language. All Python operators, like +, ==, and in, rely on dunder methods to implement their behaviour. For example, when Python encounters the code value in container, it actually turns that into a call to the appropriate dunder method __contains__, which means that Python actually runs the expression container.__contains__(value). Let me show you:

>>> my_list = [2, 4, 6]
>>> 3 in my_list
False
>>> my_list.__contains__(3)
False
>>> 6 in my_list
True
>>> my_list.__contains__(6)
True

Therefore, when you want to overload certain operators to make them work in a custom way with your own objects, you need to implement the respective dunder methods. So, if you were to create your own type of container, you could implement the dunder method __contains__ to make sure that your containers could be on the right-hand side of an expression with the operator in.

# List of dunder methods and their interactions

As we have seen, dunder methods are (typically) called implicitly by the language... But when? The dunder method __init__ is called when initialising an instance of a class, but what about __str__, or __bool__, or other dunder methods?

The table that follows lists all dunder methods together with one or more (simplified) usage examples that would implicitly call the respective dunder method. This may include brief descriptions of situations where the relevant dunder method might be called, or example function calls that depend on that dunder method. These example situations may have caveats associated, so be sure to read the documentation on dunder methods whenever you want to play with a dunder method you are unfamiliar with.

The table also includes links to the documentation of the dunder method under the emoji 🔗. When available, relevant Pydon'ts are linked under the emoji 🗒️. Finally, the row order of the table matches the order in which these dunder methods are mentioned in the “Data Model” page of the documentation, which does not imply any dependency between the various dunder methods, nor does it imply a level of difficulty in understanding the methods.

| Dunder method | Usage / Needed for | Link |
| --- | --- | --- |
| __init__ | Initialise object | 🔗 |
| __new__ | Create object | 🔗 |
| __del__ | Destroy object | 🔗 |
| __repr__ | Compute “official” string representation / repr(obj) | 🗒️ 🔗 |
| __str__ | Pretty print object / str(obj) / print(obj) | 🗒️ 🔗 |
| __bytes__ | bytes(obj) | 🔗 |
| __format__ | Custom string formatting | 🗒️ 🔗 |
| __lt__ | obj < ... | 🔗 |
| __le__ | obj <= ... | 🔗 |
| __eq__ | obj == ... | 🔗 |
| __ne__ | obj != ... | 🔗 |
| __gt__ | obj > ... | 🔗 |
| __ge__ | obj >= ... | 🔗 |
| __hash__ | hash(obj) / object as dictionary key | 🔗 |
| __bool__ | bool(obj) / define Truthy/Falsy value of object | 🗒️ 🔗 |
| __getattr__ | Fallback for attribute access | 🔗 |
| __getattribute__ | Implement attribute access: obj.name | 🔗 |
| __setattr__ | Set attribute values: obj.name = value | 🔗 |
| __delattr__ | Delete attribute: del obj.name | 🔗 |
| __dir__ | dir(obj) | 🔗 |
| __get__ | Attribute access in descriptor | 🔗 |
| __set__ | Set attribute in descriptor | 🔗 |
| __delete__ | Attribute deletion in descriptor | 🔗 |
| __init_subclass__ | Initialise subclass | 🔗 |
| __set_name__ | Owner class assignment callback | 🔗 |
| __instancecheck__ | isinstance(obj, ...) | 🔗 |
| __subclasscheck__ | issubclass(obj, ...) | 🔗 |
| __class_getitem__ | Emulate generic types | 🔗 |
| __call__ | Emulate callables / obj(*args, **kwargs) | 🔗 |
| __len__ | len(obj) | 🔗 |
| __length_hint__ | Estimate length for optimisation purposes | 🔗 |
| __getitem__ | Access obj[key] | 🗒️ 🔗 |
| __setitem__ | obj[key] = ... or obj[] | 🗒️ 🔗 |
| __delitem__ | del obj[key] | 🗒️ 🔗 |
| __missing__ | Handle missing keys in dict subclasses | 🔗 |
| __iter__ | iter(obj) / for ... in obj (iterating over) | 🔗 |
| __reversed__ | reversed(obj) | 🔗 |
| __contains__ | ... in obj (membership test) | 🔗 |
| __add__ | obj + ... | 🔗 |
| __radd__ | ... + obj | 🔗 |
| __iadd__ | obj += ... | 🔗 |
| __sub__ 2 3 | obj - ... | 🔗 |
| __mul__ 2 3 | obj * ... | 🔗 |
| __matmul__ 2 3 | obj @ ... | 🔗 |
| __truediv__ 2 3 | obj / ... | 🔗 |
| __floordiv__ 2 3 | obj // ... | 🔗 |
| __mod__ 2 3 | obj % ... | 🔗 |
| __divmod__ 2 | divmod(obj, ...) | 🔗 |
| __pow__ 2 3 | obj ** ... | 🔗 |
| __lshift__ 2 3 | obj << ... | 🔗 |
| __rshift__ 2 3 | obj >> ... | 🔗 |
| __and__ 2 3 | obj & ... | 🔗 |
| __xor__ 2 3 | obj ^ ... | 🔗 |
| __or__ 2 3 | obj \| ... | 🔗 |
| __neg__ | -obj (unary) | 🔗 |
| __pos__ | +obj (unary) | 🔗 |
| __abs__ | abs(obj) | 🔗 |
| __invert__ | ~obj (unary) | 🔗 |
| __complex__ | complex(obj) | 🔗 |
| __int__ | int(obj) | 🔗 |
| __float__ | float(obj) | 🔗 |
| __index__ | Losslessly convert to integer | 🔗 |
| __round__ | round(obj) | 🔗 |
| __trunc__ | math.trunc(obj) | 🔗 |
| __floor__ | math.floor(obj) | 🔗 |
| __ceil__ | math.ceil(obj) | 🔗 |
| __enter__ | with obj (enter context manager) | 🔗 |
| __exit__ | with obj (exit context manager) | 🔗 |
| __await__ | Implement awaitable objects | 🔗 |
| __aiter__ | aiter(obj) | 🔗 |
| __anext__ | anext(obj) | 🔗 |
| __aenter__ | async with obj (enter async context manager) | 🔗 |
| __aexit__ | async with obj (exit async context manager) | 🔗 |

# Exploring a dunder method

Whenever I learn about a new dunder method, the first thing I do is to play around with it. Below, I share with you the three steps I follow when I'm exploring a new dunder method:

1. try to understand when the dunder method is called;
2. implement a stub for that method and trigger it with code; and
3. use the dunder method in a useful situation.

I will show you how I follow these steps with a practical example, the dunder method __missing__.

## What is the dunder method for?

What is the dunder method __missing__ for? The documentation for the dunder method __missing__ reads:

“Called by dict.__getitem__() to implement self[key] for dict subclasses when key is not in the dictionary.”

In other words, the dunder method __missing__ is only relevant for subclasses of dict, and it is called whenever we cannot find a given key in the dictionary.

## How to trigger the dunder method?

In what situations, that I can recreate, does the dunder method __missing__ get called? From the documentation text, it looks like we might need a dictionary subclass, and then we need to access a key that does not exist in that dictionary.
Thus, this should be enough to trigger the dunder method __missing__:

class DictSubclass(dict):
    def __missing__(self, key):
        print("Hello, world!")

my_dict = DictSubclass()
my_dict["this key isn't available"]
# Hello, world!

Notice how barebones the code above is: I just defined a method called __missing__ and made a print, just so I could check that __missing__ was being called. Now I am going to make a couple more tests, just to make sure that __missing__ is really only called when trying to get the value of a key that doesn't exist:

class DictSubclass(dict):
    def __missing__(self, key):
        print(f"Missing {key = }")

my_dict = DictSubclass()
my_dict[0] = True
if my_dict[0]:
    print("Key 0 was True.")  # Prints: Key 0 was True
my_dict[1]  # Prints: Missing key = 1

## Using the dunder method in a useful situation

Now that we have a clearer picture of when __missing__ comes into play, we can use it for something useful. For example, we can try implementing defaultdict based on __missing__. defaultdict is a container from the module collections, and it's just like a dictionary, except that it uses a factory to generate default values when keys are missing. For example, here is an instance of defaultdict that returns the value 0 by default:

from collections import defaultdict

olympic_medals = defaultdict(lambda: 0)  # Produce 0 by default
olympic_medals["Phelps"] = 28
print(olympic_medals["Phelps"])  # 28
print(olympic_medals["me"])  # 0

So, to reimplement defaultdict, we need to accept a factory function, we need to save that factory, and we need to use it inside __missing__. Just as a side note, notice that defaultdict not only returns the default value, but also assigns it to the key that wasn't there before:

>>> from collections import defaultdict
>>> olympic_medals = defaultdict(lambda: 0)  # Produce 0 by default
>>> olympic_medals
defaultdict(<function <lambda> at 0x000001F15404F1F0>, {})
>>> # Notice the underlying dictionary is empty -------^^
>>> olympic_medals["me"]
0
>>> olympic_medals
defaultdict(<function <lambda> at 0x000001F15404F1F0>, {'me': 0})
>>> # It's not empty anymore --------------------------^^^^^^^^^

Given all of this, here is a possible reimplementation of defaultdict:

class my_defaultdict(dict):
    def __init__(self, default_factory, **kwargs):
        super().__init__(**kwargs)
        self.default_factory = default_factory

    def __missing__(self, key):
        """Populate the missing key and return its value."""
        self[key] = self.default_factory()
        return self[key]

olympic_medals = my_defaultdict(lambda: 0)  # Produce 0 by default
olympic_medals["Phelps"] = 28
print(olympic_medals["Phelps"])  # 28
print(olympic_medals["me"])  # 0

# Conclusion

Here's the main takeaway of this Pydon't, for you, on a silver platter:

Dunder methods are specific methods that allow you to specify how your objects interact with the Python syntax, its keywords, operators, and built-ins.

This Pydon't showed you that:

• dunder methods are methods that are called implicitly by the Python language in specific situations;
• “dunder” comes from “double underscore”, referring to the two underscores that are the prefix and the suffix of all dunder methods;
• dunder methods are sometimes called magic methods because they are often called without explicit calls;
• learning about a new dunder method can be done through a series of small, simple steps; and
• dunder methods are regular Python methods of regular Python classes.

If you liked this Pydon't be sure to leave a reaction below and share this with your friends and fellow Pythonistas.
Also, don't forget to subscribe to the newsletter so you don't miss a single Pydon't!

1. I very much prefer the name “dunder method” over “magic method” because “magic method” makes it look like it's difficult to understand because there is wizardry going on! Spoiler: there isn't.
2. This dunder method also has a “right” version, with the same name but prefixed by an "r", and that is called when the object is on the right-hand side of the operation and the object on the left-hand side doesn't implement the behaviour. See __radd__ above.
3. This dunder method also has an “in-place” version, with the same name but prefixed by an "i", and that is called for augmented assignment with the given operator. See __iadd__ above.

I hope you learned something new! If you did, consider following in the footsteps of the readers who bought me a slice of pizza 🍕. Your small contribution helps me produce this content for free and without spamming you with annoying ads.
# American Institute of Mathematical Sciences

doi: 10.3934/jimo.2018175

## Optimal pricing and inventory strategies for introducing a new product based on demand substitution effects

1 Ingram School of Engineering, Texas State University, San Marcos, TX 78666, USA
2 School of Management and Engineering, Nanjing University, Nanjing, China 210093
3 Amazon, Seattle, WA 98109, USA

* Corresponding author: Jingquan Li

Received July 2017  Revised August 2018  Published December 2018

This paper studies a single-period inventory-pricing problem with two substitutable products, which is very important in the area of Operations Management but has received little attention. The proposed problem focuses on determining the optimal price of the existing product and the inventory level of the new product. Inspired by practice, the problem considers various pricing strategies for the existing product as well as the cross elasticity of demand between the existing and new products. A mathematical model has been developed for the different pricing strategies to maximize the expected profit. It has been proven that the objective function is concave and that a unique optimal solution exists. Different sets of computational examples are conducted to show that the optimal pricing and inventory strategy generated by the model can increase profits.

Citation: Zhijie Sasha Dong, Wei Chen, Qing Zhao, Jingquan Li. Optimal pricing and inventory strategies for introducing a new product based on demand substitution effects. Journal of Industrial & Management Optimization, doi: 10.3934/jimo.2018175

Figure: Effect of the extant product's price on the new product's order quantity
Figure: Effect of the extant product's price on the expected profit

Table: Effect of the inventory level of the existing product on the retailer's optimal policy and the expected profit

| Strategy | Variable | $Q_1$=160 | 170 | 180 | 190 | 200 | 210 | 220 | 230 | 240 | 250 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Unchanged | $Q_2$ | 840 | 830 | 820 | 810 | 800 | 800 | 800 | 800 | 800 | 800 |
| | $s_1$ ($) | 12 | 12 | 12 | 12 | 12 | 12 | 12 | 12 | 12 | 12 |
| | $EP$ ($) | 5280 | 5360 | 5440 | 5520 | 5600 | 5600 | 5600 | 5600 | 5600 | 5600 |
| | $RP$ ($) | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 101.0 | 102.0 | 103.0 | 104.0 | 105.0 |
| Decreased | $Q_2$ | 840 | 830 | 820 | 810 | 800 | 790 | 780 | 770 | 760 | 746.6 |
| | $s_1$ ($) | 12 | 12 | 12 | 12 | 12 | 11.7 | 11.4 | 11.1 | 10.8 | 10.5 |
| | $EP$ ($) | 5280 | 5360 | 5440 | 5520 | 5600 | 5617 | 5628 | 5633 | 5632 | 5611.4 |
| | $RP$ ($) | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 99.7 |

Table: Effect of the salvage value of the existing product on the retailer's optimal policy and the expected profit

| Strategy | Variable | $h_1$=-5 | -4 | -3 | -2 | -1 | 0 | 1 | 2 | 3 | 4 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Unchanged | $Q_2$ | 800 | 800 | 800 | 800 | 800 | 800 | 800 | 800 | 800 | 800 |
| | $s_1$ ($) | 12 | 12 | 12 | 12 | 12 | 12 | 12 | 12 | 12 | 12 |
| | $EP$ ($) | 5350 | 5400 | 5450 | 5500 | 5550 | 5600 | 5650 | 5700 | 5750 | 5800 |
| | $RP$ ($) | 105.0 | 105.0 | 105.0 | 105.0 | 105.0 | 105.0 | 105.0 | 105.0 | 105.0 | 105.0 |
| Decreased | $Q_2$ | 750 | 750 | 750 | 750 | 750 | 750 | 750 | 750 | 750 | 750 |
| | $s_1$ ($) | 10.5 | 10.5 | 10.5 | 10.5 | 10.5 | 10.5 | 10.5 | 10.5 | 10.5 | 10.5 |
| | $EP$ ($) | 5625 | 5625 | 5625 | 5625 | 5625 | 5625 | 5625 | 5625 | 5625 | 5625 |
| | $RP$ ($) | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 |
# Properties of Fluids

To understand the different properties of fluids, we first have to understand what is meant by the term "fluid". By definition, anything that can flow is a fluid. The water we drink and the air we breathe are both examples of fluids. Essentially, all liquids and gases are fluids. Fluid properties fall into three main categories: kinematic, thermodynamic and physical. Since fluids are, like solids, a form of matter, they have certain properties, and studying these properties in fluid mechanics helps us put fluids to useful purposes.

## Different Properties Of Fluids

Though each fluid differs from others in composition and specific qualities, there are some properties that every fluid shares. These properties can be broadly categorized as:

• Kinematic properties, such as velocity and acceleration.
• Thermodynamic properties, such as density, temperature, internal energy, pressure, specific volume and specific weight.
• Physical properties, such as appearance, colour and odour.

Here we will look at some of the basic properties of fluids.

### Density

The density of a fluid is its mass per unit volume, that is, the ratio of the two.

• The unit of density is kg/m³.
• The formula for density is: $\rho = \frac{Mass}{Volume}$

Density depends on a number of factors such as pressure, temperature and chemical composition; the effects of temperature and pressure are particularly noticeable.

| Fluid | Density (g/mL) |
|---|---|
| Hydrogen | 0.00009 |
| Helium | 0.0002 |
| Air | 0.0013 |
| Oxygen | 0.0014 |
| Carbon dioxide | 0.002 |
| Ethyl alcohol | 0.79 |
| Machine oil | 0.9 |
| Water | 1.00 |
| Seawater | 1.03 |
| Glycerol | 1.26 |
| Mercury | 13.55 |

### Temperature

Temperature is the property of a fluid that indicates how hot or cold it is. Temperature is measured in either kelvin, degrees Celsius or degrees Fahrenheit. The kelvin is the most commonly used unit because it is independent of the properties of any particular substance.

### Pressure

The pressure of a fluid is the force it applies per unit area.

• Pressure is denoted by the letter 'p'.
• It is calculated by the formula $p = \frac{Force}{Area}$.
• Its unit is N/m².

### Specific Volume

In fluid mechanics, the specific volume is the reciprocal of the density. It is the volume that a unit mass of fluid occupies.

• Specific volume is denoted by the letter 'v'.
• It is calculated by the formula $v = \frac{Volume}{Mass}$.
• Its unit is m³/kg.

## Practice Questions On Properties Of Fluids

Q1: What is a fluid?
Ans: Anything that can flow is categorized as a fluid.

Q2: Which states of matter are classed as fluids?
Ans: Liquids and gases are classed as fluids.

Q3: Give some examples of fluids.
Ans: Water, oxygen and molten lava are all examples of fluids.

Q4: Name the categories of fluid properties.
Ans: There are three categories of fluid properties: kinematic, thermodynamic and physical.

Q5: Name some kinematic properties of a fluid.
Ans: Velocity and acceleration.

Q6: Name some thermodynamic properties of a fluid.
Ans: Density, temperature, internal energy, pressure, specific volume and specific weight.

Q7: Name some physical properties of a fluid.
Ans: Appearance, colour and odour.

Q8: Arrange the following in ascending order of density: water, carbon dioxide, air, seawater.
Ans: Air < Carbon dioxide < Water < Seawater

Q9: Define specific volume.
Ans: The specific volume is the volume that a unit mass of fluid occupies.

Q10: What is the relation between specific volume and density?
Ans: Specific volume is the reciprocal of density.

Hope this has given you a brief overview of fluids, the properties of fluids, and the parameters that affect those properties. For a better understanding of thermodynamics, do read the related articles and try the practice questions above.
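To make the formulas concrete, here is a tiny worked example. The mass, volume, force and area values are made up for illustration; the point is only to show the defining ratios and the reciprocal relation between density and specific volume.

```python
# Illustrative example with made-up numbers: density, specific volume, and
# pressure computed straight from their defining formulas.

mass_kg = 0.206        # mass of a fluid sample (kg)        -- assumed value
volume_m3 = 0.0002     # volume of that sample (m^3)        -- assumed value

density = mass_kg / volume_m3          # rho = m / V, in kg/m^3
specific_volume = volume_m3 / mass_kg  # v = V / m, in m^3/kg (reciprocal of rho)

force_n = 150.0        # force applied on a surface (N)      -- assumed value
area_m2 = 0.05         # area the force is spread over (m^2) -- assumed value
pressure = force_n / area_m2           # p = F / A, in N/m^2 (pascals)

print(f"density         = {density:10.1f} kg/m^3")
print(f"specific volume = {specific_volume:10.6f} m^3/kg")
print(f"1 / density     = {1.0 / density:10.6f} m^3/kg  (same as specific volume)")
print(f"pressure        = {pressure:10.1f} N/m^2")
```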
## How to Add Up Powers of Numbers

Do you need to know the formula that tells you the sum of the first N counting numbers, each raised to a power? No, you do not. Not really. It can save a bit of time to know the sum of the numbers raised to the first power. Most mathematicians would know it, or be able to recreate it fast enough:

$\sum_{n = 1}^{N} n = 1 + 2 + 3 + \cdots + N = \frac{1}{2}N\left(N + 1\right)$

But there are similar formulas to add up, say, the counting numbers squared, or cubed, or so. And a toot on Mathstodon, the mathematics-themed instance of the social network Mastodon, makes me aware of a cute paper about this. In it Dr Alessandro Mariani describes A simple mnemonic to compute sums of powers. It's a neat one. Mariani describes a way to use knowledge of the sum of numbers to the first power to generate a formula for the sum of squares. And then to use the sum-of-squares formula to generate the sum of cubes. The sum of cubes then lets you get the sum of fourth powers. And so on. This takes a while to do if you're interested in the sum of twentieth powers. But do you know how many times you'll ever need to generate that formula?

Anyway, as Mariani notes, this sort of thing is useful if you find yourself at a mathematics competition. Or some other event where you can't just have the computer calculate this stuff. Mariani's process is a great one. Like many mnemonics it doesn't make literal sense. It expects one to integrate and differentiate polynomials. Anyone likely to be interested in a formula for the sums of twelfth powers knows how to do those in their sleep. But they're integrating and differentiating polynomials for which, in context, the integrals and derivatives don't exist. Or at least don't mean anything. That's all right. If all you want is the right answer, it's okay to get there by a wrong method. At least if you verify the answer is right, which the last section of Mariani's paper does. So, give it a read if you'd like to see a neat mathematical trick leading to a maybe useful result.
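Here is a sketch of that integrate-and-correct idea in code. To be clear, this is my own rendering of the general trick rather than Mariani's exact mnemonic: integrate p times the previous formula, then add a multiple of N chosen so the sum comes out to 1 when N = 1.

```python
import sympy as sp

N = sp.symbols('N')

def sum_of_powers(p):
    """Return a closed-form polynomial in N equal to 1^p + 2^p + ... + N^p.

    Integrate-and-correct recursion (my rendering, not necessarily Mariani's
    exact mnemonic): S_p(N) = p * integral of S_{p-1}(N), plus c*N with c
    chosen so that S_p(1) = 1.
    """
    if p == 0:
        return N                      # 1 + 1 + ... + 1, N times
    previous = sum_of_powers(p - 1)
    candidate = p * sp.integrate(previous, N)
    c = 1 - candidate.subs(N, 1)      # fix the linear term so S_p(1) = 1
    return sp.expand(candidate + c * N)

for p in range(1, 5):
    print(f"sum of {p}-th powers:", sp.factor(sum_of_powers(p)))

# Quick check against a brute-force sum:
assert sum_of_powers(3).subs(N, 10) == sum(k**3 for k in range(1, 11))
```

Running it reproduces the familiar formulas, for example N(N+1)(2N+1)/6 for the squares and N²(N+1)²/4 for the cubes.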
## My Little 2021 Mathematics A-to-Z: Ordinary Differential Equations

Mr Wu, my Singapore Maths Tuition friend, has offered many fine ideas for A-to-Z topics. This week's is another of them, and I'm grateful for it.

# Ordinary Differential Equations

As a rule, if you can do something with a number, you can do the same thing with a function. Not always, of course, but the exceptions are fewer than you might imagine. I'll start with one of those things you can do to both.

A powerful thing we learn in (high school) algebra is that we can use a number without knowing what it is. We give it a name like 'x' or 'y' and describe what we find interesting about it. If we want to know what it is, we (usually) find some equation or set of equations and find what value of x could make that true. If we study enough (college) mathematics we learn its equivalent in functions. We give something a name like f or g or Ψ and describe what we know about it. And then try to find functions which make that true.

There are a couple common types of equation for these not-yet-known functions. The kind you expect to learn as a mathematics major involves differential equations. These are ones where your equation (or equations) involve derivatives of the not-yet-known f. A derivative describes the rate at which something changes. If we imagine the original f is a position, the derivative is velocity. Derivatives can have derivatives also; this second derivative would be the acceleration. And then second derivatives can have derivatives also, and so on, into infinity. When an equation involves a function and its derivatives we have a differential equation.

(The second common type is the integral equation, using a function and its integrals. And a third involves both derivatives and integrals. That's known as an integro-differential equation, and isn't life complicated enough?)

Differential equations themselves naturally divide into two kinds, ordinary and partial. They serve different roles. With an ordinary differential equation we can usually describe the change from knowing only the current situation. (This may include velocities and accelerations and stuff. We could ask what the velocity at an instant means. But never mind that here.) A partial differential equation usually bases the change where you are on the neighborhood of your location. If you see holes you can pick in that, you're right. The precise difference is about the independent variables. If the function f has more than one independent variable, it's possible to take a partial derivative. This describes how f changes if one variable changes while the others stay fixed. If the function f has only the one independent variable, you can only take ordinary derivatives. So you get an ordinary differential equation.

But let's speak casually here. If what you're studying can be fully represented with a dashboard readout? Like, an ordered list of positions and velocities and stuff? You probably have an ordinary differential equation. If you need a picture with a three-dimensional surface or a color map to understand it? You probably have a partial differential equation.

One more metaphor. If you can imagine the thing you're modeling as a marble rolling around on a hilly table? Odds are that's an ordinary differential equation. And that representation covers a lot of interesting problems. Marbles on hills, obviously. But also rigid pendulums: we can treat the angle a pendulum makes, and the rate at which that angle changes, as dimensions of space. The pendulum's swinging then matches exactly a marble rolling around the right hilly table. Planets in space, too. We need more dimensions — three space dimensions and three velocity dimensions — for each planet. So, like, the Sun-Earth-and-Moon system would be rolling around a hilly table with 18 dimensions. That's all right. We don't have to draw it. The mathematics works about the same. Just longer.

[ To be precise we need three momentum dimensions for each orbiting body. If they're not changing mass appreciably, and not moving too near the speed of light, velocity is just momentum times a constant number, so we can use whichever is easier to visualize. ]

We mostly work with ordinary differential equations of either the first or the second order. First order means we have first derivatives in the equation, but never have to deal with more than the original function and its first derivative. Second order means we have second derivatives in the equation, but never have to deal with more than the original function or its first or second derivatives. You'll never guess what a "third order" differential equation is unless you have experience in reading words. There are some reasons we stick to these low orders like first and second, though. One is that we know of good techniques for solving most first- and second-order ordinary differential equations. For higher-order differential equations we often use techniques that find a related normal old polynomial.
Its solution helps with the thing we want. Or we break a high-order differential equation into a set of low-order ones. So yes, again, we search for answers where the light is good. But the good light covers many things we like to look at. There’s simple harmonic motion, for example. It covers pendulums and springs and perturbations around stable equilibriums and all. This turns out to cover so many problems that, as a physics major, you get a little sick of simple harmonic motion. There’s the Airy function, which started out to describe the rainbow. It turns out to describe particles trapped in a triangular quantum well. The van der Pol equation, about systems where a small oscillation gets energy fed into it while a large oscillation gets energy drained. All kinds of exponential growth and decay problems. Very many functions where pairs of particles interact. This doesn’t cover everything we would like to do. That’s all right. Ordinary differential equations lend themselves to numerical solutions. It requires considerable study and thought to do these numerical solutions well. But this doesn’t make the subject unapproachable. Few of us could animate the “Pink Elephants on Parade” scene from Dumbo. But could you draw a flip book of two stick figures tossing a ball back and forth? If you’ve had a good rest, a hearty breakfast, and have not listened to the news yet today, so you’re in a good mood? The flip book ball is a decent example here, too. The animation will look good if the ball moves about the “right” amount between pages. A little faster when it’s first thrown, a bit slower as it reaches the top of its arc, a little faster as it falls back to the catcher. The ordinary differential equation tells us how fast our marble is rolling on this hilly table, and in what direction. So we can calculate how far the marble needs to move, and in what direction, to make the next page in the flip book. Almost. The rate at which the marble should move will change, in the interval between one flip-book page and the next. The difference, the error, may not be much. But there is a difference between the exact and the numerical solution. Well, there is a difference between a circle and a regular polygon. We have many ways of minimizing and estimating and controlling the error. Doing that is what makes numerical mathematics the high-paid professional industry it is. Our game of catch we can verify by flipping through the book. The motion of four dozen planets and moons attracting one another is harder to be sure we calculate it right. I said at the top that most anything one can do with numbers one can do with functions also. I would like to close the essay with some great parallel. Like, the way that trying to solve cubic equations made people realize complex numbers were good things to have. I don’t have a good example like that for ordinary differential equations, where the study expanded our ideas of what functions could be. Part of that is that complex numbers are more accessible than the stranger functions. Part of that is that complex numbers have a story behind them. The story features titanic figures like Gerolamo Cardano, Niccolò Tartaglia and Ludovico Ferrari. We see some awesome and weird personalities in 19th century mathematics. But their fights are generally harder to watch from the sidelines and cheer on. And part is that it’s easier to find pop historical treatments of the kinds of numbers. The historiography of what a “function” is is a specialist occupation. 
But I can think of a possible case. A tool that’s sometimes used in solving ordinary differential equations is the “Dirac delta function”. Yes, that Paul Dirac. It’s a weird function, written as $\delta(x)$. It’s equal to zero everywhere, except where $x$ is zero. When $x$ is zero? It’s … we don’t talk about what it is. Instead we talk about what it can do. The integral of that Dirac delta function times some other function can equal that other function at a single point. It strains credibility to call this a function the way we speak of, like, $sin(x)$ or $\sqrt{x^2 + 4}$ being functions. Many will classify it as a distribution instead. But it is so useful, for a particular kind of problem, that it’s impossible to throw away. So perhaps the parallels between numbers and functions extend that far. Ordinary differential equations can make us notice kinds of functions we would not have seen otherwise. And with this — I can see the much-postponed end of the Little 2021 Mathematics A-to-Z! You can read all my entries for 2021 at this link, and if you’d like can find all my A-to-Z essays here. How will I finish off the shortest yet most challenging sequence I’ve done yet? Will it be yellow and equivalent to the Axiom of Choice? Answers should come, in a week, if all starts going well. ## From my Sixth A-to-Z: Taylor Series By the time of 2019 and my sixth A-to-Z series , I had some standard narrative tricks I could deploy. My insistence that everything is polynomials, for example. Anecdotes from my slight academic career. A prose style that emphasizes what we do with the idea of something rather than instructions. That last comes from the idea that if you wanted to know how to compute a Taylor series you’d just look it up on Mathworld or Wikipedia or whatnot. The thing a pop mathematics blog can do is give some reason that you’d want to know how to compute a Taylor series. I regret talking about functions that break Taylor series, though. I have to treat these essays as introducing the idea of a Taylor series to someone who doesn’t know anything about them. And it’s bad form to teach how stuff doesn’t work too close to teaching how it does work. Readers tend to blur what works and what doesn’t together. Still, $f(x) = \exp(-\frac{1}{x^2})$ is a really neat weird function and it’d be a shame to let it go completely unmentioned. Today’s A To Z term was nominated by APMA, author of the Everybody Makes DATA blog. It was a topic that delighted me to realize I could explain. Then it started to torment me as I realized there is a lot to explain here, and I had to pick something. So here’s where things ended up. # Taylor Series. In the mid-2000s I was teaching at a department being closed down. In its last semester I had to teach Computational Quantum Mechanics. The person who’d normally taught it had transferred to another department. But a few last majors wanted the old department’s version of the course, and this pressed me into the role. Teaching a course you don’t really know is a rush. It’s a semester of learning, and trying to think deeply enough that you can convey something to students. This while all the regular demands of the semester eat your time and working energy. And this in the leap of faith that the syllabus you made up, before you truly knew the subject, will be nearly enough right. And that you have not committed to teaching something you do not understand. So around mid-course I realized I needed to explain finding the wave function for a hydrogen atom with two electrons. 
The wave function is this probability distribution. You use it to find things like the probability a particle is in a certain area, or has a certain momentum. Things like that. A proton with one electron is as much as I’d ever done, as a physics major. We treat the proton as the center of the universe, immobile, and the electron hovers around that somewhere. Two electrons, though? A thing repelling your electron, and repelled by your electron, and neither of those having fixed positions? What the mathematics of that must look like terrified me. When I couldn’t procrastinate it farther I accepted my doom and read exactly what it was I should do. It turned out I had known what I needed for nearly twenty years already. Got it in high school. Of course I’m discussing Taylor Series. The equations were loaded down with symbols, yes. But at its core, the important stuff, was this old and trusted friend. The premise behind a Taylor Series is even older than that. It’s universal. If you want to do something complicated, try doing the simplest thing that looks at all like it. And then make that a little bit more like you want. And then a bit more. Keep making these little improvements until you’ve got it as right as you truly need. Put that vaguely, the idea describes Taylor series just as well as it describes making a video game or painting a state portrait. We can make it more specific, though. A series, in this context, means the sum of a sequence of things. This can be finitely many things. It can be infinitely many things. If the sum makes sense, we say the series converges. If the sum doesn’t, we say the series diverges. When we first learn about series, the sequences are all numbers. $1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \cdots$, for example, which diverges. (It adds to a number bigger than any finite number.) Or $1 + \frac{1}{2^2} + \frac{1}{3^2} + \frac{1}{4^2} + \cdots$, which converges. (It adds to $\frac{1}{6}\pi^2$.) In a Taylor Series, the terms are all polynomials. They’re simple polynomials. Let me call the independent variable ‘x’. Sometimes it’s ‘z’, for the reasons you would expect. (‘x’ usually implies we’re looking at real-valued functions. ‘z’ usually implies we’re looking at complex-valued functions. ‘t’ implies it’s a real-valued function with an independent variable that represents time.) Each of these terms is simple. Each term is the distance between x and a reference point, raised to a whole power, and multiplied by some coefficient. The reference point is the same for every term. What makes this potent is that we use, potentially, many terms. Infinitely many terms, if need be. Call the reference point ‘a’. Or if you prefer, x0. z0 if you want to work with z’s. You see the pattern. This ‘a’ is the “point of expansion”. The coefficients of each term depend on the original function at the point of expansion. The coefficient for the term that has $(x - a)$ is the first derivative of f, evaluated at a. The coefficient for the term that has $(x - a)^2$ is the second derivative of f, evaluated at a (times a number that’s the same for the squared-term for every Taylor Series). The coefficient for the term that has $(x - a)^3$ is the third derivative of f, evaluated at a (times a different number that’s the same for the cubed-term for every Taylor Series). You’ll never guess what the coefficient for the term with $(x - a)^{122,743}$ is. Nor will you ever care. The only reason you would wish to is to answer an exam question. 
The instructor will, in that case, have a function that’s either the sine or the cosine of x. The point of expansion will be 0, $\frac{\pi}{2}$, $\pi$, or $\frac{3\pi}{2}$. Otherwise you will trust that this is one of the terms of $(x - a)^n$, ‘n’ representing some counting number too great to be interesting. All the interesting work will be done with the Taylor series either truncated to a couple terms, or continued on to infinitely many. What a Taylor series offers is the chance to approximate a function we’re genuinely interested in with a polynomial. This is worth doing, usually, because polynomials are easier to work with. They have nice analytic properties. We can automate taking their derivatives and integrals. We can set a computer to calculate their value at some point, if we need that. We might have no idea how to start calculating the logarithm of 1.3. We certainly have an idea how to start calculating $0.3 - \frac{1}{2}(0.3^2) + \frac{1}{3}(0.3^3)$. (Yes, it’s 0.3. I’m using a Taylor series with a = 1 as the point of expansion.) The first couple terms tell us interesting things. Especially if we’re looking at a function that represents something physical. The first two terms tell us where an equilibrium might be. The next term tells us whether an equilibrium is stable or not. If it is stable, it tells us how perturbations, points near the equilibrium, behave. The first couple terms will describe a line, or a quadratic, or a cubic, some simple function like that. Usually adding more terms will make this Taylor series approximation a better fit to the original. There might be a larger region where the polynomial and the original function are close enough. Or the difference between the polynomial and the original function will be closer together on the same old region. We would really like that region to eventually grow to the whole domain of the original function. We can’t count on that, though. Roughly, the interval of convergence will stretch from ‘a’ to wherever the first weird thing happens. Weird things are, like, discontinuities. Vertical asymptotes. Anything you don’t like dealing with in the original function, the Taylor series will refuse to deal with. Outside that interval, the Taylor series diverges and we just can’t use it for anything meaningful. Which is almost supernaturally weird of them. The Taylor series uses information about the original function, but it’s all derivatives at a single point. Somehow the derivatives of, say, the logarithm of x around x = 1 give a hint that the logarithm of 0 is undefinable. And so they won’t help us calculate the logarithm of 3. Things can be weirder. There are functions that just break Taylor series altogether. Some are obvious. A function needs lots of derivatives at a point to have a good Taylor series approximation. So, many fractal curves won’t have a Taylor series approximation. These curves are all corners, points where they aren’t continuous or where derivatives don’t exist. Some are obviously designed to break Taylor series approximations. We can make a function that follows different rules if x is rational than if x is irrational. There’s no approximating that, and you’d blame the person who made such a function, not the Taylor series. It can be subtle. The function defined by the rule $f(x) = \exp{-\frac{1}{x^2}}$, with the note that if x is zero then f(x) is 0, seems to satisfy everything we’d look for. It’s a function that’s mostly near 1, that drops down to being near zero around where x = 0. 
But its Taylor series expansion around a = 0 is a horizontal line always at 0. The interval of convergence can be a single point, challenging our idea of what an interval is. That’s all right. If we can trust that we’re avoiding weird parts, Taylor series give us an outstanding new tool. Grant that the Taylor series describes a function with the same rule as our original function. The Taylor series is often easier to work with, especially if we’re working on differential equations. We can automate, or at least find formulas for, taking the derivative of a polynomial. Or adding together derivatives of polynomials. Often we can attack a differential equation too hard to solve otherwise by supposing the answer is a polynomial. This is essentially what that quantum mechanics problem used, and why the tool was so familiar when I was in a strange land. Roughly. What I was actually doing was treating the function I wanted as a power series. This is, like the Taylor series, the sum of a sequence of terms, all of which are $(x - a)^n$ times some coefficient. What makes it not a Taylor series is that the coefficients weren’t the derivatives of any function I knew to start. But the experience of Taylor series trained me to look at functions as things which could be approximated by polynomials. This gives us the hint to look at other series that approximate interesting functions. We get a host of these, with names like Laurent series and Fourier series and Chebyshev series and such. Laurent series look like Taylor series but we allow powers to be negative integers as well as positive ones. Fourier series do away with polynomials. They instead use trigonometric functions, sines and cosines. Chebyshev series build on polynomials, but not on pure powers. They’ll use orthogonal polynomials. These behave like perpendicular directions do. That orthogonality makes many numerical techniques behave better. The Taylor series is a great introduction to these tools. Its first several terms have good physical interpretations. Its calculation requires tools we learn early on in calculus. The habits of thought it teaches guides us even in unfamiliar territory. And I feel very relieved to be done with this. I often have a few false starts to an essay, but those are mostly before I commit words to text editor. This one had about four branches that now sit in my scrap file. I’m glad to have a deadline forcing me to just publish already. Thank you, though. This and the essays for the Fall 2019 A to Z should be at this link. Next week: the letters U and V. And all past A to Z essays ought to be at this link. ## My Little 2021 Mathematics A-to-Z: Monte Carlo This week’s topic is one of several suggested again by Mr Wu, blogger and Singaporean mathematics tutor. He’d suggested several topics, overlapping in their subject matter, and I was challenged to pick one. # Monte Carlo. The reputation of mathematics has two aspects: difficulty and truth. Put “difficulty” to the side. “Truth” seems inarguable. We expect mathematics to produce sound, deductive arguments for everything. And that is an ideal. But we often want to know things we can’t do, or can’t do exactly. We can handle that often. If we can show that a number we want must be within some error range of a number we can calculate, we have a “numerical solution”. If we can show that a number we want must be within every error range of a number we can calculate, we have an “analytic solution”. There are many things we’d like to calculate and can’t exactly. 
Many of them are integrals, which seem like they should be easy. We can represent any integral as finding the area, or volume, of a shape. The trick is that there’s only a few shapes with volumes we can find exact formulas for. You may remember the area of a triangle or a parallelogram. You have no idea what the area of a regular nonagon is. The trick we rely on is to approximate the shape we want with shapes we know formulas for. This usually gives us a numerical solution. If you’re any bit devious you’ve had the impulse to think of a shape that can’t be broken up like that. There are such things, and a good swath of mathematics in the late 19th and early 20th centuries was arguments about how to handle them. I don’t mean to discuss them here. I’m more interested in the practical problems of breaking complicated shapes up into simpler ones and adding them all together. One catch, an obvious one, is that if the shape is complicated you need a lot of simpler shapes added together to get a decent approximation. Less obvious is that you need way more shapes to do a three-dimensional volume well than you need for a two-dimensional area. That’s important because you need even way-er more to do a four-dimensional hypervolume. And more and more and more for a five-dimensional hypervolume. And so on. That matters because many of the integrals we’d like to work out represent things like the energy of a large number of gas particles. Each of those particles carries six dimensions with it. Three dimensions describe its position and three dimensions describe its momentum. Worse, each particle has its own set of six dimensions. The position of particle 1 tells you nothing about the position of particle 2. So you end up needing ridiculously, impossibly many shapes to get even a rough approximation. With no alternative, then, we try wisdom instead. We train ourselves to think of deductive reasoning as the only path to certainty. By the rules of deductive logic it is. But there are other unshakeable truths. One of them is randomness. We can show — by deductive logic, so we trust the conclusion — that the purely random is predictable. Not in the way that lets us say how a ball will bounce off the floor. In the way that we can describe the shape of a great number of grains of sand dropped slowly on the floor. The trick is one we might get if we were bad at darts. If we toss darts at a dartboard, badly, some will land on the board and some on the wall behind. How many hit the dartboard, compared to the total number we throw? If we’re as likely to hit every spot of the wall, then the fraction that hit the dartboard, times the area of the wall, should be about the area of the dartboard. So we can do something equivalent to this dart-throwing to find the volumes of these complicated, hyper-dimensional shapes. It’s a kind of numerical integration. It isn’t particularly sensitive to how complicated the shape is, though. It takes more work to find the volume of a shape with more dimensions, yes. But it takes less more-work than the breaking-up-into-known-shapes method does. There are wide swaths of mathematics and mathematical physics where this is the best way to calculate the integral. This bit that I’ve described is called “Monte Carlo integration”. The “integration” part of the name because that’s what we started out doing. To call it “Monte Carlo” implies either the method was first developed there or the person naming it was thinking of the famous casinos. The case is the latter. 
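The dartboard picture translates into code almost directly. Here is a toy sketch of my own, using a circular "dartboard" whose area we already know, so the estimate can be checked: throw random points at a square wall, count the fraction that land on the board, and multiply by the wall's area.

```python
import random

# Toy Monte Carlo integration, in the dartboard spirit described above.
# The "dartboard" is a circle of radius 1 centered in a 2-by-2 square wall,
# so the exact answer is pi and we can see how close the estimate comes.

def estimate_circle_area(throws=1_000_000, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(throws):
        x = rng.uniform(-1.0, 1.0)   # a dart lands somewhere on the square wall
        y = rng.uniform(-1.0, 1.0)
        if x * x + y * y <= 1.0:     # did it land on the circular dartboard?
            hits += 1
    wall_area = 4.0                  # the 2-by-2 wall
    return wall_area * hits / throws

print(estimate_circle_area())        # roughly 3.14, wobbling with the sample size
```

The same recipe works in any number of dimensions; only the cost of drawing the random points grows, which is the property that makes it attractive for those high-dimensional integrals.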
Monte Carlo methods as we know them come from Stanislaw Ulam, mathematical physicist working on atomic weapon design. While ill, he got to playing the game of Canfield solitaire, about which I know nothing except that Stanislaw Ulam was playing it in 1946 while ill. He wondered what the chance was that a given game was winnable. The most practical approach was sampling: set a computer to play a great many games and see what fraction of them were won. (The method comes from Ulam and John von Neumann. The name itself comes from their colleague Nicholas Metropolis.)

There are many Monte Carlo methods, with integration being only one very useful one. They hold in common that they're built on randomness. We try calculations, often simple ones, many times over with many different possible values. And the regularity, the predictability, of randomness serves us. The results come together to an average that is close to the thing we do want to know.

I hope to return in a week with a fresh A-to-Z essay. This week's essay, and all the essays for the Little Mathematics A-to-Z, should be at this link. And all of this year's essays, and all A-to-Z essays from past years, should be at this link. And if you'd like to shape the next several essays, please let me know of some topics worth writing about! Thank you for reading.

## How to Impress Someone by Multiplying Certain Big Numbers in Your Head

Mental arithmetic is fun. It has some use, yes. It's always nice when you're doing work to have some idea what a reasonable answer looks like. But mostly it's fun to be able to spot, oh, 24 times 16, that's got to be a little under 400.

I ran across this post, by Math1089, with a neat trick for certain multiplications. It's limited in scope. Most mental-arithmetic tricks are; they have certain problems they do well, and you need to remember a grab bag that covers enough to be useful. Here, the case is multiplying two numbers that start the same way, and whose ends are complements. That is, the ends add together to 10. (Or to 100, or 1000, or some other power of ten.) So, for example, you could use this trick to multiply together 41 and 49, or 64 and 66. (Or, if you needed, to multiply 2038 by 2062.)

It won't directly solve 41 times 39, though, nor 64 times 65. But you can hack it together. 64 times 65 is 64 times 66 — you have a trick for that — minus 64. 41 times 39 is tougher, but it's 41 times 49 minus 41 times 10. 41 times 10 is easy to do. This is what I mean by learning a grab bag of tricks.

You won't outpace someone who has their calculator out and ready to go. But you might outpace someone who has to get their calculator out, and you'll certainly impress them. So it's clever, and not hard to learn. If you feel like testing your high-school algebra prowess you can even work out why this trick works, and why it has the limits it does.
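The algebra is short enough to check by machine. The function below is my own restatement of the pattern rather than Math1089's presentation: if two numbers share the same leading part a, and their trailing k-digit parts b and c add up to 10^k, then the product is a·(a+1) shifted left by 2k digits, plus b·c.

```python
# Sketch of the "same start, complementary ends" trick, restated in my own
# terms (not Math1089's exact presentation). For numbers a*10^k + b and
# a*10^k + c with b + c = 10^k, the product is a*(a+1)*10^(2k) + b*c.

def trick_multiply(x, y, k=1):
    """Multiply x and y using the trick; the last k digits are the 'ends'."""
    shift = 10 ** k
    a, b = divmod(x, shift)
    a2, c = divmod(y, shift)
    if a != a2 or b + c != shift:
        raise ValueError("trick does not apply: need same start and ends summing to 10^k")
    return a * (a + 1) * shift * shift + b * c

print(trick_multiply(41, 49))           # 4*5 = 20, then 1*9 = 09  ->  2009
print(trick_multiply(64, 66))           # 6*7 = 42, then 4*6 = 24  ->  4224
print(trick_multiply(2038, 2062, k=2))  # 20*21 = 420, then 38*62 = 2356 -> 4202356

# And the "hack it together" cases from the post:
print(trick_multiply(64, 66) - 64)       # 64 * 65 = 4160
print(trick_multiply(41, 49) - 41 * 10)  # 41 * 39 = 1599
```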
## How to Make a Straight Line in Different Circumstances

I no longer remember how I came to be aware of this paper. No matter. Here is Paul Rojas's The straight line, the catenary, the brachistochrone, the circle, and Fermat. It is about a set of optimization problems, in this case, attempts to find the shortest path something can follow. The talk of the catenary and the brachistochrone gives away that this is a calculus paper. The catenary and the brachistochrone are some of the oldest problems in calculus as we know it. The catenary is the problem of what shape a weighted chain takes under gravity. The brachistochrone is the problem of what path carries a falling bead from one point to another in the least time; Johann Bernoulli famously solved it by treating the path like a beam of light moving through regions with different indexes of refraction. (As in, through films of glass or water or such.) Straight lines and circles we've heard of from other places.

The paper relies on calculus, so if you're not comfortable with that, well, skim over the lines with $\int$ symbols. Rojas discusses the ways that we can treat all these different shapes as solutions of related, very similar problems. And there's some talk about calculating approximate solutions. There is special delight in this, as these are problems that can be done by an analog computer. You can build a tool to do some of these calculations. And I do mean "you"; the approach is to build a box, like, the sort of thing you can do by cutting up plastic sheets and gluing them together and setting toothpicks or wires on them. Then dip the model into a soap solution. Lift it out slowly and take a good picture of the soapy surface. This is not as quick, or as precise, as fiddling with a Matlab or Octave or Mathematica simulation. But it can be much more fun.

## How To Find A Logarithm Without Much Computing Power

I don't have actual words committed to text editor for this year's little A-to-Z just yet. Soon, though. Rather than leave things completely silent around here, I'd like to re-share an old sequence about something which delighted me. A long while ago I read Edmund Callis Berkeley's Giant Brains: Or Machines That Think. It's a book from 1949 about numerical computing. And it explained just how to really calculate logarithms.

Anyone who knows calculus knows, in principle, how to calculate a logarithm. I mean as in how to get a numerical approximation to whatever the log of 25 is. If you didn't have a calculator that did logarithms, but you could reliably multiply and add numbers? There's a polynomial, one of a class known as Taylor Series, that — if you add together infinitely many terms — gives the exact value of a logarithm. If you only add a finite number of terms together, you get an approximation. That suffices, in principle. In practice, you might have to calculate so many terms and add so many things together you forget why you cared what the log of 25 was. What you want is a way to calculate them swiftly. Ideally, with as few calculations as possible. So here's a set of articles I wrote, based on Berkeley's book, about how to do that.

Machines That Think About Logarithms sets out the question. It includes some talk about the kinds of logarithms and why we use each of them.

Machines That Do Something About Logarithms sets out principles. These are all things that are generically true about logarithms, including about calculating logarithms.

Machines That Give You Logarithms explains how to use those tools. And lays out how to get the base-ten logarithm for most numbers that you would like with a tiny bit of computing work. I showed off an example of getting the logarithm of 47.2286 using only three divisions, four additions, and a little bit of looking up stuff.

Without Machines That Think About Logarithms closes it out. One catch with the algorithm described is that you need to work out some logarithms ahead of time and have them on hand, ready to look up. They're not ones that you care about particularly for any problem, but they make it easier to find the logarithm you do want. This essay talks about which logarithms to calculate, in order to get the most accurate results for the logarithm you want, using the least custom work possible.

And that's the series! With that, in principle, you have a good foundation in case you need to reinvent numerical computing.
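I can't reproduce Berkeley's exact scheme here, but a sketch in the same spirit shows the flavor: keep a tiny table of logarithms worked out ahead of time, and reduce the number you care about to a product of table entries using nothing but division and addition. The particular table entries and the final small-number correction below are my own assumptions for illustration, not the book's procedure.

```python
import math

# A sketch in the spirit of the series described above, not Berkeley's exact
# scheme: the table entries and the last correction step are my assumptions.

# Logarithms "worked out ahead of time, ready to look up":
TABLE = [(2.0, math.log10(2.0)),
         (1.5, math.log10(1.5)),
         (1.1, math.log10(1.1)),
         (1.01, math.log10(1.01)),
         (1.001, math.log10(1.001))]

def log10_by_table(x):
    """Approximate log10(x), x > 0, using only division, addition, and lookups."""
    total = 0.0
    while x >= 10.0:          # range reduction: pull out powers of ten
        x /= 10.0
        total += 1.0
    while x < 1.0:
        x *= 10.0
        total -= 1.0
    for factor, log_factor in TABLE:
        while x >= factor:    # peel off each table factor while it still fits
            x /= factor
            total += log_factor
    # What's left is 1 + (something tiny); log10(1 + e) is about e / ln(10).
    return total + (x - 1.0) / math.log(10.0)

print(log10_by_table(47.2286))   # close to the library value printed next
print(math.log10(47.2286))
```

This version spends more divisions than the three the essay boasts of, because the table is chosen naively; choosing the table well is exactly what the last article in the series is about.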
And that’s the series! With that, in principle, you have a good foundation in case you need to reinvent numerical computing. ## No, You Can’t Say What 6/2(1+2) Equals I am made aware that a section of Twitter argues about how to evaluate an expression. There may be more than one of these going around, but the expression I’ve seen is: $6 \div 2\left(1 + 2\right) =$ Many people feel that the challenge is knowing the order of operations. This is reasonable. That is, that to evaluate arithmetic, you evaluate terms inside parentheses first. Then terms within exponentials. Then multiplication and division. Then addition and subtraction. This is often abbreviated as PEMDAS, and made into a mnemonic like “Please Excuse My Dear Aunt Sally”. That is fine as far as it goes. Many people likely start by adding the 1 and 2 within the parentheses, and that’s fair. Then they get: $6 \div 2(3) =$ Putting two quantities next to one another, as the 2 and the (3) are, means to multiply them. And then comes the disagreement: does this mean take $6\div 2$ and multiply that by 3, in which case the answer is 9? Or does it mean take 6 divided by $2\cdot 3$, in which case the answer is 1? And there is the trick. Depending on which way you choose to parse these instructions you get different answers. But you don’t get to do that, not and have arithmetic. So the answer is that this expression has no answer. The phrasing is ambiguous and can’t be resolved. I’m aware there are people who reject this answer. They picked up along the line somewhere a rule like “do multiplication and division from left to right”. And a similar rule for addition and subtraction. This is wrong, but understandable. The left-to-right “rule” is a decent heuristic, a guide to how to attack a problem too big to do at once. The rule works because multiplication-and-division associates. The quantity a-times-b, multiplied by c, has to be the same number as the quantity a multiplied by the quantity b-times-c. The rule also works for addition-and-subtraction because addition associates too. The quantity a-plus-b, plus the quantity c, has to be the same as the quantity a plus the quantity b-plus-c. This left-to-right “rule”, though, just helps you evaluate a meaningful expression. It would be just as valid to do all the multiplications-and-divisions from right-to-left. If you get different values working left-to-right from right-to-left, you have a meaningless expression. But you also start to see why mathematicians tend to avoid the $\div$ symbol. We understand, for example, $a \div b$ to mean $a \cdot \frac{1}{b}$. Carry that out and then there’s no ambiguity about $6 \cdot \frac{1}{2} \cdot 3 =$ I understand the desire to fix an ambiguity. Believe me. I’m a know-it-all; I only like ambiguities that enable logic-based jokes. (“Would you like ice cream or cake?” “Yes.”) But the rules that could remove the ambiguity in $6\div 2(1 + 2)$ also remove associativity from multiplication. Once you do that, you’re not doing arithmetic anymore. Resist the urge. (And the mnemonic is a bit dangerous. We can say division has the same priority as multiplication, but we also say “multiplication” first. I bet you can construct an ambiguous expression which would mislead someone who learned Please Excuse Dear Miss Sally Andrews.) And now a qualifier: computer languages will often impose doing a calculation in some order. Usually left-to-right. The microchips doing the work need to have some instructions. 
And another qualifier: it is possible to do interesting mathematics with operations that aren't associative. But if you are, it's in your research as a person with a postgraduate degree in mathematics. It's possible it might fit in social media, but I would be surprised. It won't draw great public attention, anyway.

## How to crumple paper

I intend to post something inspired by the comics. I'm not ready just yet. Until then, though, I'd like to share a neat article published in Nature. It's about paper. In particular, it's about how paper crumples. When paper is crumpled, and flattened out again, it looks different. When it's crumpled and flattened out again, it looks even more different. But you reach a point where crumpling and flattening the paper stops making it look all that different. A model for the fragmentation kinetics of crumpled thin sheets, by Jovana Andrejevic, Lisa M Lee, Shmuel M Rubinstein, and Chris H Rycroft, tries to explain the process.

The skeptical reader might say this is obvious. They're invited to write a simulation that takes a set of fold lines and predicts which sides of the paper are angled out and which are angled in. The skeptical reader may also ask who cares about paper. It's paper because many mathematics problems start from the kinds of things one sets one's hands on. Anyone who's seen a crack growing across their sidewalk, though — or across their countertop, or their grandfather's desk — realizes there are things we don't understand about how things break. And why they break that way. And, more generally, there's a lot we don't understand about how complicated "natural" shapes form. The big interest in this is how long molecules crumple up. The shapes of these govern how they behave, and it'd be nice to understand that more.

The New York Times has an article explaining the paper, with more of the story of what the research is and why it's important. That's the one to read if you don't feel comfortable reading formulas about compaction ratios, or skipping over formulas to get to text again.

## Monte Carlo pioneer Arianna Wright Rosenbluth dead at 93

The New York Times carried an obituary for Dr Arianna Wright Rosenbluth. She died in December from Covid-19 and the United States's mass-murderous handling of Covid-19. And she's a person I am not sure I knew anything substantial about. I had known her name, but not anything more. This is a chance to correct that a bit.

Rosenbluth was a PhD in physics (and an Olympics-qualified fencer). Her postdoctoral work was with the Atomic Energy Commission, bringing her to a position at Los Alamos National Laboratory in the early 1950s. And to a moment in computer science that touches very many people's work, my own included. This is in what we call Metropolis-Hastings Markov Chain Monte Carlo.

Monte Carlo methods are numerical techniques that rely on randomness. The name references the casinos. Markov Chain refers to techniques that create a sequence of things. Each thing exists in some set of possibilities. If we're talking about Markov Chain Monte Carlo this is usually an enormous set of possibilities, too many to deal with by hand, except for little tutorial problems.
The trick is that what the next item in the sequence is depends on what the current item is, and nothing more. This may sound implausible — when does anything in the real world not depend on its history? — but the technique works well regardless. Metropolis-Hastings is a way of finding states that meet some condition well. Usually this is a maximum, or minimum, of some interesting property. The Metropolis-Hastings rule has the chance of going to an improved state, one with more of whatever the property we like, be 1, a certainty. The chance of going to a worsened state, with less of the property, be not zero. The worse the new state is, the less likely it is, but it’s never zero. The result is a sequence of states which, most of the time, improve whatever it is you’re looking for. It sometimes tries out some worse fits, in the hopes that this leads us to a better fit, for the same reason sometimes you have to go downhill to reach a larger hill. The technique works quite well at finding approximately-optimum states when it’s hard to find the best state, but it’s easy to judge which of two states is better. Also when you can have a computer do a lot of calculations, because it needs a lot of calculations. So here we come to Rosenbluth. She and her then-husband, according to an interview he gave in 2003, were the primary workers behind the 1953 paper that set out the technique. And, particularly, she wrote the MANIAC computer program which ran the algorithm. It’s important work and an uncounted number of mathematicians, physicists, chemists, biologists, economists, and other planners have followed. She would go on to study statistical mechanics problems, in particular simulations of molecules. It’s still a rich field of study. ## My All 2020 Mathematics A to Z: Velocity I’m happy to be back with long-form pieces. This week’s is another topic suggested by Mr Wu, of the Singapore Maths Tuition blog. # Velocity. This is easy. The velocity is the first derivative of the position. First derivative with respect to time, if you must know. That hardly needed an extra week to write. Yes, there’s more. There is always more. Velocity is important by itself. It’s also important for guiding us into new ideas. There are many. One idea is that it’s often the first good example of vectors. Many things can be vectors, as mathematicians see them. But the ones we think of most often are “some magnitude, in some direction”. The position of things, in space, we describe with vectors. But somehow velocity, the changes of positions, seems more significant. I suspect we often find static things below our interest. I remember as a physics major that my Intro to Mechanics instructor skipped Statics altogether. There are many important things, like bridges and roofs and roller coaster supports, that we find interesting because they don’t move. But the real Intro to Mechanics is stuff in motion. Balls rolling down inclined planes. Pendulums. Blocks on springs. Also planets. (And bridges and roofs and roller coaster supports wouldn’t work if they didn’t move a bit. It’s not much though.) So velocity shows us vectors. Anything could, in principle, be moving in any direction, with any speed. We can imagine a thing in motion inside a room that’s in motion, its net velocity being the sum of two vectors. And they show us derivatives. A compelling answer to “what does differentiation mean?” is “it’s the rate at which something changes”. Properly, we can take the derivative of any quantity with respect to any variable. 
But there are some that make sense to do, and position with respect to time is one. Anyone who's tried to catch a ball understands the interest in knowing. We take derivatives with respect to time so often we have shorthands for it, by putting a ' mark after, or a dot above, the variable. So if x is the position (and it often is), then $x'$ is the velocity. If we want to emphasize we think of vectors, $\vec{x}$ is the position and $\vec{x}'$ the velocity.

Velocity has another common shorthand. This is $v$, or if we want to emphasize its vector nature, $\vec{v}$. Why a name besides the good enough $\vec{x}'$? It helps us avoid misplacing a ' mark in our work, for one. And giving velocity a separate symbol encourages us to think of the velocity as independent from the position. It's not — not exactly — independent. But knowing that a thing is in the lawn outside tells us nothing about how it's moving. Velocity affects position, in a process so familiar we rarely consider how there's parts we don't understand about it. But velocity is also somehow free of the position at an instant. Velocity also guides us into a first understanding of how to take derivatives. Thinking of the change in position over smaller and smaller time intervals gets us to the "instantaneous" velocity by doing only things we can imagine doing with a ruler and a stopwatch.

Velocity has a velocity. $\vec{v}'$, also known as $\vec{a}$. Or, if we're sure we won't lose a ' mark, $\vec{x}''$. Once we are comfortable thinking of how position changes in time we can think of other changes. Velocity's change in time we call acceleration. This is also a vector, more abstract than position or velocity. Multiply the acceleration by the mass of the thing accelerating and we have a vector called the "force". That, we at least feel we understand, and can work with. Acceleration has a velocity too, a rate of change in time. It's called the "jerk" by people telling you the change in acceleration in time is called the "jerk". (I don't see the term used in the wild, but admit my experience is limited.) And so on. We could, in principle, keep taking derivatives of the position and keep finding new changes. But most physics problems we find interesting use just a couple of derivatives of the position. We can label them, if we need, $\vec{x}^{(n)}$, where n is some big enough number like 4.

We can bundle them in interesting ways, though. Come back to that mention of treating position and velocity of something as though they were independent coordinates. It's a useful perspective. Imagine the rules about how particles interact with one another and with their environment. These usually have explicit roles for position and velocity. (Granting this may reflect a selection bias. But these do cover enough interesting problems to fill a career.) So we create a new vector. It's made of the position and the velocity. We'd write it out as $(x, v)^T$. The superscript-T there, "transposition", lets us use the tools of matrix algebra. This vector describes a point in phase space. Phase space is the collection of all the physically possible positions and velocities for the system. What's the derivative, in time, of this point in phase space? Glad to say we can do this piece by piece. The derivative of a vector is the derivative of each component of a vector. So the derivative of $(x, v)^T$ is $(x', v')^T$, or, $(v, a)^T$. This acceleration itself depends on, normally, the positions and velocities.
So we can describe this as $(v, f(x, v))^T$ for some function $f(x, v)$. You are surely impressed with this symbol-shuffling. You are less sure why we bother. The bother is a trick of ordinary differential equations. All differential equations are about how a function-to-be-determined and its derivatives relate to one another. In ordinary differential equations, the function-to-be-determined depends on a single variable. Usually it’s called x or t. There may be many derivatives of f. This symbol-shuffling rewriting takes away those higher-order derivatives. We rewrite the equation as a vector equation of just one order. There’s some point in phase space, and we know what its velocity is. This is worth doing because in this form many problems can be written as a matrix problem: $\vec{x}' = A\vec{x}$. Or approximate our problem as a matrix problem. This lets us bring in linear algebra tools, and that’s worthwhile. It also lets us bring in numerical tools. Numerical mathematics has developed many methods to solve the ordinary differential equation $x' = f(x)$. Most of them extend to $\vec{x}' = f(\vec{x})$. The result is a classic mathematician’s trick. We can recast a problem as one we have better tools to solve.

It calls on a more abstract idea of what a “velocity” might be. We can explain what the thing that’s “moving” and what it’s moving through are, given time. But the instincts we develop from watching ordinary things move help us in these new territories. This is also a classic mathematician’s trick. It may seem like all mathematicians do is develop tricks to extend what they already do. I can’t say this is wrong.

Thank you all for reading and for putting up with my gap week. This and all of my 2020 A-to-Z essays should be at this link. All the essays from every A-to-Z series should be at this link.

## My All 2020 Mathematics A to Z: Big-O and Little-O Notation

Mr Wu, author of the Singapore Maths Tuition blog, asked me to explain a technical term today. I thought that would be a fun, quick essay. I don’t learn very fast, do I?

A note on style. I make reference here to “Big-O” and “Little-O”, capitalizing and hyphenating them. This is to give them visual presence as a name. In casual discussion they’re just read, or said, as the two words or word-and-a-letter. Often the Big- or Little- gets dropped and we just talk about O. An O, without further context, in my experience means Big-O. The part of me that wants smooth consistency in prose urges me to write “Little-o”, as the thing described is represented with a lowercase ‘o’. But Little-o sounds like a midway game or an Eyerly Aircraft Company amusement park ride. And I never achieve consistency in my prose anyway. Maybe for the book publication. Until I’m convinced another is better, though, “Little-O” it is.

# Big-O and Little-O Notation.

When I first went to college I had a campus post office box. I knew my box number. I also knew the length of the sluggish line to get the combination lock code. The lock was a dial, lettered A through J. Being a young STEM-class idiot I thought, boy, would it actually be quicker to pick the lock than wait for the line? A three-letter combination, of ten options? That’s 1,000 possibilities. If I could try five a minute that’s, at worst, three hours 20 minutes. Combination might be anywhere in that set; I might get lucky. I could expect to spend about an hour and 40 minutes picking my lock. I decided to wait in line instead, and good that I did. I was unaware lock settings might not be a letter, like ‘A’.
It could be the midway point between adjacent letters, like ‘AB’. That meant there were eight times as many combinations as I estimated, and I could expect to spend over ten hours. Even the slow line was faster than that. It transpired that my combination had two of these midway letters. But that’s a little demonstration of algorithmic complexity. Also in cracking passwords by trial-and-error. Doubling the set of possible combination codes octuples the time it takes to break into the set. Making the combination longer would also work; each extra letter would multiply the cracking time by twenty. So you understand why your password should include “special characters” like punctuation, but most of all should be long. We’re often interested in how long to expect a task to take. Sometimes we’re interested in the typical time it takes. Often we’re interested in the longest it could ever take. If we have a deterministic algorithm, we can say. We can count how many steps it takes. Sometimes this is easy. If we want to add two two-digit numbers together we know: it will be, at most, three single-digit additions plus, maybe, writing down a carry. (To add 98 and 37 is adding 8 + 7 to get 15, to add 9 + 3 to get 12, and to take the carry from the 15, so, 1 + 12 to get 13, so we have 135.) We can get a good quarrel going about what “a single step” is. We can argue whether that carry into the hundreds column is really one more addition. But we can agree that there is some smallest bit of arithmetic work, and proceed from that. For any algorithm we have something that describes how big a thing we’re working on. It’s often ‘n’. If we need more than one variable to describe how big it is, ‘m’ gets called up next. If we’re estimating how long it takes to work on a number, ‘n’ is the number of digits in the number. If we’re thinking about a square matrix, ‘n’ is the number of rows and columns. If it’s a not-square matrix, then ‘n’ is the number of rows and ‘m’ the number of columns. Or vice-versa; it’s your matrix. If we’re looking for an item in a list, ‘n’ is the number of items in the list. If we’re looking to evaluate a polynomial, ‘n’ is the order of the polynomial. In normal circumstances we don’t work out how many steps some operation does take. It’s more useful to know that multiplying these two long numbers would take about 900 steps than that it would need only 816. And so this gives us an asymptotic estimate. We get an estimate of how much longer cracking the combination lock will take if there’s more letters to pick from. This allowing that some poor soul will get the combination A-B-C. There are a couple ways to describe how long this will take. The more common is the Big-O. This is just the letter, like you find between N and P. Since that’s easy, many have taken to using a fancy, vaguely cursive O, one that looks like $\mathcal{O}$. I agree it looks nice. Particularly, though, we write $\mathcal{O}(f(n))$, where f is some function. In practice, we’ll see functions like $\mathcal{O}(n)$ or $\mathcal{O}(n^2 \log(n))$ or $\mathcal{O}(n^3)$. Usually something simple like that. It can be tricky. There’s a scheme for multiplying large numbers together that’s $\mathcal{O}(n \cdot 2^{\sqrt{2 log (n)}} \cdot log(n))$. What you will not see is something like $\mathcal{O}(\sin (n))$, or $\mathcal{O}(n^3 - n^4)$ or such. This comes to what we mean by the Big-O. It’ll be convenient for me to have a name for the actual number of steps the algorithm takes. Let me call the function describing that g(n). 
Then g(n) is $\mathcal{O}(f(n))$ if once n gets big enough, g(n) is always less than C times f(n). Here c is some constant number. Could be 1. Could be 1,000,000. Could be 0.00001. Doesn’t matter; it’s some positive number. There’s some neat tricks to play here. For example, the function ‘$n$‘ is $\mathcal{O}(n)$. It’s also $\mathcal{O}(n^2)$ and $\mathcal{O}(n^9)$ and $\mathcal{O}(e^{n})$. The function ‘$n^2$‘ is also $\mathcal{O}(n^2)$ and those later terms, but it is not $\mathcal{O}(n)$. And you can see why $\mathcal{O}(\sin(n))$ is right out. There is also a Little-O notation. It, too, is an upper bound on the function. But it is a stricter bound, setting tighter restrictions on what g(n) is like. You ask how it is the stricter bound gets the minuscule letter. That is a fine question. I think it’s a quirk of history. Both symbols come to us through number theory. Big-O was developed first, published in 1894 by Paul Bachmann. Little-O was published in 1909 by Edmund Landau. Yes, the one with the short Hilbert-like list of number theory problems. In 1914 G H Hardy and John Edensor Littlewood would work on another measure and they used Ω to express it. (If you see the letter used for Big-O and Little-O as the Greek omicron, then you see why a related concept got called omega.) What makes the Little-O measure different is its sternness. g(n) is $o(f(n))$ if, for every positive number C, whenever n is large enough g(n) is less than or equal to C times f(n). I know that sounds almost the same. Here’s why it’s not. If g(n) is $\mathcal{O}(f(n))$, then you can go ahead and pick a C and find that, eventually, $g(n) \le C f(n)$. If g(n) is $o(f(n))$, then I, trying to sabotage you, can go ahead and pick a C, trying my best to spoil your bounds. But I will fail. Even if I pick, like a C of one millionth of a billionth of a trillionth, eventually f(n) will be so big that $g(n) \le C f(n)$. I can’t find a C small enough that f(n) doesn’t eventually outgrow it, and outgrow g(n). This implies some odd-looking stuff. Like, that the function n is not $o(n)$. But the function n is at least $o(n^2)$, and $o(n^9)$ and those other fun variations. Being Little-O compels you to be Big-O. Big-O is not compelled to be Little-O, although it can happen. These definitions, for Big-O and Little-O, I’ve laid out from algorithmic complexity. It’s implicitly about functions defined on the counting numbers. But there’s no reason I have to limit the ideas to that. I could define similar ideas for a function g(x), with domain the real numbers, and come up with an idea of being on the order of f(x). We make some adjustments to this. The important one is that, with algorithmic complexity, we assumed g(n) had to be a positive number. What would it even mean for something to take minus four steps to complete? But a regular old function might be zero or negative or change between negative and positive. So we look at the absolute value of g(x). Is there some value of C so that, when x is big enough, the absolute value of g(x) stays less than C times f(x)? If it does, then g(x) is $\mathcal{O}(f(x))$. Is it the case that for every positive number C it’s true that g(x) is less than C times f(x), once x is big enough? Then g(x) is $o(f(x))$. Fine, but why bother defining this? A compelling answer is that it gives us a way to describe how different a function is from an approximation to that function. We are always looking for approximations to functions because most functions are hard. 
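For a quick numerical feel for those two definitions, here is a sketch in Python with a g of my own choosing. The ratio $|g(x)|/x^2$ stays bounded as x grows, so g is $\mathcal{O}(x^2)$; the ratio $|g(x)|/x^3$ heads to zero, which is the Little-O behavior.

```python
import math

def g(x):
    # An example function of my own: roughly 3x^2 plus a wobble.
    return 3 * x**2 + x * math.sin(x)

for x in (10.0, 100.0, 1000.0, 10000.0):
    ratio_sq = abs(g(x)) / x**2   # stays near 3, so g is O(x^2) but not o(x^2)
    ratio_cu = abs(g(x)) / x**3   # shrinks toward 0, so g is o(x^3)
    print(f"x = {x:>8.0f}   g/x^2 = {ratio_sq:.4f}   g/x^3 = {ratio_cu:.6f}")
```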
We have a small set of functions we like to work with. Polynomials are great numerically. Exponentials and trig functions are great analytically. That’s about all the functions that are easy to work with. Big-O notation particularly lets us estimate how bad an error we make using the approximation.

For example, the Runge-Kutta method numerically approximates solutions to ordinary differential equations. It does this by taking the information we have about the function at some point x to approximate its value at a point x + h. ‘h’ is some number. The difference between the actual answer and the Runge-Kutta approximation is $\mathcal{O}(h^4)$. We use this knowledge to make sure our error is tolerable. Also, we don’t usually care what the function is at x + h. It’s just what we can calculate. What we want is the function at some point a fair bit away from x, call it x + L. So we use our approximate knowledge of conditions at x + h to approximate the function at x + 2h. And use x + 2h to tell us about x + 3h, and from that x + 4h and so on, until we get to x + L. We’d like to have as few of these uninteresting intermediate points as we can, so look for as big an h as is safe.

That context may be the more common one. We see it, particularly, in Taylor Series and other polynomial approximations. For example, the sine of a number is approximately:

$\sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \frac{x^9}{9!} + \mathcal{O}(x^{11})$

This has consequences. It tells us, for example, that if x is about 0.1, this approximation is probably pretty good. So it is: the sine of 0.1 (radians) is about 0.0998334166468282 and that’s exactly what five terms here gives us. But it also warns that if x is about 10, this approximation may be gibberish. And so it is: the sine of 10.0 is about -0.5440 and the polynomial is about 1448.27.

The connotation in using Big-O notation here is that we look for small h’s, and for the $\mathcal{O}$ term to be a tiny number. It seems odd to use the same notation with a large independent variable and with a small one. The concept carries over, though, and helps us talk efficiently about this different problem.

I hope this week to post the Playful Math Education Blog Carnival for September. Any educational or recreational or fun mathematics sites you know about would be greatly helpful to me and them. Thanks for your help.

Lastly, I am open for mathematics topics starting with P, Q, and R to write about next month. I’ve basically chosen my ‘P’ subject, though I’d be happy to hear alternatives for ‘Q’ and ‘R’ yet.

## My All 2020 Mathematics A to Z: Gottfried Wilhelm Leibniz

Today’s topic suggestion was suggested by bunnydoe. I know of a project bunnydoe runs, but not whether it should be publicized. It is another biographical piece. Biographies and complex numbers, that seems to be the theme of this year.

# Gottfried Wilhelm Leibniz.

The exact suggestion I got for L was “Leibniz, the inventor of Calculus”. I can’t in good conscience offer that. This isn’t to deny Leibniz’s critical role in calculus. We rely on many of the ideas he’d had for it. We especially use his notation. But there are few great big ideas that can be truly credited to an inventor, or even a team of inventors. Put aside the sorry and embarrassing priority dispute with Isaac Newton. Many mathematicians in the 16th and 17th century were working on how to improve the Archimedean “method of exhaustion”. This would find the areas inside select curves, integral calculus.
Johannes Kepler worked out the areas of ellipse slices, albeit with considerable luck. Gilles Roberval tried working out the area inside a curve as the area of infinitely many narrow rectangular strips. We still learn integration from this. Pierre de Fermat recognized how tangents to a curve could find maximums and minimums of functions. This is a critical piece of differential calculus. Isaac Barrow, Evangelista Torricelli (of barometer fame), Pietro Mengoli, and Stephano Angeli all pushed mathematics towards calculus. James Gregory proved, in geometric form, the relationship between differentiation and integration. That relationship is the Fundamental Theorem of Calculus. This is not to denigrate Leibniz. We don’t dismiss the Wright Brothers though we know that without them, Alberto Santos-Dumont or Glenn Curtiss or Samuel Langley would have built a workable airplane anyway. We have Leibniz’s note, dated the 29th of October, 1675 (says Florian Cajori), writing out $\int l$ to mean the sum of all l’s. By mid-November he was integrating functions, and writing out his work as $\int f(x) dx$. Any mathematics or physics or chemistry or engineering major today would recognize that. A year later he was writing things like $d(x^n) = n x^{n - 1} dx$, which we’d also understand if not quite care to put that way. Though we use his notation and his basic tools we don’t exactly use Leibniz’s particular ideas of what calculus means. It’s been over three centuries since he published. It would be remarkable if he had gotten the concepts exactly and in the best of all possible forms. Much of Leibniz’s calculus builds on the idea of a differential. This is a quantity that’s smaller than any positive number but also larger than zero. How does that make sense? George Berkeley argued it made not a lick of sense. Mathematicians frowned, but conceded Berkeley was right. By the mid-19th century they had a rationale for differentials that avoided this weird sort of number. It’s hard to avoid the differential’s lure. The intuitive appeal of “imagine moving this thing a tiny bit” is always there. In science or engineering applications it’s almost mandatory. Few things we encounter in the real world have the kinds of discontinuity that create logic problems for differentials. Even in pure mathematics, we will look at a differential equation like $\frac{dy}{dx} = x$ and rewrite it as $dy = x dx$. Leibniz’s notation gives us the idea that taking derivatives is some kind of fraction. It isn’t, but in many problems we act as though it were. It works out often enough we forget that it might not. Better, though. From the 1960s Abraham Robinson and others worked out a different idea of what real numbers are. In that, differentials have a rigorous logical definition. We call the mathematics which uses this “non-standard analysis”. The name tells something of its use. This is not to call it wrong. It’s merely not what we learn first, or necessarily at all. And it is Leibniz’s differentials. 304 years after his death there is still a lot of mathematics he could plausibly recognize. There is still a lot of still-vital mathematics that he touched directly. Leibniz appears to be the first person to use the term “function”, for example, to describe that thing we’re plotting with a curve. He worked on systems of linear equations, and methods to find solutions if they exist. This technique is now called Gaussian elimination. We see the bundling of the equations’ coefficients he did as building a matrix and finding its determinant. 
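Here, as a small modern rendering of that bundling-into-a-determinant idea, is a sketch in Python. The two-equation system and its numbers are mine, chosen only for illustration.

```python
from fractions import Fraction

# Solve  a1*x + b1*y = c1  and  a2*x + b2*y = c2  using determinants alone.
a1, b1, c1 = Fraction(2), Fraction(3), Fraction(8)
a2, b2, c2 = Fraction(1), Fraction(-1), Fraction(-1)

det = a1 * b2 - a2 * b1            # determinant of the coefficient matrix
if det == 0:
    raise ValueError("No unique solution: the determinant is zero.")

x = (c1 * b2 - c2 * b1) / det      # constants swapped into the x column
y = (a1 * c2 - a2 * c1) / det      # constants swapped into the y column
print(x, y)                        # expect x = 1, y = 2
```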
We know that technique, today, as Cramer’s Rule, after Gabriel Cramer. The Japanese mathematician Seki Takakazu had discovered determinants before Leibniz, though. Leibniz tried to study a thing he called “analysis situs”, which two centuries on would be a name for topology. My reading tells me you can get a good fight going among mathematics historians by asking whether he was a pioneer in topology. So I’ll decline to take a side in that. In the 1680s he tried to create an algebra of thought, to turn reasoning into something like arithmetic. His goal was good: we see these ideas today as Boolean algebra, and concepts like conjunction and disjunction and negation and the empty set. Anyone studying logic knows these today. He’d also worked in something we can see as symbolic logic. Unfortunately for his reputation, the papers he wrote about that went unpublished until late in the 19th century. By then other mathematicians, like Gottlob Frege and Charles Sanders Peirce, had independently published the same ideas. We give Leibniz’ name to a particular series that tells us the value of π: $1 - \frac13 + \frac15 - \frac17 + \frac19 - \frac{1}{11} + \cdots = \frac{\pi}{4}$ (The Indian mathematician Madhava of Sangamagrama knew the formula this comes from by the 14th century. I don’t know whether Western Europe had gotten the news by the 17th century. I suspect it hadn’t.) The drawback to using this to figure out digits of π is that it takes forever to use. Taking ten decimal digits of π demands evaluating about five billion terms. That’s not hyperbole; it just takes like forever to get its work done. Which is something of a theme in Leibniz’s biography. He had a great many projects. Some of them even reached a conclusion. Many did not, and instead sprawled out with great ambition and sometimes insight before getting lost. Consider a practical one: he believed that the use of wind-driven propellers and water pumps could drain flooded mines. (Mines are always flooding.) In principle, he was right. But they all failed. Leibniz blamed deliberate obstruction by administrators and technicians. He even blamed workers afraid that new technologies would replace their jobs. Yet even in this failure he observed and had bracing new thoughts. The geology he learned in the mines project made him hypothesize that the Earth had been molten. I do not know the history of geology well enough to say whether this was significant to that field. It may have been another frustrating moment of insight (lucky or otherwise) ahead of its time but not connected to the mainstream of thought. Another project, tantalizing yet incomplete: the “stepped reckoner”, a mechanical arithmetic machine. The design was to do addition and subtraction, multiplication and division. It’s a breathtaking idea. It earned him election into the (British) Royal Society in 1673. But it never was quite complete, never getting carries to work fully automatically. He never did finish it, and lost friends with the Royal Society when he moved on to other projects. He had a note describing a machine that could do some algebraic operations. In the 1690s he had some designs for a machine that might, in theory, integrate differential equations. It’s a fantastic idea. At some point he also devised a cipher machine. I do not know if this is one that was ever used in its time. His greatest and longest-lasting unfinished project was for his employer, the House of Brunswick. 
Three successive Brunswick rulers were content to let Leibniz work on his many side projects. The one that Ernest Augustus wanted was a history of the Guelf family, in the House of Brunswick. One that went back to the time of Charlemagne or earlier if possible. The goal was to burnish the reputation of the house, which had just become a hereditary Elector of the Holy Roman Empire. (That is, they had just gotten to a new level of fun political intriguing. But they were at the bottom of that level.) Starting from 1687 Leibniz did good diligent work. He travelled throughout central Europe to find archival materials. He studied their context and meaning and relevance. He organized it. What he did not do, by his death in 1716, was write the thing.

It is always difficult to understand another person. Moreso someone you know only through biography. And especially someone who lived in very different times. But I do see a particular and modern personality type here. We all know someone who will work so very hard getting prepared to do a project Right that it never gets done. You might be reading the words of one right now.

Leibniz was a compulsive Society-organizer. He promoted ones in Brandenburg and Berlin and Dresden and Vienna and Saint Petersburg. None succeeded. It’s not obvious why. Leibniz was well-connected enough; he’s known to have over six hundred correspondents. Even for a time of great letter-writing, that’s a lot. But it does seem like something about him offended others. Failing to complete big projects, like the stepped reckoner or the History of the Guelf family, seems like some of that. Anyone who knows of calculus knows of the Newton-versus-Leibniz priority dispute. Grant that Leibniz seems not to have much fueled the quarrel. (And that modern historians agree Leibniz did not steal calculus from Newton.) Just being at the center of Drama causes people to rate you poorly. It seems like there’s more, though. He was liked, for example, by the Electress Sophia of Hanover and her daughter Sophia Charlotte. These were the mother and the sister of Britain’s King George I. When George I ascended to the British throne he forbade Leibniz coming to London until at least one volume of the history was written. (The restriction seems fair, considering Leibniz was 27 years into the project by then.)

There are pieces in his biography that suggest a person a bit too clever for his own good. His first salaried position, for example, was as secretary to a Nuremberg alchemical society. He did not know alchemy. He passed himself off as deeply learned, though. I don’t blame him. Nobody would ever pass a job interview if they didn’t pretend to have expertise. Here it seems to have worked.

But consider, for example, his peace mission to Paris. Leibniz was born in the last years of the Thirty Years War. In that, the Great Powers of Europe battled each other in the German states. They destroyed Germany with a thoroughness not matched until World War II. Leibniz reasonably feared France’s King Louis XIV had designs on what was left of Germany. So his plan was to sell the French government on attacking Egypt and, from there, the Dutch East Indies. This falls short of an early-Enlightenment idea of rational world peace and a congress of nations. But anyone who plays grand strategy games recognizes the “let’s you and him fight” scheming. (The plan became irrelevant when France went to war with the Netherlands.
The war did rope Brandenburg-Prussia, Cologne, Münster, and the Holy Roman Empire into the mess.) And I have not discussed Leibniz’s work in philosophy, outside his logic. He’s respected for the theory of monads, part of the long history of trying to explain how things can have qualities. Like many he tried to find a deductive-logic argument about whether God must exist. And he proposed the notion that the world that exists is the most nearly perfect that can possibly be. Everyone has been dragging him for that ever since he said it, and they don’t look ready to stop. It’s an unfair rap, even if it makes for funny spoofs of his writing. The optimal world may need to be badly defective in some ways. And this recognition inspires a question in me. Obviously Leibniz could come to this realization from thinking carefully about the world. But anyone working on optimization problems knows the more constraints you must satisfy, the less optimal your best-fit can be. Some things you might like may end up being lousy, because the overall maximum is more important. I have not seen anything to suggest Leibniz studied the mathematics of optimization theory. Is it possible he was working in things we now recognize as such, though? That he has notes in the things we would call Lagrange multipliers or such? I don’t know, and would like to know if anyone does.

Leibniz’s funeral was unattended by any dignitary or courtier besides his personal secretary. The Royal Academy and the Berlin Academy of Sciences did not honor their member’s death. His grave was unmarked for a half-century. And yet historians of mathematics, philosophy, physics, engineering, psychology, social science, philology, and more keep finding his work, and finding it more advanced than one would expect. Leibniz’s legacy seems to be one always rising and emerging from shade, but never being quite where it should.

And that’s enough for one day. All of the 2020 A-to-Z essays should be at this link. Both 2020 and all past A-to-Z essays should be at this link. And, as I am hosting the Playful Math Education Blog Carnival at the end of September, I am looking for any blogs, videos, books, anything educational or recreational or just interesting to read about. Thank you for your reading and your help.

## Meanwhile, in sandwich news

This is a slight thing that crossed my reading yesterday. You might enjoy. The question is a silly one: what’s the “optimal” way to slice banana onto a peanut-butter-and-banana sandwich? Here’s Ethan Rosenthal’s answer. The specific problem this is put to is silly. The optimal peanut butter and banana sandwich is the one that satisfies your desire for a peanut butter and banana sandwich. However, the approach to the problem demonstrates good mathematics, and numerical mathematics, practices. Particularly it demonstrates defining just what your problem is, and what you mean by “optimal”, and how you can test that. And then developing a numerical model which can optimize it.

And the specific question, how much of the sandwich can you cover with banana slices, is one of actual interest. A good number of ideas in analysis involve thinking of cover sets: what is the smallest collection of these things which will completely cover this other thing? Concepts like this give us an idea of how to define area, also, as the smallest number of standard reference shapes which will cover the thing we’re interested in.
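Here is a small sketch of that covering idea in Python: count how many little grid squares of side h it takes to cover the unit disk, and the covered area, the count times h², creeps down toward π as the squares shrink. The setup is my own illustration, not anything from the sandwich essay.

```python
import math

def nearest_coord(lo, hi):
    """Coordinate in the interval [lo, hi] nearest to zero."""
    if lo > 0.0:
        return lo
    if hi < 0.0:
        return hi
    return 0.0

def covering_area(h):
    """Total area of the grid squares of side h that touch the unit disk."""
    n = int(math.ceil(1.0 / h)) + 1        # enough cells to reach past the disk
    count = 0
    for i in range(-n, n):
        for j in range(-n, n):
            nx = nearest_coord(i * h, (i + 1) * h)
            ny = nearest_coord(j * h, (j + 1) * h)
            if nx * nx + ny * ny <= 1.0:   # the square touches the disk; keep it
                count += 1
    return count * h * h

for h in (0.5, 0.1, 0.02):
    print(h, covering_area(h), math.pi)    # the cover's area sinks toward pi
```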
The basic problem is practical too: if we wish to provide something, and have units like this which can cover some area, how can we arrange them so as to miss as little as possible? Or use as few of the units as possible?

## How To Multiply Numbers By Multiplying Other Numbers Instead

I do read other people’s mathematics writing, even if I don’t do it enough. A couple days ago RJ Lipton and KW Regan’s Reductions And Jokes discussed how one can take a problem and rewrite it as a different problem. This is one of the standard mathematician’s tricks. The point to doing this is that you might have a better handle on the new problem. “Better” is an aesthetic judgement. It reflects whether the new problem is easier to work with.

Along the way, they offer an example that surprised and delighted me, and that I wanted to share. It’s about multiplying whole numbers. Multiplication can take a fair while, as anyone who’s tried to do 38 times 23 by hand has found out. But we can speed that up. A multiplication table is a special case of a lookup table, a chunk of stored memory which has computed ahead of time all the multiplications someone is likely to do. Then instead of doing them, you just look them up. The catch is that a multiplication table takes memory. To do all the multiplications for whole numbers 1 through 10 you need … well, not 100 memory cells. But 55. To have 1 through 20 worked out ahead of time you need 210 memory cells. Can we do better? If addition and subtraction are easy enough to do? And if dividing by two is easy enough? Then, yes. Instead of working out every pair multiplication, work out the squares of the whole numbers. And then make use of this identity:

$a \times b = \frac{1}{2}\left( \left(a + b\right)^2 - a^2 - b^2\right)$

And that delights me. It’s one of those relationships that’s sitting there, waiting for anyone who’s ever squared a binomial to notice. I don’t know that anyone actually uses this. But it’s fun to see multiplication worked out by a different yet practical way.

## Bob Newhart interviews Herman Hollerith

Yesterday was the birthday of Herman Hollerith. His 40th since his birth in 1860: he was born on the 29th of February, so his birthdays only come around in leap years. He’s renowned in computing circles. His work in automating the counting and tabulating of data made the United States’s 1890 Census possible. This is not the ordinary hyperbole: the 1880 Census’s data took eight years to fully collate. Hollerith’s tabulating machines took … well, six years for the full job, but they were keeping track of quite a bit of information. Hollerith’s system would go on to be used for other censuses, and also for general inventory and data-tracking purposes. His tabulating company would go on to be one of the original components of IBM. Cards, card readers, and card sorters with a clear lineage to this system would be used until fully electronic computers took over.

(It’s commonly assumed that the traditional 80-character width of a text terminal traces to the 80-column punch cards which became the standard. Programmers particularly love to tell that tale, ignoring early computing screens that had different lengths, particularly 72 characters. More plausibly 80 characters owes to two things: it’s a nice round number, and it’s close to the number of characters you can type on a standard sheet of paper with a normal typewriter font. So it’s about the “right” length, one that we’ve been trained to accept as enough text to read at a glance.)

Well. In about 1970 IBM hired Bob Newhart to record a bit, for … fun, if that word applies to IBM.
Part of the publicity for launching the famous System 370 machine. The structure echoes the bit where Bob Newhart imagines being the first guy to hear of Sir Walter Raleigh’s importing of tobacco, and just how weird every bit of that is. In this bit, Newhart imagines talking on the phone with Herman Hollerith and hearing about just how this punched-card system is supposed to work. For decades, though, the film was reported lost. What I did not know until mentioning to a friend two days ago is: the film was found! And a decade ago! In a Swedish bank vault because that’s the way this sort of thing always happens. Which is a neat bit of historical rhyming: the original fine data from the first Hollerith census of 1890 is lost, most likely destroyed in 1933 or 1934.

So, please let me share with you Bob Newhart hearing about Herman Hollerith’s system. The end appears to be cut off, and there are Swedish subtitles that might just give away a couple jokes, if you can’t help paying attention to them. Like a lot of comic work-for-hire it’s not Newhart’s best. It’s not going to displace the Voyage of the USS Codfish in my heart. There are a few spots to me where it seems like Newhart’s overlooked a good additional punch line, and I don’t know whether that reflects Newhart wanting to keep the piece from growing too long or too technical or what. It’s possible Newhart didn’t feel familiar enough with punch card technology to get too technical, either. Newhart did work, briefly, as an accountant and might have had some reason to use the things. But I’m not aware of his telling any stories of doing so, and that seems a telling omission. Still, it’s great to see this bit has been preserved, and is available. And is a Bob Newhart routine about early computer technologies, somehow.

## My 2019 Mathematics A To Z: Versine

Jacob Siehler suggested the term for today’s A to Z essay. The letter V turned up a great crop of subjects: velocity, suggested by Dina Yagodich, and variable, from goldenoj, were also great suggestions. But Siehler offered something almost designed to appeal to me: an obscure function that shone in the days before electronic computers could do work for us. There was no chance of my resisting.

# Versine.

A story about the comeuppance of a know-it-all who was not me. It was in mathematics class in high school. The teacher was explaining logic, and showing off diagrams. These would compute propositions by connecting symbols. These symbols meant logical AND and OR and NOT and so on. One of the students pointed out, you know, the only symbol you actually need is NAND. The teacher nodded; this was so. By the clever arrangement of enough NAND operations you could get the result of all the standard logic operations. He said he’d wait while the know-it-all tried it for any realistic problem. If we are able to do NAND we can construct an XOR. But we will understand what we are trying to do more clearly if we have an XOR in the kit.

So the versine. It’s a (spherical) trigonometric function. The versine of an angle is the same value as 1 minus the cosine of the angle. This seems like a confused name; shouldn’t something called “versine” have, you know, a sine in its rule? Sure, and if you don’t like that 1 minus the cosine thing, you can instead use this. The versine of an angle is two times the square of the sine of half the angle. There is a vercosine, so you don’t need to worry about that. The vercosine is two times the square of the cosine of half the angle.
That’s also equal to 1 plus the cosine of the angle.

This is all fine, but what’s the point? We can see why it might be easier to say “versine of θ” than to say “$2 \sin^2(\frac{1}{2}\theta)$”. But how is “versine of θ” easier than “one minus cosine of θ”? The strongest answer, at the risk of sounding old, is to ask back, you know we haven’t always done things the way we do them now, right?

We have, these days, settled on an idea of what the important trigonometric functions are. Start with Cartesian coordinates on some flat space. Draw a circle of radius 1 and with center at the origin. Draw a horizontal line starting at the origin and going off in the positive-x-direction. Draw another line from the center and making an angle with respect to the horizontal line. That line intersects the circle somewhere. The x-coordinate of that point is the cosine of the angle. The y-coordinate of that point is the sine of the angle. What could be more sensible?

That depends what you think sensible. We’re so used to drawing circles and making lines inside that we forget we can do other things. Here’s one. Start with a circle. Again with radius 1. Now chop an arc out of it. Pick up that arc and drop it down on the ground. How far does this arc reach, left to right? How high does it reach, top to bottom? Well, the arc you chopped out has some length. Let me call that length 2θ. That two makes it easier to put this in terms of familiar trig functions. How much space does this chopped and dropped arc cover, horizontally? That’s twice the sine of θ. How tall is this chopped and dropped arc? That’s the versine of θ.

We are accustomed to thinking of the relationships between pieces of a circle like this in terms of angles inside the circle. Or of the relationships of the legs of triangles. It seems obviously useful. We even know many formulas relating sines and cosines and other major functions to each other. But it’s no less valid to look at arcs plucked out of a circle and the length of that arc and its width and its height. This might be more convenient, especially if we are often thinking about the outsides of circular things. Indeed, the oldest tables we in the Western tradition have of trigonometric functions list sines and versines. Cosines would come later.

This partly answers why there should have ever been a versine. But we’ve had the cosine since Arabian mathematicians started thinking seriously about triangles. Why did we still need the versine for the last 1200 years? And why don’t we need it anymore?

One answer here is in that mention of the oldest tables of trigonometric functions. Or of less-old tables. Until recently, as things go, anyone who wanted to do much computing needed tables of common functions at many different values. These tables might not have the sine we really need of, say, 1.17 degrees. But if the table had 1.1 and 1.2 we could get pretty close. So a table of versines could make computation easier. You can, for example, use it to find square roots of numbers. (This essay actually, implicitly, uses vercosines. But it’ll give you the hint how to find them using versines.) Which is great if we have a table of versines but not, somehow, exponentials and logarithms. Well, if we could only take one chart in and we know trigonometry is needed, we might focus on that. But trigonometry will be needed. One of the great fields of practical mathematics has long been navigation. We locate points on the globe using latitude and longitude. To find the distance between points is a messy calculation.
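Getting slightly ahead of the text, here is a sketch in Python of the modern form of that calculation, the haversine formula for great-circle distance. The haversine is half the versine, $\mathrm{hav}(\theta) = \sin^2(\theta/2)$; the spherical-Earth radius and the example coordinates are assumptions of mine.

```python
import math

EARTH_RADIUS_KM = 6371.0          # mean radius of a spherical Earth, an approximation

def hav(theta):
    """Haversine: half the versine, sin^2(theta / 2)."""
    return math.sin(theta / 2.0) ** 2

def great_circle_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points given in degrees of latitude and longitude."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    h = hav(phi2 - phi1) + math.cos(phi1) * math.cos(phi2) * hav(math.radians(lon2 - lon1))
    return 2.0 * EARTH_RADIUS_KM * math.asin(math.sqrt(h))

# Example points of my own choosing: roughly New York to London.
print(great_circle_km(40.71, -74.01, 51.51, -0.13))   # about 5,570 km
```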
The calculation becomes less longwinded, and clearer, written using versines. (Properly, it uses the haversine, which is one-half times the versine. It will not surprise you that a 19th-century English mathematician coined that name.) Having a neat formula is pleasant, but, you know. It’s navigators and surveyors using these formulas. They can deal with a lengthy formula. The typesetters publishing their books are already getting hazard pay. Why change a bunch of $\left(1 - \cos \left(\theta\right)\right)$ references to $\mathrm{hav}\left(\theta\right)$ instead?

We get a difference when it comes time to calculate. Like, pencil on paper. The cosine (sine, versine, haversine, whatever) of 1.17 degrees is a transcendental number. We do not have the paper to write that number out. We’ll write down instead enough digits until we get tired. 0.99979, say. Maybe 0.9998. To take 1 minus that number? That’s 0.00021. Maybe 0.0002. What’s the difference? It’s in the precision. 1.17 degrees is a measure that has three significant digits. 0.00021? That’s only two digits. 0.0002? That’s got only one digit. We’ve lost precision, and without even noticing it. Whatever calculations we’re making on this have grown error margins. Maybe we’re doing calculations for which this won’t matter. Do we know that, though?

This reflects what we call numerical instability. You may have made only a slight error. But your calculation might magnify that error until it overwhelms your answer. There isn’t any one fix for numerical instability. But there are some good general practices. Like, don’t divide a large number by a small one. Don’t add a tiny number to a large one. And don’t subtract two very-nearly-equal numbers. Calculating 1 minus the cosine of a small angle is subtracting a number that’s quite close to 1 from a number that is 1. You’re not guaranteed danger, but you are at greater risk.

A table of versines, rather than one of cosines, can compensate for this. If you’re making a table of versines it’s because you know people need the versine of 1.17 degrees with some precision. You can list it as $2.08488 \times 10^{-4}$, and trust them to use as much precision as they need. For the cosine table, 0.999792 will give cosine-users the same number of significant digits. But it shortchanges versine-users.

And here we see a reason for the versine to go from minor but useful function to obscure function. Any modern computer calculates with floating point numbers. You can get fifteen or thirty or, if you really need, sixty digits of precision for the cosine of 1.17 degrees. So you can get twelve or twenty-seven or fifty-seven digits for the versine of 1.17 degrees. We don’t need to look this up in a table constructed by someone working out formulas carefully.

This, I have to warn, doesn’t always work. Versine formulas for things like distance work pretty well with small angles. There are other angles for which they work badly. You have to calculate in different orders and maybe use other functions in that case. Part of numerical computing is selecting the way to compute the thing you want to do. Versines are for some kinds of problems a good way.

There are other advantages versines offered back when computing was difficult. In spherical trigonometry calculations they can let one skip steps demanding squares and square roots. If you do need to take a square root, you have the assurance that the versine will be non-negative. You don’t have to check that you aren’t slipping complex-valued numbers into your computation.
If you need to take a logarithm, similarly, you know you don’t have to deal with the log of a negative number. (You still have to do something to avoid taking the logarithm of zero, but we can’t have everything.)

So this is what the versine offered. You could get precision that just using a cosine table wouldn’t necessarily offer. You could dodge numerical instabilities. You could save steps, in calculations and in thinking what to calculate. These are all good things. We can respect that. We enjoy now a computational abundance, which makes the things versine gave us seem like absurd penny-pinching. If computing were hard again, we might see the versine recovered from obscurity to, at least, having more special interest.

Wikipedia tells me that there are still specialized uses for the versine, or for the haversine. Recent decades, apparently, have found them useful tools for calculating lunar distances, and sight reductions. The lunar distance is the angular separation between the Moon and some other body in the sky. Sight reduction is calculating positions from the apparent positions of reference objects. These are not problems that I work on often. But I would appreciate that we are still finding ways to do them well.

I mentioned that besides the versine there was a coversine and a haversine. There’s also a havercosine, and then some even more obscure functions with no less wonderful names like the exsecant. I cannot imagine needing a hacovercosine, except maybe to someday meet an obscure poetic meter, but I am happy to know such a thing is out there in case. A person on Wikipedia’s Talk page about the versine wished to know if we could define a vertangent and some other terms. We can, of course, but apparently no one has found a need for such a thing. If we find a problem that we would like to solve that they do well, this may change.

Thank you for reading. This and the other essays for the Fall 2019 A to Z should appear at this link. We are coming up to the final four essays and I’m certainly excited by that. All the past A to Z essays ought to be at this link, and when I have a free afternoon to fix some things, they will be.

## My 2019 Mathematics A To Z: Taylor Series

Today’s A To Z term was nominated by APMA, author of the Everybody Makes DATA blog. It was a topic that delighted me to realize I could explain. Then it started to torment me as I realized there is a lot to explain here, and I had to pick something. So here’s where things ended up.

# Taylor Series.

In the mid-2000s I was teaching at a department being closed down. In its last semester I had to teach Computational Quantum Mechanics. The person who’d normally taught it had transferred to another department. But a few last majors wanted the old department’s version of the course, and this pressed me into the role. Teaching a course you don’t really know is a rush. It’s a semester of learning, and trying to think deeply enough that you can convey something to students. This while all the regular demands of the semester eat your time and working energy. And this in the leap of faith that the syllabus you made up, before you truly knew the subject, will be nearly enough right. And that you have not committed to teaching something you do not understand.

So around mid-course I realized I needed to explain finding the wave function for a hydrogen atom with two electrons. The wave function is this probability distribution. You use it to find things like the probability a particle is in a certain area, or has a certain momentum.
Things like that. A proton with one electron is as much as I’d ever done, as a physics major. We treat the proton as the center of the universe, immobile, and the electron hovers around that somewhere. Two electrons, though? A thing repelling your electron, and repelled by your electron, and neither of those having fixed positions? What the mathematics of that must look like terrified me. When I couldn’t procrastinate it farther I accepted my doom and read exactly what it was I should do. It turned out I had known what I needed for nearly twenty years already. Got it in high school.

Of course I’m discussing Taylor Series. The equations were loaded down with symbols, yes. But at its core, the important stuff, was this old and trusted friend. The premise behind a Taylor Series is even older than that. It’s universal. If you want to do something complicated, try doing the simplest thing that looks at all like it. And then make that a little bit more like you want. And then a bit more. Keep making these little improvements until you’ve got it as right as you truly need. Put that vaguely, the idea describes Taylor series just as well as it describes making a video game or painting a state portrait. We can make it more specific, though.

A series, in this context, means the sum of a sequence of things. This can be finitely many things. It can be infinitely many things. If the sum makes sense, we say the series converges. If the sum doesn’t, we say the series diverges. When we first learn about series, the sequences are all numbers. $1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \cdots$, for example, which diverges. (It adds to a number bigger than any finite number.) Or $1 + \frac{1}{2^2} + \frac{1}{3^2} + \frac{1}{4^2} + \cdots$, which converges. (It adds to $\frac{1}{6}\pi^2$.)

In a Taylor Series, the terms are all polynomials. They’re simple polynomials. Let me call the independent variable ‘x’. Sometimes it’s ‘z’, for the reasons you would expect. (‘x’ usually implies we’re looking at real-valued functions. ‘z’ usually implies we’re looking at complex-valued functions. ‘t’ implies it’s a real-valued function with an independent variable that represents time.) Each of these terms is simple. Each term is the distance between x and a reference point, raised to a whole power, and multiplied by some coefficient. The reference point is the same for every term. What makes this potent is that we use, potentially, many terms. Infinitely many terms, if need be.

Call the reference point ‘a’. Or if you prefer, $x_0$. $z_0$ if you want to work with z’s. You see the pattern. This ‘a’ is the “point of expansion”. The coefficients of each term depend on the original function at the point of expansion. The coefficient for the term that has $(x - a)$ is the first derivative of f, evaluated at a. The coefficient for the term that has $(x - a)^2$ is the second derivative of f, evaluated at a (times a number that’s the same for the squared-term for every Taylor Series). The coefficient for the term that has $(x - a)^3$ is the third derivative of f, evaluated at a (times a different number that’s the same for the cubed-term for every Taylor Series).

You’ll never guess what the coefficient for the term with $(x - a)^{122,743}$ is. Nor will you ever care. The only reason you would wish to is to answer an exam question. The instructor will, in that case, have a function that’s either the sine or the cosine of x. The point of expansion will be 0, $\frac{\pi}{2}$, $\pi$, or $\frac{3\pi}{2}$.
Otherwise you will trust that this is one of the terms of $(x - a)^n$, ‘n’ representing some counting number too great to be interesting. All the interesting work will be done with the Taylor series either truncated to a couple terms, or continued on to infinitely many. What a Taylor series offers is the chance to approximate a function we’re genuinely interested in with a polynomial. This is worth doing, usually, because polynomials are easier to work with. They have nice analytic properties. We can automate taking their derivatives and integrals. We can set a computer to calculate their value at some point, if we need that. We might have no idea how to start calculating the logarithm of 1.3. We certainly have an idea how to start calculating $0.3 - \frac{1}{2}(0.3^2) + \frac{1}{3}(0.3^3)$. (Yes, it’s 0.3. I’m using a Taylor series with a = 1 as the point of expansion.) The first couple terms tell us interesting things. Especially if we’re looking at a function that represents something physical. The first two terms tell us where an equilibrium might be. The next term tells us whether an equilibrium is stable or not. If it is stable, it tells us how perturbations, points near the equilibrium, behave. The first couple terms will describe a line, or a quadratic, or a cubic, some simple function like that. Usually adding more terms will make this Taylor series approximation a better fit to the original. There might be a larger region where the polynomial and the original function are close enough. Or the difference between the polynomial and the original function will be closer together on the same old region. We would really like that region to eventually grow to the whole domain of the original function. We can’t count on that, though. Roughly, the interval of convergence will stretch from ‘a’ to wherever the first weird thing happens. Weird things are, like, discontinuities. Vertical asymptotes. Anything you don’t like dealing with in the original function, the Taylor series will refuse to deal with. Outside that interval, the Taylor series diverges and we just can’t use it for anything meaningful. Which is almost supernaturally weird of them. The Taylor series uses information about the original function, but it’s all derivatives at a single point. Somehow the derivatives of, say, the logarithm of x around x = 1 give a hint that the logarithm of 0 is undefinable. And so they won’t help us calculate the logarithm of 3. Things can be weirder. There are functions that just break Taylor series altogether. Some are obvious. A function needs lots of derivatives at a point to have a good Taylor series approximation. So, many fractal curves won’t have a Taylor series approximation. These curves are all corners, points where they aren’t continuous or where derivatives don’t exist. Some are obviously designed to break Taylor series approximations. We can make a function that follows different rules if x is rational than if x is irrational. There’s no approximating that, and you’d blame the person who made such a function, not the Taylor series. It can be subtle. The function defined by the rule $f(x) = \exp{-\frac{1}{x^2}}$, with the note that if x is zero then f(x) is 0, seems to satisfy everything we’d look for. It’s a function that’s mostly near 1, that drops down to being near zero around where x = 0. But its Taylor series expansion around a = 0 is a horizontal line always at 0. The interval of convergence can be a single point, challenging our idea of what an interval is. That’s all right. 
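Here is a small sketch in Python of both behaviors, using the logarithm series mentioned above with a = 1 as the point of expansion. Inside the interval of convergence, at x = 1.3, the partial sums settle onto the true value; at x = 3, outside it, they blow up instead of helping. The particular numbers of terms are my own choices.

```python
import math

def log_taylor(x, terms):
    """Partial sum of the Taylor series of log(x) about the point of expansion a = 1."""
    t = x - 1.0
    return sum((-1) ** (n + 1) * t ** n / n for n in range(1, terms + 1))

for x in (1.3, 3.0):
    print(f"x = {x}, true log = {math.log(x):.6f}")
    for terms in (2, 5, 10, 20):
        print(f"  {terms:>2} terms: {log_taylor(x, terms):.6f}")
```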
If we can trust that we’re avoiding weird parts, Taylor series give us an outstanding new tool. Grant that the Taylor series describes a function with the same rule as our original function. The Taylor series is often easier to work with, especially if we’re working on differential equations. We can automate, or at least find formulas for, taking the derivative of a polynomial. Or adding together derivatives of polynomials. Often we can attack a differential equation too hard to solve otherwise by supposing the answer is a polynomial. This is essentially what that quantum mechanics problem used, and why the tool was so familiar when I was in a strange land.

Roughly. What I was actually doing was treating the function I wanted as a power series. This is, like the Taylor series, the sum of a sequence of terms, all of which are $(x - a)^n$ times some coefficient. What makes it not a Taylor series is that the coefficients weren’t the derivatives of any function I knew to start. But the experience of Taylor series trained me to look at functions as things which could be approximated by polynomials.

This gives us the hint to look at other series that approximate interesting functions. We get a host of these, with names like Laurent series and Fourier series and Chebyshev series and such. Laurent series look like Taylor series but we allow powers to be negative integers as well as positive ones. Fourier series do away with polynomials. They instead use trigonometric functions, sines and cosines. Chebyshev series build on polynomials, but not on pure powers. They’ll use orthogonal polynomials. These behave like perpendicular directions do. That orthogonality makes many numerical techniques behave better.

The Taylor series is a great introduction to these tools. Its first several terms have good physical interpretations. Its calculation requires tools we learn early on in calculus. The habits of thought it teaches guide us even in unfamiliar territory.

And I feel very relieved to be done with this. I often have a few false starts to an essay, but those are mostly before I commit words to text editor. This one had about four branches that now sit in my scrap file. I’m glad to have a deadline forcing me to just publish already. Thank you, though. This and the essays for the Fall 2019 A to Z should be at this link. Next week: the letters U and V. And all past A to Z essays ought to be at this link.

## My 2019 Mathematics A To Z: Quadrature

I got a good nomination for a Q topic, thanks again to goldenoj. It was for Qualitative/Quantitative. Either would be a good topic, but they make a natural pairing. They describe the things mathematicians look for when modeling things. But ultimately I couldn’t find an angle that I liked. So rather than carry on with an essay that wasn’t working I went for a topic of my own. Might come back around to it, though, especially if nothing good presents itself for the letter X, which will probably need to be a wild card topic anyway.

# Quadrature.

We like comparing sizes. I talked about that some with norms. We do the same with shapes, though. We’d like to know which one is bigger than another, and by how much. We rely on squares to do this for us. It could be any shape, but we in the western tradition chose squares. I don’t know why. My guess, unburdened by knowledge, is the ancient Greek tradition of looking at the shapes one can make with straightedge and compass. The easiest shape these tools make is, of course, circles.
But it’s hard to find a circle with the same area as, say, any old triangle. Squares are probably a next-best thing. I don’t know why not equilateral triangles or hexagons. Again I would guess that the ancient Greeks had more rectangular or square rooms than they did triangles or hexagons, and went with what they knew.

So that’s what lurks behind that word “quadrature”. It may be hard for us to judge whether this pentagon is bigger than that octagon. But if we find squares that are the same size as the pentagon and the octagon, great. We can spot which of the squares is bigger, and by how much.

Straightedge-and-compass lets you find the quadrature for many shapes. Like, take a rectangle. Let me call that ABCD. Let’s say that AB is one of the long sides and BC one of the short sides. OK. Extend AB, outwards, to another point that I’ll call E. Pick E so that the length of BE is the same as the length of BC. Next, bisect the line segment AE. Call that point F. F is going to be the center of a new semicircle, one with radius FE. Draw that in, on the side of AE that’s opposite the point C. Because we are almost there. Extend the line segment CB upwards, until it touches this semicircle. Call the point where it touches G. The line segment BG is the side of a square with the same area as the original rectangle ABCD. If you know enough straightedge-and-compass geometry to do that bisection, you know enough to turn BG into a square. If you’re not sure why that’s the correct length, you can get there quickly. Use a little algebra and the Pythagorean theorem.

Neat, yeah, I agree. Also neat is that you can use the same trick to find the area of a parallelogram. A parallelogram has the same area as a rectangle with the same bases and height between them, you remember. So take your parallelogram, draw in some perpendiculars to shear that off into a rectangle, and find the quadrature of that rectangle. You’ve got the quadrature of your parallelogram.

Having the quadrature of a parallelogram lets you find the quadrature of any triangle. Pick one of the sides of the triangle as the base. You have a third point not on that base. Draw in the parallel to that base that goes through that third point. Then choose one of the other two sides. Draw the parallel to that side which goes through the remaining vertex. Look at that: you’ve got a parallelogram with twice the area of your original triangle. Bisect either the base or the height of this parallelogram, as you like. Then follow the rules for the quadrature of a parallelogram, and you have the quadrature of your triangle.

Yes, you’re doing a lot of steps in-between the triangle you started with and the square you ended with. Those steps don’t count, not by this measure. Getting the results right matters.

And here’s some more beauty. You can find the quadrature for any polygon. Remember how you can divide any polygon into triangles? Go ahead and do that. Find the quadrature for every one of those triangles then. And you can create a square that has an area as large as all those squares put together. I’ll refrain from saying quite how, because realizing how is such a delight, one of those moments that at least made me laugh at how of course that’s how. It’s through one of those things that even people who don’t know mathematics know about.

With that background you understand why people thought the quadrature of the circle ought to be possible. Moreso when you know that the lune, a particular crescent-moon-like shape, can be squared.
The lune looks so close to a half-circle that it’s obvious the rest should be possible. It’s not, and it took two thousand years and a completely different idea of geometry to prove it. But it sure looks like it should be possible.

Along the way to modernity quadrature picked up a new role. This is as part of calculus. One of the legs of calculus is integration. There is an interpretation of what the (definite) integral of a function means so common that we sometimes forget it doesn’t have to be that. This is to say that the integral of a function is the area “underneath” the curve. That is, it’s the area bounded by the limits of integration, by the horizontal axis, and by the curve represented by the function. If the function is sometimes less than zero, within the limits of integration, we’ll say that the integral represents the “net area”. Then we allow that the net area might be less than zero. Then we ignore the scolding looks of the ancient Greek mathematicians.

No matter. We love being able to find “the” integral of a function. This is a new function, and evaluating it tells us what this net area bounded by the limits of integration is. Finding this is “integration by quadrature”. At least in books published back when they wrote words like “to-day” or “coördinate”. My experience is that the term’s passed out of the vernacular, at least in North American Mathematician’s English. Anyway the real flaw is that there are, like, six functions we can find the integral for. For the rest, we have to make do with approximations.

This gives us “numerical quadrature”, a phrase which still has some currency. And with my prologue about compass-and-straightedge quadrature you can see why it’s called that. Numerical integration schemes often rely on finding a polygon with a part that looks like a graph of the function you’re interested in. The other edges look like the limits of the integration. Then the area of that polygon should be close to the area “underneath” this function. So it should be close to the integral of the function you want. And we’re old hands at the quadrature of polygons, since we talked that out like five hundred words ago.

Now, no person ever has or ever will do numerical quadrature by compass-and-straightedge on some function. So why call it “numerical quadrature” instead of just “numerical integration”? Style, for one. “Quadrature” as a word has a nice tone, clearly jargon but not threateningly alien. Also “numerical integration” often connotes solving differential equations numerically. So it can clarify whether you’re evaluating integrals or solving differential equations. If you think that’s a distinction worth making. Evaluating integrals and solving differential equations are closely related work anyway.

And there is another adjective that often attaches to quadrature. This is Gaussian Quadrature. Gaussian Quadrature is, in principle, a fantastic way to do numerical integration perfectly. For some problems. For some cases. The insight which justifies it to me is one of those boring little theorems you run across in the chapter introducing How To Integrate. It runs something like this. Suppose ‘f’ is a continuous function, with domain the real numbers and range the real numbers. Suppose a and b are the limits of integration. Then there’s at least one point c, between a and b, for which:

$\int_a^b f(x) dx = f(c) \cdot (b - a)$

So if you could pick the right c, any integration would be so easy. Evaluate the function for one point and multiply it by whatever b minus a is.
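Here is a tiny numerical check of that theorem, a toy example of my own rather than anything from the essay. For $f(x) = x^2$ on the interval from 0 to 2 the integral is $\frac{8}{3}$, so the promised c satisfies $c^2 = \frac{4}{3}$, and bisection finds it:

```python
from math import sqrt

def f(x):
    return x * x

a, b = 0.0, 2.0
integral = 8.0 / 3.0          # exact integral of x^2 from 0 to 2
target = integral / (b - a)   # the theorem promises f(c) equals this for some c

# f is increasing on [0, 2], so bisection can hunt for the c with f(c) = target.
lo, hi = a, b
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f(mid) < target:
        lo = mid
    else:
        hi = mid

print(lo, 2 / sqrt(3))   # both print roughly 1.1547
```

Of course the only reason this works is that I already knew the value of the integral going in.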
The catch is, you don’t know what c is. Except there’s some cases where you kinda do. Like, if f is a line, rising or falling with a constant slope from a to b? Then have c be the midpoint of a and b. That won’t always work. Like, if f is a parabola on the region from a to b, then c is not going to be the midpoint. If f is a cubic, then the midpoint is probably not c. And so on. And if you don’t know what kind of function f is? There’s no guessing where c will be.

But. If you decide you’re only trying to integrate certain kinds of functions? Then you can do all right. If you decide you only want to integrate polynomials, for example, then … well, you’re not going to find a single point c for this. But what you can find is a set of points between a and b. Evaluate the function for those points. And then find a weighted average by rules I’m not getting into here. And that weighted average will be exactly that integral.

Of course there’s limits. The Gaussian Quadrature of a function is only possible if you can evaluate the function at arbitrary points. If you’re trying to integrate, like, a set of sample data it’s inapplicable. The points you pick, and the weighting to use, depend on what kind of function you want to integrate. The results will be worse the less your function is like what you supposed. It’s tedious to find what these points are for a particular assumption of function. But you only have to do that once, or look it up, if you know (say) you’re going to use polynomials of degree up to six or something like that. And there are variations on this. They have names like the Chebyshev-Gauss Quadrature, or the Hermite-Gauss Quadrature, or the Jacobi-Gauss Quadrature. There are even some that don’t have Gauss’s name in them at all.

Despite that, you can get through a lot of mathematics not talking about quadrature. The idea implicit in the name, that we’re looking to compare areas of different things by looking at squares, is obsolete. It made sense when we worked with numbers that depended on units. One would write about a shape’s area being four times another shape’s, or the length of its side some multiple of a reference length. We’ve grown comfortable thinking of raw numbers. It makes implicit the step where we divide the polygon’s area by the area of some standard reference unit square. This has advantages. We don’t need different vocabulary to think about integrating functions of one or two or ten independent variables. We don’t need wordy descriptions like “the area of this square is to the area of that as the second power of this square’s side is to the second power of that square’s side”. But it does mean we don’t see squares as intermediaries to understanding different shapes anymore.

Thank you again for reading. This essay and all the others written for the Fall 2019 A to Z should be at this link. This should include, later this week, something for the letter R. And all of the A to Z essays ought to be at this link.

## My 2019 Mathematics A To Z: Linear Programming

Today’s A To Z term is another proposed by @aajohannas. I couldn’t find a place to fit this in the essay proper. But it’s too good to leave out. The simplex method, discussed within, traces to George Dantzig. He’d been developing planning methods for the US Army Air Forces during the Second World War. Dantzig is a person you have heard about, if you’ve heard any mathematical urban legends. In 1939 he was late to Jerzy Neyman’s class. He took two statistics problems on the board to be homework.
He found them “harder than usual”, but solved them in a couple days and turned in the late homework hoping Neyman would be understanding. They weren’t homework. They were examples of famously unsolved problems. Within weeks Neyman had written one of the solutions up for publication. When he needed a thesis topic Neyman advised him to just put what he already had in a binder. It’s the stuff every grad student dreams of. The story mutated. It picked up some glurge to become a narrative about positive thinking. And mutated further, into the movie Good Will Hunting. The story gets better, for my purposes. The simplex method can be understood as one of those homework problems. Dantzig describes some of that in this 1987 essay about the origins of the method. The essay is worth reading to understand some of how people come to think of good mathematics. # Linear Programming. Every three days one of the comic strips I read has the elderly main character talk about how they never used algebra. This is my hyperbole. But mathematics has got the reputation for being difficult and inapplicable to everyday life. We’ll concede using arithmetic, when we get angry at the fast food cashier who hands back our two pennies before giving change for our $6.77 hummus wrap. But otherwise, who knows what an elliptic integral is, and whether it’s working properly? Linear programming does not have this problem. In part, this is because it lacks a reputation. But those who have heard of it, acknowledge it as immensely practical mathematics. It is about something a particular kind of human always finds compelling. That is how to do a thing best. There are several kinds of “best”. There is doing a thing in as little time as possible. Or for as little effort as possible. For the greatest profit. For the highest capacity. For the best score. For the least risk. The goals have a thousand names, none of which we need to know. They all mean the same thing. They mean “the thing we wish to optimize”. To optimize has two directions, which are one. The optimum is either the maximum or the minimum. To be good at finding a maximum is to be good at finding a minimum. It’s obvious why we call this “programming”; obviously, we leave the work of finding answers to a computer. It’s a spurious reason. The “programming” here comes from an independent sense of the word. It means more about finding a plan. Think of “programming” a night’s entertainment, so that every performer gets their turn, all scene changes have time to be done, you don’t put two comedians right after the other, and you accommodate the performer who has to leave early and the performer who’ll get in an hour late. Linear programming problems are often about finding how to do as well as possible given various priorities. All right. At least the “linear” part is obvious. A mathematics problem is “linear” when it’s something we can reasonably expect to solve. This is not the technical meaning. Technically what it means is we’re looking at a function something like: $ax + by + cz$ Here, x, y, and z are the independent variables. We don’t know their values but wish to. a, b, and c are coefficients. These values are set to some constant for the problem, but they might be something else for other problems. They’re allowed to be positive or negative or even zero. If a coefficient is zero, then the matching variable doesn’t affect matters at all. The corresponding value can be anything at all, within the constraints. 
I’ve written this for three variables, as an example and because ‘x’ and ‘y’ and ‘z’ are comfortable, familiar variables. There can be fewer. There can be more. There almost always are. Two- and three-variable problems will teach you how to do this kind of problem. They’re too simple to be interesting, usually. To avoid committing to a particular number of variables we can use indices. $x_j$ for values of j from 1 up to N. Or we can bundle all these values together into a vector, and write everything as $\vec{x}$. This has a particular advantage, since we can write the coefficients as a vector too. Then we use the notation of linear algebra, and write that we hope to maximize the value of:

$\vec{c}^T\vec{x}$

(The superscript T means “transpose”. As a linear algebra problem we’d usually think of writing a vector as a tall column of things. By transposing that we write a long row of things. By transposing we can use the notation of matrix multiplication.)

This is the objective function. Objective here in the sense of goal; it’s the thing we want to find the best possible value of.

We have constraints. These represent limits on the variables. The variables are always things that come in limited supply. There’s no allocating more money than the budget allows, nor putting more people on staff than work for the company. Often these constraints interact. Perhaps not only is there only so much staff, but no one person can work more than a set number of days in a row. Something like that. That’s all right. We can write all these constraints as a matrix equation. An inequality, properly. We can bundle all the constraints into a big matrix named A, and demand:

$A\vec{x} \le \vec{b}$

Also, traditionally, we suppose that every component of $\vec{x}$ is non-negative. That is, positive, or at lowest, zero. This reflects the field’s core problems of figuring how to allocate resources. There’s no allocating less than zero of something.

But we need some bounds. This is easiest to see with a two-dimensional problem. Try it yourself: draw a pair of axes on a sheet of paper. Now put in a constraint. Doesn’t matter what. The constraint’s edge is a straight line, which you can draw at any position and any angle you like. This includes horizontal and vertical. Shade in one side of the constraint. Whatever you shade in is the “feasible region”, the set of values allowed under the constraint. Now draw in another line, another constraint. Shade in one side or the other of that. Draw in yet another line, another constraint. Shade in one side or another of that. The “feasible region” is whatever points have taken on all these shades. If you were lucky, this is a bounded region, a triangle. If you weren’t lucky, it’s not bounded. It’s maybe got some corners but goes off to the edge of the page where you stopped shading things in.

So adding that every component of $\vec{x}$ is at least as big as zero is a backstop. It means we’ll usually get a feasible region with a finite volume. What was the last project you worked on that had no upper limits for anything, just minimums you had to satisfy? Anyway if you know you need something to be allowed less than zero go ahead. We’ll work it out. The important thing is there’s finite bounds on all the variables.

I didn’t see the bounds you drew. It’s possible you have a triangle with all three shades inside. But it’s also possible you picked the other sides to shade, and you have an annulus, with no region having more than two shades in it. This can happen.
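If your shades did all overlap, and you’d rather see the algebra above at work than keep shading by hand, here’s a small sketch. The particular numbers, and the use of scipy’s linprog, are my own choices for illustration, not anything from the essay. Note that linprog minimizes, so to maximize we hand it the negated coefficients.

```python
import numpy as np
from scipy.optimize import linprog

# Maximize 3x + 2y subject to
#   x +  y <= 4
#   x + 3y <= 6
#   x >= 0, y >= 0
c = np.array([3.0, 2.0])
A = np.array([[1.0, 1.0],
              [1.0, 3.0]])
b = np.array([4.0, 6.0])

# linprog minimizes, so negate the objective to get a maximization.
result = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)], method="highs")

print(result.x)      # the optimizing values of x and y
print(-result.fun)   # the maximized value of 3x + 2y
```

Whatever harmless numbers you put in, the reported optimum sits at a corner of the feasible region, which is where the essay is heading next.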
Back to the shading. If your shaded regions have no points in common, that means it’s impossible to satisfy all the constraints at once. At least one of them has to give. You may be reminded of the sign taped to the wall of your mechanic’s about picking two of good-fast-cheap.

But impossibility is at least easy. What if there is a feasible region? Well, we have reason to hope. The optimum has to be somewhere inside the region, that’s clear enough. And it even has to be on the edge of the region. If you’re not seeing why, think of a simple example, like, finding the maximum of $2x + y$, inside the square where x is between 0 and 2 and y is between 0 and 3. Suppose you had a putative maximum on the inside, like, where x was 1 and y was 2. What happens if you increase x a tiny bit? If you increase y by twice that? No, it’s only on the edges you can get a maximum that can’t be locally bettered. And only on the corners of the edges, at that. (This doesn’t prove the case. But it is what the proof gets at.)

So the problem sounds simple then! We just have to try out all the vertices and pick the maximum (or minimum) from them all. OK, and here’s where we start getting into trouble. With two variables and, like, three constraints? That’s easy enough. That’s like five points to evaluate? We can do that. We never need to do that. If someone’s hiring you to test five combinations I admire your hustle and need you to start getting me consulting work. A real problem will have many variables and many constraints. The feasible region will most often look like a multifaceted gemstone. It’ll extend into more than three dimensions, usually. It’s all right if you just imagine the three, as long as the gemstone is complicated enough. Because now we’ve got lots of vertices. Maybe more than we really want to deal with.

So what’s there to do? The basic approach, the one that’s foundational to the field, is the simplex method. A “simplex” is a triangle. In two dimensions, anyway. In three dimensions it’s a tetrahedron. In one dimension it’s a line segment. Generally, however many dimensions of space you have? The simplex is the simplest thing that fills up volume in your space.

You know how you can turn any polygon into a bunch of triangles? Just by connecting enough vertices together? You can turn a polyhedron into a bunch of tetrahedrons, by adding faces that connect trios of vertices. And for polyhedron-like shapes in more dimensions? We call those polytopes. Polytopes we can turn into a bunch of simplexes. So this is why it’s the “simplex method”. It’s easy to check the vertices on any one simplex. And we can turn the polytope into a bunch of simplexes. And we can ignore all the interior vertices of the simplexes.

So here’s the simplex method. First, break your polytope up into simplexes. Next, pick any simplex; doesn’t matter which. Pick any outside vertex of that simplex. This is the first viable possible solution. It’s most likely wrong. That’s okay. We’ll make it better.

Because there are other vertices on this simplex. And there are other simplexes, adjacent to that first, which share this vertex. Test the vertices that share an edge with this one. Is there one that improves the objective function? Probably. Is there a best one of those in this simplex? Sure. So now that’s our second viable possible solution. If we had to give an answer right now, that would be our best guess. But this new vertex, this new tentative solution? It shares edges with other vertices, across several simplexes. So look at these new neighbors. Are any of them an improvement?
Which one of them is the best improvement? Move over there. That’s our next tentative solution. You see where this is going. Keep at this. Eventually it’ll wind to a conclusion. Usually this works great. If you have, like, 8 constraints, you can usually expect to get your answer in from 16 to 24 iterations. If you have 20 constraints, expect an answer in from 40 to 60 iterations. This is doing pretty well. But it might take a while. It’s possible for the method to “stall” a while, often because one or more of the variables is at its constraint boundary. Or the division of polytope into simplexes got unlucky, and it’s hard to get to better solutions. Or there might be a string of vertices that are all at, or near, the same value, so the simplex method can’t resolve where to “go” next. In the worst possible case, the simplex method takes a number of iterations that grows exponentially with the number of constraints. This, yes, is very bad. It doesn’t typically happen. It’s a numerical algorithm. There’s some problem to spoil any numerical algorithm. You may have complaints. Like, the world is complicated. Why are we only looking at linear objective functions? Or, why only look at linear constraints? Well, if you really need to do that? Go ahead, but that’s not linear programming anymore. Think hard about whether you really need that, though. Linear anything is usually simpler than nonlinear anything. I mean, if your optimization function absolutely has to have $y^2$ in it? Could we just say you have a new variable $w$ that just happens to be equal to the square of y? Will that work? If you have to have the sine of z? Are you sure that z isn’t going to get outside the region where the sine of z is pretty close to just being z? Can you check? Maybe you have, and there’s just nothing for it. That’s all right. This is why optimization is a living field of study. It demands judgement and thought and all that hard work. Thank you for reading. This and all the other Fall 2019 A To Z posts should be at this link. They should be joined next week by letters ‘M’ and ‘N’. Also next week I hope to open for nominations for the next set of letters. All of my past A To Z essays should be available at this link. ## My 2019 Mathematics A To Z: Julia set Today’s A To Z term is my pick again. So I choose the Julia Set. This is named for Gaston Julia, one of the pioneers in chaos theory and fractals. He was born earlier than you imagine. No, earlier than that: he was born in 1893. The early 20th century saw amazing work done. We think of chaos theory and fractals as modern things, things that require vast computing power to understand. The computers help, yes. But the foundational work was done more than a century ago. Some of these pioneering mathematicians may have been able to get some numerical computing done. But many did not. They would have to do the hard work of thinking about things which they could not visualize. Things which surely did not look like they imagined. # Julia set. We think of things as moving. Even static things we consider as implying movement. Else we’d think it odd to ask, “Where does that road go?” This carries over to abstract things, like mathematical functions. A function is a domain, a range, and a rule matching things in the domain to things in the range. It “moves” things as much as a dictionary moves words. Yet we still think of a function as expressing motion. A common way for mathematicians to write functions uses little arrows, and describes what’s done as “mapping”. 
We might write $f: D \rightarrow R$. This is a general idea. We’re expressing that it maps things in the set D to things in the set R. We can use the notation to write something more specific. If ‘z’ is in the set D, we might write $f : z \rightarrow z^2 + \frac{1}{2}$. This describes the rule that matches things in the domain to things in the range. $f(2)$ represents the evaluation of this rule at a specific point, the one where the independent variable has the value ‘2’. $f(z)$ represents the evaluation of this rule at a specific point without committing to what that point is. $f(D)$ represents a collection of points. It’s the set you get by evaluating the rule at every point in D. And it’s not bad to think of motion. Many functions are models of things that move. Particles in space. Fluids in a room. Populations changing in time. Signal strengths varying with a sensor’s position. Often we’ll calculate the development of something iteratively, too. If the domain and the range of a function are the same set? There’s no reason that we can’t take our z, evaluate f(z), and then take whatever that thing is and evaluate f(f(z)). And again. And again. My age cohort, at least, learned to do this almost instinctively when we discovered you could take the result on a calculator and hit a function again. Calculate something and keep hitting square root; you get a string of numbers that eventually settle on 1. Or you started at zero. Calculate something and keep hitting square; you settle at either 0, 1, or grow to infinity. Hitting sine over and over … well, that was interesting since you might settle on 0 or some other, weird number. Same with tangent. Cosine you wouldn’t settle down to zero. Serious mathematicians look at this stuff too, though. Take any set ‘D’, and find what its image is, f(D). Then iterate this, figuring out what f(f(D)) is. Then f(f(f(D))). f(f(f(f(D)))). And so on. What happens if you keep doing this? Like, forever? We can say some things, at least. Even without knowing what f is. There could be a part of D that all these many iterations of f will send out to infinity. There could be a part of D that all these many iterations will send to some fixed point. And there could be a part of D that just keeps getting shuffled around without ever finishing. Some of these might not exist. Like, $f: z \rightarrow z + 4$ doesn’t have any fixed points or shuffled-around points. It sends everything off to infinity. $f: z \rightarrow \frac{1}{10} z$ has only a fixed point; nothing from it goes off to infinity and nothing’s shuffled back and forth. $f: z \rightarrow -z$ has a fixed point and a lot of points that shuffle back and forth. Thinking about these fixed points and these shuffling points gets us Julia Sets. These sets are the fixed points and shuffling-around points for certain kinds of functions. These functions are ones that have domain and range of the complex-valued numbers. Complex-valued numbers are the sum of a real number plus an imaginary number. A real number is just what it says on the tin. An imaginary number is a real number multiplied by $\imath$. What is $\imath$? It’s the imaginary unit. It has the neat property that $\imath^2 = -1$. That’s all we need to know about it. Oh, also, zero times $\imath$ is zero again. So if you really want, you can say all real numbers are complex numbers; they’re just themselves plus $0 \imath$. Complex-valued functions are worth a lot of study in their own right. 
Better, they’re easier to study (at the introductory level) than real-valued functions are. This is such a relief to the mathematics major. And now let me explain some little nagging weird thing. I’ve been using ‘z’ to represent the independent variable here. You know, using it as if it were ‘x’. This is a convention mathematicians use, when working with complex-valued numbers. An arbitrary complex-valued number tends to be called ‘z’. We haven’t forgotten x, though. We just in this context use ‘x’ to mean “the real part of z”. We also use “y” to carry information about the imaginary part of z. When we write ‘z’ we hold in trust an ‘x’ and ‘y’ for which $z = x + y\imath$. This all comes in handy. But we still don’t have Julia Sets for every complex-valued function. We need it to be a rational function. The name evokes rational numbers, but that doesn’t seem like much guidance. $f:z \rightarrow \frac{3}{5}$ is a rational function. It seems too boring to be worth studying, though, and it is. A “rational function” is a function that’s one polynomial divided by another polynomial. This whether they’re real-valued or complex-valued polynomials. So. Start with an ‘f’ that’s one complex-valued polynomial divided by another complex-valued polynomial. Start with the domain D, all of the complex-valued numbers. Find f(D). And f(f(D)). And f(f(f(D))). And so on. If you iterated this ‘f’ without limit, what’s the set of points that never go off to infinity? That’s the Julia Set for that function ‘f’. There are some famous Julia sets, though. There are the Julia sets that we heard about during the great fractal boom of the 1980s. This was when computers got cheap enough, and their graphic abilities good enough, to automate the calculation of points in these sets. At least to approximate the points in these sets. And these are based on some nice, easy-to-understand functions. First, you have to pick a constant C. This C is drawn from the complex-valued numbers. But that can still be, like, ½, if that’s what interests you. For whatever your C is? Define this function: $f_C: z \rightarrow z^2 + C$ And that’s it. Yes, this is a rational function. The numerator function is $z^2 + C$. The denominator function is $1$. This produces many different patterns. If you picked C = 0, you get a circle. Good on you for starting out with something you could double-check. If you picked C = -2? You get a long skinny line, again, easy enough to check. If you picked C = -1? Well, now you have a nice interesting weird shape, several bulging ovals with peninsulas of other bulging ovals all over. Pick other numbers. Pick numbers with interesting imaginary components. You get pinwheels. You get jagged streaks of lightning. You can even get separate islands, whole clouds of disjoint threatening-looking blobs. There is some guessing what you’ll get. If you work out a Julia Set for a particular C, you’ll see a similar-looking Julia Set for a different C that’s very close to it. This is a comfort. You can create a Julia Set for any rational function. I’ve only ever seen anyone actually do it for functions that look like what we already had. $z^3 + C$. Sometimes $z^4 + C$. I suppose once, in high school, I might have tried $z^5 + C$ but I don’t remember what it looked like. If someone’s done, say, $\frac{1}{z^2 + C}$ please write in and let me know what it looks like. The Julia Set has a famous partner. Maybe the most famous fractal of them all, the Mandelbrot Set. 
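Before getting to the Mandelbrot Set: if you want to look at one of these Julia sets without 1980s graphics hardware, here is a minimal escape-time sketch for the $z^2 + C$ family described above. It’s my own toy code, not anything from the essay, and it only approximates the idea: it marks the starting points that fail to wander far away within a fixed number of iterations.

```python
def escape_count(z, c, limit=60, radius=2.0):
    """Iterate z -> z*z + c, counting steps until |z| exceeds the radius."""
    for step in range(limit):
        if abs(z) > radius:
            return step
        z = z * z + c
    return limit   # never escaped within the limit; treat as staying bounded

c = complex(-1.0, 0.0)   # swap in other constants for other shapes
for im in range(10, -11, -2):
    row = ""
    for re in range(-20, 21):
        z = complex(re / 10.0, im / 10.0)
        row += "#" if escape_count(z, c) == 60 else "."
    print(row)
```

For $C = -1$ the hash marks trace out those bulging ovals; constants with interesting imaginary parts give the pinwheels and disjoint islands.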
The Mandelbrot Set is that strange blobby sea surrounded by lightning bolts that you see on the cover of every pop mathematics book from the 80s and 90s. If a C gives us a Julia Set that’s one single, contiguous patch? Then that C is in the Mandelbrot Set. Also vice-versa.

The ideas behind these sets are old. Julia’s paper about the iterations of rational functions first appeared in 1918. Julia died in 1978, the same year that the first computer rendering of the Mandelbrot set was done. I haven’t been able to find whether that rendering existed before his death. Nor have I decided which I would think the better sequence.

Thanks for reading. All of Fall 2019 A To Z posts should be at this link. And next week I hope to get to the letters ‘K’ and ‘L’. Sunday, yes, I hope to get back to the comics.

## My 2019 Mathematics A To Z: Hamiltonian

Today’s A To Z term is another I drew from Mr Wu, of the Singapore Math Tuition blog. It gives me more chances to discuss differential equations and mathematical physics, too.

The Hamiltonian we name for Sir William Rowan Hamilton, the 19th century Irish mathematical physicist who worked on everything. You might have encountered his name from hearing about quaternions. Or for coining the terms “scalar” and “tensor”. Or for work in graph theory. There’s more. He did work in Fourier analysis, which is what you get into when you feel at ease with Fourier series. And then wild stuff combining matrices and rings. He’s not quite one of those people where there’s a Hamilton’s Theorem for every field of mathematics you might be interested in. It’s close, though.

# Hamiltonian.

When you first learn about physics you learn about forces and accelerations and stuff. When you major in physics you learn to avoid dealing with forces and accelerations and stuff. It’s not explicit. But you get trained to look, so far as possible, away from vectors. Look to scalars. Look to single numbers that somehow encode your problem.

A great example of this is the Lagrangian. It’s built on “generalized coordinates”, which are not necessarily, like, position and velocity and all. They include the things that describe your system. This can be positions. It’s often angles. The Lagrangian shines in problems where it matters that something rotates. Or if you need to work with polar coordinates or spherical coordinates or anything non-rectangular. The Lagrangian is, in your general coordinates, equal to the kinetic energy minus the potential energy. It’ll be a function. It’ll depend on your coordinates and on the derivative-with-respect-to-time of your coordinates. You can take partial derivatives of the Lagrangian. This tells how the coordinates, and the change-in-time of your coordinates, should change over time.

The Hamiltonian is a similar way of working out mechanics problems. The Hamiltonian function isn’t anything so primitive as the kinetic energy minus the potential energy. No, the Hamiltonian is the kinetic energy plus the potential energy. Totally different in idea.

From that description you maybe guessed you can transfer from the Lagrangian to the Hamiltonian. Maybe vice-versa. Yes, you can, although we use the term “transform”. Specifically a “Legendre transform”. We can use any coordinates we like, just as with Lagrangian mechanics. And, as with the Lagrangian, we can find how coordinates change over time. The change of any coordinate depends on the partial derivative of the Hamiltonian with respect to a particular other coordinate. This other coordinate is its “conjugate”.
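Written out the usual way (the notation here is my addition; the essay keeps to words), for a coordinate $q$ and its conjugate momentum $p$, that rule is the familiar pair of Hamilton’s equations:

$\frac{dq}{dt} = \frac{\partial H}{\partial p} \qquad \frac{dp}{dt} = -\frac{\partial H}{\partial q}$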
(It may either be this derivative, or minus one times this derivative. By the time you’re doing work in the field you’ll know which.)

That conjugate coordinate is the important thing. It’s why we muck around with Hamiltonians when Lagrangians are so similar. In ordinary, common coordinate systems these conjugate coordinates form nice pairs. In Cartesian coordinates, the conjugate to a particle’s position is its momentum, and vice-versa. In polar coordinates, the conjugate to the angle is the angular momentum. These are nice-sounding pairs. But that’s our good luck. These happen to match stuff we already think is important. In general coordinates one or more of a pair can be some fusion of variables we don’t have a word for and would never care about. Sometimes it gets weird. In the problem of vortices swirling around each other on an infinitely great plane? The horizontal position is conjugate to the vertical position. Velocity doesn’t enter into it. For vortices on the sphere the longitude is conjugate to the cosine of the latitude.

What’s valuable about these pairings is that they make a “symplectic manifold”. A manifold is a patch of space where stuff works like normal Euclidean geometry does. In this case, the space is “phase space”. This is the collection of all the possible combinations of all the variables that could ever turn up. Every particular moment of a mechanical system matches some point in phase space. Its evolution over time traces out a path in that space. Call it a trajectory or an orbit as you like.

We get good things from looking at the geometry that this symplectic manifold implies. For example, if we know that one variable doesn’t appear in the Hamiltonian, then its conjugate’s value never changes. This is almost the kindest thing you can do for a mathematical physicist. But more. A famous theorem by Emmy Noether tells us that symmetries in the Hamiltonian match with conservation laws in the physics. Time-invariance, for example — time not appearing in the Hamiltonian — gives us the conservation of energy. If only distances between things, not absolute positions, matter, then we get conservation of linear momentum. Stuff like that. To find conservation laws in physics problems is the kindest thing you can do for a mathematical physicist.

The Hamiltonian was born out of planetary physics. These are problems easy to understand and, apart from the case of one star with one planet orbiting each other, impossible to solve exactly. That’s all right. The formalism applies to all kinds of problems. It’s very good at handling particles that interact with each other and maybe some potential energy. This is a lot of stuff. More, the approach extends naturally to quantum mechanics. It takes some damage along the way. We can’t talk about “the” position or “the” momentum of anything quantum-mechanical. But what we get when we look at quantum mechanics looks very much like what Hamiltonians do. We can calculate things which are quantum quite well by using these tools. This though they came from questions like why Saturn’s rings haven’t fallen apart and whether the Earth will stay around its present orbit.

It holds surprising power, too. Notice that the Hamiltonian is the kinetic energy of a system plus its potential energy. For a lot of physics problems that’s all the energy there is. That is, the value of the Hamiltonian for some set of coordinates is the total energy of the system at that time. And, if there’s no energy lost to friction or heat or whatever?
Then that’s the total energy of the system for all time.

Here’s where this becomes almost practical. We often want to do a numerical simulation of a physics problem. Generically, we do this by looking up what all the values of all the coordinates are at some starting time t0. Then we calculate how fast these coordinates are changing with time. We pick a small change in time, Δ t. Then we say that at time t0 plus Δ t, the coordinates are whatever they started at plus Δ t times that rate of change. And then we repeat, figuring out how fast the coordinates are changing now, at this position and time.

The trouble is we always make some mistake, and once we’ve made a mistake, we’re going to keep on making mistakes. We can do some clever stuff to make the smallest error possible figuring out where to go, but it’ll still happen. Usually, we stick to calculations where the error won’t mess up our results. But when we look at stuff like whether the Earth will stay around its present orbit? We can’t make each step good enough for that.

Unless we get to thinking about the Hamiltonian, and our symplectic variables. The actual system traces out a path in phase space. Everywhere on that path the Hamiltonian has one particular value, the energy of the system. So use the regular methods to project most of the variables to the new time, t0 + Δ t. But the rest? Pick the values that make the Hamiltonian work out right. Also momentum and angular momentum and other stuff we know get conserved. We’ll still make an error. But it’s a different kind of error. It’ll project to a point that’s maybe in the wrong place on the trajectory. But it’s on the trajectory.

(OK, it’s near the trajectory. Suppose the real energy is, oh, the square root of 5. The computer simulation will have an energy of 2.23607. This is close but not exactly the same. That’s all right. Each step will stay close to the real energy.)

So what we’ll get is a projection of the Earth’s orbit that maybe puts it in the wrong place in its orbit. Putting the planet on the opposite side of the sun from Venus when we ought to see Venus transiting the Sun. That’s all right, if what we’re interested in is whether Venus and Earth are still in the solar system.

There’s a special cost for this. If there weren’t we’d use it all the time. The cost is computational complexity. It’s pricey enough that you haven’t heard about these “symplectic integrators” before. That’s all right. These are the kinds of things open to us once we look long into the Hamiltonian.

This wraps up my big essay-writing for the week. I will pluck some older essays out of obscurity to re-share tomorrow and Saturday. All of Fall 2019 A To Z posts should be at this link. Next week should have the letter I on Tuesday and J on Thursday. All of my A To Z essays should be available at this link. And I am still interested in topics I might use for the letters K through N. Thank you.

## My 2019 Mathematics A To Z: Buffon’s Needle

Today’s A To Z term was suggested by Peter Mander. Mander authors CarnotCycle, which when I first joined WordPress was one of the few blogs discussing thermodynamics in any detail. When I last checked it still was, which is a shame. Thermodynamics is a fascinating field. It’s as deeply weird and counter-intuitive and important as quantum mechanics. Yet its principles are as familiar as a mug of warm tea on a chilly day. Mander writes at a more technical level than I usually do.
But if you’re comfortable with calculus, or if you’re comfortable nodding at a line and agreeing that he wouldn’t fib to you about a thing like calculus, it’s worth reading. # Buffon’s Needle. I’ve written of my fondness for boredom. A bored mind is not one lacking stimulation. It is one stimulated by anything, however petty. And in petty things we can find great surprises. I do not know what caused Georges-Louis Leclerc, Comte de Buffon, to discover the needle problem named for him. It seems like something born of a bored but active mind. Buffon had an active mind: he was one of Europe’s most important naturalists of the 1700s. He also worked in mathematics, and astronomy, and optics. It shows what one can do with an engaged mind and a large inheritance from one’s childless uncle who’s the tax farmer for all Sicily. The problem, though. Imagine dropping a needle on a floor that has equally spaced parallel lines. What is the probability that the needle will land on any of the lines? It could occur to anyone with a wood floor who’s dropped a thing. (There is a similar problem which would occur to anyone with a tile floor.) They have only to be ready to ask the question. Buffon did this in 1733. He had it solved by 1777. We, with several centuries’ insight into probability and calculus, need less than 44 years to solve the question. Let me use L as the length of the needle. And d as the spacing of the parallel lines. If the needle’s length is less than the spacing then this is an easy formula to write, and not too hard to calculate. The probability, P, of the needle crossing some line is: $P = \frac{2}{\pi}\frac{L}{d}$ I won’t derive it rigorously. You don’t need me for that. The interesting question is whether this formula makes sense. That L and d are in it? Yes, that makes sense. The length of the needle and the gap between lines have to be in there. More, the probability has to have the ratio between the two. There’s different ways to argue this. Dimensional analysis convinces me, at least. Probability is a pure number. L is a measurement of length; d is a measurement of length. To get a pure number starting with L and d means one of them has to divide into the other. That L is in the numerator and d the denominator makes sense. A tiny needle has a tiny chance of crossing a line. A large needle has a large chance. That $\frac{L}{d}$ is raised to the first power, rather than the second or third or such … well, that’s fair. A needle twice as long having twice the chance of crossing a line? That sounds more likely than a needle twice as long having four times the chance, or eight times the chance. Does the 2 belong there? Hard to say. 2 seems like a harmless enough number. It appears in many respectable formulas. That π, though … That π … π comes to us from circles. We see it in calculations about circles and spheres all the time. We’re doing a problem with lines and line segments. What business does π have showing up? We can find reasons. One way is to look at a similar problem. Imagine dropping a disc on these lines. What’s the chance the disc falls across some line? That’s the chance that the center of the disc is less than one radius from any of the lines. What if the disc has an equal chance of landing anywhere on the floor? Then it has a probability of $\frac{L}{d}$ of crossing a line. If the radius is smaller than the distance between lines, anyway. If the radius is larger than that, the probability is 1. Now draw a diameter line on this disc. 
What’s the chance that this diameter line crosses this floor line? That depends on a couple things. Whether the center of the disc is near enough a floor line. And what angle the diameter line makes with respect to the floor lines. If the diameter line is parallel to the floor line there’s almost no chance. If the diameter line is perpendicular to the floor line there’s the best possible chance. But that angle might be anything. Let me call that angle θ. The diameter line crosses the floor line if half the diameter times the sine of θ is bigger than the distance from the disc’s center to the nearest floor line. … Oh. Sine. Sine and cosine and all the trigonometry functions we get from studying circles, and how to draw triangles within circles. And this diameter-line problem looks the same as the needle problem. So that’s where π comes from.

I’m being figurative. I don’t think one can make a rigorous declaration that the π in the probability formula “comes from” this sine, any more than you can declare that the square-ness of a shape comes from any one side. But it gives a reason to believe that π belongs in the probability.

If the needle’s longer than the gap between floor lines, if $L > d$, there’s still a probability that the needle crosses at least one line. It never becomes certain. No matter how long the needle is it could fall parallel to all the floor lines and miss them all. The probability is instead:

$P = \frac{2}{\pi}\left(\frac{L}{d} - \sqrt{\left(\frac{L}{d}\right)^2 - 1} + \sec^{-1}\left(\frac{L}{d}\right)\right)$

Here $\sec^{-1}$ is the world-famous arcsecant function. That is, it’s whatever angle has as its secant the number $\frac{L}{d}$. I don’t mean to insult you. I’m being kind to the person reading this first thing in the morning. I’m not going to try justifying this formula. You can play with numbers, though. You’ll see that if $\frac{L}{d}$ is a little bit bigger than 1, the probability is a little more than what you get if $\frac{L}{d}$ is a little smaller than 1. This is reassuring.

The exciting thing is arithmetic, though. Use the probability of a needle crossing a line, for short needles. You can re-write it as this:

$\pi = 2\frac{L}{d}\frac{1}{P}$

L and d you can find by measuring needles and the lines. P you can estimate. Drop a needle many times over. Count how many times you drop it, and how many times it crosses a line. P is roughly the number of crossings divided by the number of needle drops. Doing this gives you a way to estimate π. This gives you something to talk about on Pi Day.

It’s a rubbish way to find π. It’s a lot of work, plus you have to sweep needles off the floor. Well, you can do it in simulation and avoid the risk of stepping on an overlooked needle. But it takes a lot of needle-drops to get good results. To be certain you’ve calculated the first two decimal places correctly requires 3,380,000 needle-drops. Yes, yes. You could get lucky and happen to hit on an estimate of 3.14 for π with fewer needle-drops. But if you were sincerely trying to calculate the digits of π this way? If you did not know what they were? You would need the three and a third million tries to be confident you had the number correct.

So this result is, as a practical matter, useless. It’s a heady concept, though. We think casually of randomness as … randomness. Unpredictability. Sometimes we will speak of the Law of Large Numbers. This is several theorems in probability. They all point to the same result.
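Since the essay mentions doing it in simulation: here is what that looks like, a small sketch of my own rather than anything from the essay. It drops short needles ($L \le d$) at random positions and angles and backs π out of the crossing rate with the rearranged formula above.

```python
import math
import random

def estimate_pi(drops, L=1.0, d=2.0):
    """Drop a needle of length L on lines spaced d apart (L <= d); return the pi estimate."""
    crossings = 0
    for _ in range(drops):
        center = random.uniform(0.0, d / 2.0)       # distance from needle's center to nearest line
        theta = random.uniform(0.0, math.pi / 2.0)  # acute angle between needle and the lines
        if center <= (L / 2.0) * math.sin(theta):   # the needle reaches the line
            crossings += 1
    p = crossings / drops
    return 2.0 * (L / d) / p

for drops in (1_000, 100_000, 3_380_000):
    print(drops, estimate_pi(drops))
```

Run it a few times and the first couple of digits settle down while the later ones wobble, which is the whole point.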
That result: if some event has (say) a probability of one-third of happening, then given 30 million chances, it will happen quite close to 10 million times. This π result is another casting of the Law of Large Numbers, and of the apparent paradox that true unpredictability is itself predictable. There is no way to predict whether any one dropped needle will cross any line. It doesn’t even matter whether any one needle crosses any line. An enormous number of needles, tossed without fear or favor, will fall in ways that embed π. The same π you get from comparing the circumference of a circle to its diameter. The same π you get from looking at the arc-cosine of a negative one. I suppose we could use this also to calculate the value of 2, but that somehow seems to touch lesser majesties.

Thank you again for reading. All of the Fall 2019 A To Z posts should be at this link. This year’s and all past A To Z sequences should be at this link. I’ve made my picks for next week’s topics, and am fooling myself into thinking I have a rough outline for them already. But I’m still open to suggestions for the letters E through H, and appreciate them.

## My 2019 Mathematics A To Z: Abacus

Today’s A To Z term is the Abacus. It was suggested by aajohannas, on Twitter as @aajohannas. Particularly asked for was how to use an abacus.

The abacus has been used by a great many cultures over thousands of years. So it’s hard to say that there is any one right way to use it, any more than there is any one right way to use a hammer. There are many hammers, and many things to hammer. But there are similarities between all hammers, and the ways to use them as hammers are similar. So learning one kind, and one way to use that kind, can be a useful start. I’m going to get into one way to use it to compute.

# Abacus.

I taught at the National University of Singapore in the first half of the 2000s. At the student union was this sheltered overhang formed by a stairwell. Underneath it, partly exposed to the elements (a common building style there) was a convenience store. Up front were the things with high turnover, snacks and pop and daily newspapers, that sort of thing. In the back, beyond the register, in the areas that the rain, the only non-gentle element, couldn’t reach even whipped by wind, were other things. Miscellaneous things. Exam bluebooks faded with age and dust. Good-luck cat statues colonized by spiderwebs. Unlabelled power cables for obsolete electronics. Once when browsing through this I encountered two things that I bought as badges of office.

One was a slide rule, a proper twelve-inch one. I’d had one already, a $2 six-inch-long one I’d gotten as an undergraduate from a convenience store the university had already decided to evict. The NUS one was a slide rule you could do actual work on. Another was a soroban, a compact Japanese abacus, in a patterned cardboard box a half-inch too short to hold it.

I got both. For the novelty, yes. Also, I taught Computational Science. I felt it appropriate to have these iconic human computing devices. But do I use them? Other than for decoration? … No, not really. I have too many calculators to need them. Also I am annoyed that while I can lay my hands on the slide rule I have put the soroban somewhere so logical and safe I can’t find it. A couple photographs would improve this essay. Too bad.

Do I know how to use them? If I find them? The slide rule, sure. If you know that a slide rule works via logarithms, and you play with it a little? You know how to use a slide rule.
At least a little, after a bit of experimentation and playing with the three times table. The abacus, though? How do you use that?

In childhood I heard about abacuses. That there’s a series of parallel rods, each with beads on them. Four placed below a center beam, one placed above. Sometimes two placed above. That the lower beads on a rod represent one each. That the upper bead represents five. That some people can do arithmetic on that faster than others can work an electric calculator.

And that was about all I got, or at least retained. How to do this arithmetic never penetrated my brain. I imagined, well, addition must be easy. Say you wanted to do three plus six, well, move three lower beads up to the center bar. Then slide one lower and one upper bead, for six, to the center bar, and read that off. Right?

The bizarre thing is my naive childhood idea is right. At least in the big picture. Let each rod represent one of the numbers in base-ten style. It’s anachronistic to the abacus’s origins to speak of a ones rod, a tens rod, a hundreds rod, and so on. So what? We’re using this tool today. We can use the ideas of base ten to make our understanding easier.

Pick a row of beads that you want to represent the ones. The row to the left of that represents tens. To the left of that, hundreds. To the right of the ones is the one-tenths, and the one-hundredths, and so on. This goes on as far as your need, and your abacus, allow. Move beads to the center to represent numbers you want. If you want ‘21’, slide two lower beads up in the tens column and one lower bead in the ones column. If you want ‘38’, slide three lower beads up in the tens column and one upper and three lower beads in the ones column. To add two numbers, slide more beads representing those numbers toward the center bar. To subtract, slide beads away. Multiplication and division were beyond my young imagination. I’ll let them wait a bit.

There are complications. The complications are for good reason. When you master them, they make computation swifter. But you pay for that later speed with more time spent learning. This is a trade we make, and keep making, in computational mathematics. We make a process more reliable, more speedy, by making it less obvious.

Some of this isn’t too difficult. Like, work in one direction so far as possible. It’s easy to suppose this is better than jumping around from, say, the thousands digit to the tens to the hundreds to the ones. The advice I’ve read says work from the left to the right, that is, from the highest place to the lowest. Arithmetic as I learned it works from the ones to the tens to the hundreds, but this seems wiser. The most significant digits get calculated first this way. It’s usually more important to know the answer is closer to 2,000 than to 3,000 than to know that the answer ends in an 8 rather than a 6.

Some of this is subtle. This is to cope with practical problems. Suppose you want to add 5 to 6? There aren’t that many beads on any row. A Chinese abacus, which has two beads on the upper part, could cope with this particular problem. It’s going to be in trouble when you want to add 8 to 9, though. That’s not unique to an abacus. Any numerical computing technique can be broken by some problem. This is why it’s never enough to calculate; we still have to think.

For example, thinking will let us handle this five plus six difficulty. Consider this: four plus one is five. So four and one are “complementary numbers”, with respect to five.
Similarly, three and two are five’s complementary numbers. So if we need to add four to a number, that’s equivalent to adding five and subtracting one. If we need to add two, that’s equivalent to adding five and subtracting three. This will get us through some shortages in bead count.

And consider this: four plus six is ten. So four and six are ten-complementary numbers. Similarly, three and seven are ten’s complementary numbers. Two and eight. One and nine. This gets us through much of the rest of the shortage.

So here’s how this works. Suppose we have 35, and wish to add 6 to it. There aren’t the beads to add six to the ones column. So? Instead add ten and subtract the complement of six. That is, add ten and subtract four. In moving across the rows, when you reach the tens, slide one lower bead up, making the abacus represent 45. Then from the ones column subtract four. That is, slide the upper bead away from the center bar, and slide the five’s-complement of four, one lower bead, up to the center. And now the abacus represents 41, just like it ought.

If you’re experienced enough you can reduce some of these operations, sliding the beads above and below the center bar at once. Or sliding a bead in the tens and another in the ones column at once. Don’t fret doing this. Worry about making correct steps. You’ll speed up with practice. I remember advice from a typesetting manual I collected once: “strive for consistent, regular keystrokes. Speed comes with practice. Errors are time-consuming to correct”. This is, mutatis mutandis, always good advice.

Subtraction works like addition. This should surprise few. If you have the beads in place, just remove them: four minus two takes no particular insight. If there aren’t enough beads? Fall back on complements. If you wish to do 35 minus 6? Set up 35, and calculate 35 minus 10 plus 4. When you get to the tens rod, slide one bead down; this leaves you with 25. Then in the ones column, slide four beads up. This leaves you with 29. I’m so glad these seem to be working out.

Doing longer additions and subtractions takes more rows, but not actually more work. 35.2 plus 6.4 is the same work as 35 plus 6 and 2 plus 4, each of which you, in principle, know how to do. 35.2 minus 6.4 is a bit more fuss. When you get to the 2 minus 4 bit you have to do that addition-of-complements stuff. But that’s not any new work. Where the decimal point goes is something you have to keep track of. As with the slide rule, the magnitude of these numbers is notional. Your fingers move the same way to add 352 and 64 as they will 0.352 and 0.064. That’s convenient.

Multiplication gets more tedious. It demands paying attention to where the decimal point is. Just like the slide rule demands, come to think of it. You’ll need columns on the abacus for both the multiplicands and the product. And you’ll do a lot of adding up. But at heart? 2038 times 3.7 amounts to doing eight multiplications. 8 times 7, 3 times 7, 0 times 7 (OK, that one’s easy), 2 times 7, 8 times 3, 3 times 3, 0 times 3 (again, easy), and 2 times 3. And then add up these results in the correct columns. This may be tedious, but it’s not hard. It does mean the abacus doesn’t spare you having to know some times tables. I mean, you could use the abacus to work out 8 times 7 by adding seven to itself over and over, but that’s time-consuming. You can save time, and calculation steps, by memorization. By knowing some answers ahead of time.
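That digit-at-a-time view of multiplication is easy to mimic in code. Here’s my own sketch of the bookkeeping, not a soroban tutorial: take every pair of digits, multiply them, and add each little product into the column set by where the two digits sit, the way the rods hold the partial results.

```python
def digit_multiply(a_digits, b_digits):
    """Multiply two numbers given as digit lists (most significant first),
    one single-digit product at a time, each dropped into its column."""
    result = [0] * (len(a_digits) + len(b_digits))
    for i, a in enumerate(reversed(a_digits)):      # i is the place of digit a (0 = ones)
        for j, b in enumerate(reversed(b_digits)):  # j is the place of digit b
            result[i + j] += a * b                  # the small product lands in column i + j
    # Carry, so every column ends up holding a single digit.
    for k in range(len(result) - 1):
        result[k + 1] += result[k] // 10
        result[k] %= 10
    return int("".join(str(digit) for digit in reversed(result)))

print(digit_multiply([2, 0, 3, 8], [3, 7]))   # 2038 * 37 = 75406
```

The decimal point in 3.7 is the separate bookkeeping the essay describes: do the digits as though it were 37, then place the point yourself afterward.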
Totton Heffelfinger and Gary Flom’s page, from which I’m drawing almost all my practical advice, offers a good notation of lettering the rods of the abacus, A, B, C, D, and so on. To multiply, say, 352 by 64 start by putting the 64 on rods BC. Set the 352 on rods EFG. We’ll get the answer with rod K as the ones column. 2 times 4 is 8; put that on rod K. 5 times 4 is 20; add that to rods IJ. 3 times 4 is 12; add that to rods HI. 2 times 6 is 12; add that to rods IJ. 5 times 6 is 30; add that to rods HI. 3 times 6 is 18; add that to rods GH. All going well this should add up to 22,528, spread out along rods GHIJK. I can see right away at least the 8 is correct. You would do the same physical steps to multiply, oh, 3.52 by 0.0064. You have to take care of the decimal place yourself, though. I see you, in the back there, growing suspicious. I’ll come around to this. Don’t worry. Division is … oh, I have to fess up. Division is not something I feel comfortable with. I can read the instructions and repeat the examples given. I haven’t done it enough to have that flash where I understand the point of things. I recognize what’s happening. It’s the work of division as done by hand. You know, 821 divided by 56 worked out by, well, 56 goes into 82 once with a remainder of 26. Then drop down the 1 to make this 261. 56 goes into 261 … oh, it would be so nice if it went five times, but it doesn’t. It goes in four times, with a remainder of 37. I can walk you through the steps but all I am truly doing is trying to keep up with Totton Heffelfinger and Gary Flom’s instructions here. There are, I read, also schemes to calculate square roots on the abacus. I don’t know that there are cube-root schemes also. I would bet on there being such, though. Never mind, though. The suspicious thing I expect you’ve noticed is the steps being done. They’re represented as sliding beads along rods, yes. But the meaning of these steps? They’re the same steps you would do by doing arithmetic on paper. Sliding two beads and then two more beads up to the center bar isn’t any different from looking at 2 + 2 and representing that as 4. All this ten’s-complement stuff to subtract one number from another is just … well, I learned it as subtraction by “borrowing”. I don’t know the present techniques but I’m sure they’re at heart the same. But the work is eerily like what you would do on paper, using Arabic numerals. The slide rule uses a logarithm-based ruler. This makes the addition of distances along the slides match the multiplication of the values of the rulers. What does the abacus do to help us compute? Why use an abacus? What an abacus gives us is memory. It stores numbers. It lets us break a big problem into a series of small problems. It lets us keep a partial computation while we work through those steps. We don’t add 35.2 to 6.4. We add 3 to 0 and 5 to 6 and 2 to 4. We don’t multiply 2038 by 3.7. We multiply 8 by 7, and 8 by 3, and 3 by 7, and 3 by 3, and so on. And this is most of numerical computing, even today. We describe what we want to do as these high-level operations. But the computation is a lot of calculations, each one of them simple. We use some memory to hold partially completed results. Memory, the ability to store results, lets us change hard problems into long strings of simple ones. We do more things the way the abacus encourages. We even use those complementary numbers. Not five’s or ten’s complements, not with binary arithmetic computers. 
Two’s complement arithmetic makes it possible to subtract, or write negative numbers, in ways that are easy to calculate. That there are a set number of rods even has its parallel in modern computing. When representing a real number on the computer we have only so many decimal places. (Yes, yes, binary digit places.) At least unless we use a weird data structure. This affects our calculations. There are numbers we can’t represent perfectly, such as one-third. We need to think about whether this affects what we are using our calculation for. There are major differences between a digital computer and a person using the abacus. But the processes are similar. This may help us to understand why computational science works the way it does. It may at least help us understand those contests in the 1950s where the abacus user was faster than the calculator user. But no, I confess, I only use mine for decoration, or will when I find it again. Thank you for reading. All the Fall 2019 A To Z posts should be at this link. Furthermore, both this year’s and all past A To Z sequences should be at this link. And I am still soliciting subjects for the first third of the alphabet. ## How To Find Logarithms Without Using Powerful Computers I got to remembering an old sequence of mine, and wanted to share it for my current audience. A couple years ago I read a 1949-published book about numerical computing. And it addressed a problem I knew existed but hadn’t put much thought into. That is, how to calculate the logarithm of a number? Logarithms … well, we maybe don’t need them so much now. But they were indispensable for computing for a very long time. They turn the difficult work of multiplication and division into the easier work of addition and subtraction. They turn the really hard work of exponentiation into the easier work of multiplication. So they’re great to have. But how to get them? And, particularly, how to get them if you have a computing device that’s able to do work, but not very much work? Machines That Think About Logarithms sets out the question, including mentioning Edmund Callis Berkeley’s book that got me started on this. And some talk about the kinds of logarithms and why we use each of them. Machines That Do Something About Logarithms sets out some principles. These are all things that are generically true about logarithms, including about calculating logarithms. They’re just the principles that were put into clever play by Harvard’s IBM Automatic Sequence-Controlled Calculator in the 1940s. Machines That Give You Logarithms explains how to use those tools. And lays out how to get the base-ten logarithm for most numbers that you would like with a tiny bit of computing work. I showed off an example of getting the logarithm of 47.2286 using only three divisions, four additions, and a little bit of looking up stuff. Without Machines That Think About Logarithms closes out the cycle. One catch with the algorithm described is that you need to work out some logarithms ahead of time and have them on hand, ready to look up. They’re not ones that you care about particularly for any problem, but they make it easier to find the logarithm you do want. This essay talks about which logarithms to calculate, in order to get the most accurate results for the logarithm you want, using the least custom work possible. And there we go. Logarithms are still indispensable for mathematical work, although I realize not so much because we ever care what the logarithm of 47.2286 or any other arbitrary number is. 
Logarithms have some nice analytic properties, though, and they make other work easier to do. So they’re still in use, but for different problems.

## My 2018 Mathematics A To Z: Witch of Agnesi

Nobody had a suggested topic starting with ‘W’ for me! So I’ll take that as a free choice, and get lightly autobiographical.

# Witch of Agnesi.

I know I encountered the Witch of Agnesi while in middle school. Eighth grade, if I’m not mistaken. It was a footnote in a textbook. I don’t remember much of the textbook. What I mostly remember of the course was how much I did not fit with the teacher. The only relief from boredom that year was the month we had a substitute and the occasional interesting footnote. It was in a chapter about graphing equations. That is, finding curves whose points have coordinates that satisfy some equation. In a bit of relief from lines and parabolas the footnote offered this: $y = \frac{8a^3}{x^2 + 4a^2}$ In a weird tantalizing moment the footnote didn’t offer a picture. Or say what an ‘a’ was doing in there. In retrospect I recognize ‘a’ as a parameter, and that different values of it give different but related shapes. No hint what the ‘8’ or the ‘4’ were doing there. Nor why ‘a’ gets raised to the third power in the numerator or the second in the denominator. I did my best with the tools I had at the time. Picked a nice easy boring ‘a’. Picked out values of ‘x’ and found the corresponding ‘y’ which made the equation true, and tried connecting the dots. The result didn’t look anything like a witch. Nor a witch’s hat. The footnote was one of a handful of biographical notes in the book. These were a little attempt to add some historical context to mathematics. It wasn’t much. But it was an attempt to show that mathematics came from people. Including, here, from Maria Gaëtana Agnesi. She was, I’m certain, the only woman mentioned in the textbook I’ve otherwise completely forgotten. We have few names of ancient mathematicians. Those we have are often compilers like Euclid whose fame obliterated the people whose work they explained. Or they’re like Pythagoras, credited with discoveries by people who obliterated their own identities. In later times we have the mathematics done by, mostly, people whose social positions gave them time to write mathematics results. So we see centuries where every mathematician is doing it as their side hustle to being a priest or lawyer or physician or combination of these. Women don’t get the chance to stand out here. Today of course we can name many women who did, and do, mathematics. We can name Emmy Noether, Ada Lovelace, and Marie-Sophie Germain. Challenged to do a bit more, we can offer Florence Nightingale and Sofia Kovalevskaya. Well, and also Grace Hopper and Margaret Hamilton if we decide computer scientists count. Katherine Johnson looks likely to make that cut. But in any case none of these people are known for work understandable in a pre-algebra textbook. This must be why Agnesi earned a place in this book. She’s among the earliest women we can specifically credit with doing noteworthy mathematics. (Also physics, but that’s off point for me.) Her curve might be a little advanced for that textbook’s intended audience. But it’s not far off, and pondering questions like “why $8a^3$? Why not $a^3$?” is more pleasant, to a certain personality, than pondering what a directrix might be and why we might use one. The equation might be a lousy way to visualize the curve described.
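Still, if you want to repeat that connect-the-dots experiment with less tedium, a few lines of code will tabulate points from the equation. This is my own sketch; the choice of a = 1 and of the x values is arbitrary.

```python
# Tabulate points of y = 8a^3 / (x^2 + 4a^2), ready for a hand-drawn plot.
a = 1.0
for x in range(-4, 5):
    y = 8 * a**3 / (x**2 + 4 * a**2)
    print(f"x = {x:+d}   y = {y:.3f}")
```

Connect those dots and you get a single smooth hill, peaking at height 2a over x = 0.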
The curve is one of that group of interesting shapes you get by constructions. That is, following some novel process. Constructions are fun. They’re almost a craft project. For this we start with a circle. And two parallel tangent lines. Without loss of generality, suppose they’re horizontal, so, there’s lines at the top and the bottom of the curve. Take one of the two tangent points. Again without loss of generality, let’s say the bottom one. Draw a line from that point over to the other line. Anywhere on the other line. There’s a point where the line you drew intersects the circle. There’s another point where it intersects the other parallel line. We’ll find a new point by combining pieces of these two points. The point is on the same horizontal as wherever your line intersects the circle. It’s on the same vertical as wherever your line intersects the other parallel line. This point is on the Witch of Agnesi curve. Now draw another line. Again, starting from the lower tangent point and going up to the other parallel line. Again it intersects the circle somewhere. This gives another point on the Witch of Agnesi curve. Draw another line. Another intersection with the circle, another intersection with the opposite parallel line. Another point on the Witch of Agnesi curve. And so on. Keep doing this. When you’ve drawn all the lines that reach from the tangent point to the other line, you’ll have generated the full Witch of Agnesi curve. This takes more work than writing out $y = \frac{8a^3}{x^2 + 4a^2}$, yes. But it’s more fun. It makes for neat animations. And I think it prepares us to expect the shape of the curve. It’s a neat curve. Between it and the lower parallel line is an area four times that of the circle that generated it. The shape is one we would get from looking at the derivative of the arctangent. So there’s some reasons someone working in calculus might find it interesting. And people did. Pierre de Fermat studied it, and found this area. Isaac Newton and Luigi Guido Grandi studied the shape, using this circle-and-parallel-lines construction. Maria Agnesi’s name attached to it after she published a calculus textbook which examined this curve. She showed, according to people who present themselves as having read her book, the curve and how to find it. And she showed its equation and found the vertex and asymptote line and the inflection points. The inflection points, here, are where the curve changes from being cupped upward to cupping downward, or vice-versa. It’s a neat function. It’s got some uses. It’s a natural smooth-hill shape, for example. So this makes a good generic landscape feature if you’re modeling the flow over a surface. I read that solitary waves can have this curve’s shape, too. And the curve turns up as a probability distribution. Take a fixed point. Pick lines at random that pass through this point. See where those lines reach a separate, straight line. Some regions are more likely to be intersected than are others. Chart how often any particular point is the intersection point. That chart will (given some assumptions I ask you to pretend you agree with) be a Witch of Agnesi curve. This might not surprise you. It seems inevitable from the circle-and-intersecting-line construction process. And that’s nice enough. As a distribution it looks like the usual Gaussian bell curve. It’s different, though. And it’s different in strange ways. Like, for a probability distribution we can find an expected value. That’s … well, what it sounds like.
But this is the strange probability distribution for which the law of large numbers does not work. Imagine an experiment that produces real numbers, with the frequency of each number given by this distribution. Run the experiment zillions of times. What’s the mean value of all the zillions of generated numbers? And it … doesn’t … have one. I mean, we know it ought to, it should be the center of that hill. But the calculations for that don’t work right. Taking a bigger sample makes the sample mean jump around more, not less, the way every other distribution should work. It’s a weird idea. Imagine carving a block of wood in the shape of this curve, with a horizontal lower bound and the Witch of Agnesi curve as the upper bound. Where would it balance? … The normal mathematical tools don’t say, even though the shape has an obvious line of symmetry. And a finite area. You don’t get this kind of weirdness with parabolas. (Yes, you’ll get a balancing point if you actually carve a real one. This is because you work with finitely-long blocks of wood. Imagine you had a block of wood infinite in length. Then you would see some strange behavior.) It teaches us more strange things, though. Consider interpolations, that is, taking a couple data points and fitting a curve to them. We usually start out looking for polynomials when we interpolate data points. This is because everything is polynomials. Toss in more data points. We need a higher-order polynomial, but we can usually fit all the given points. But sometimes polynomials won’t work. A problem called Runge’s Phenomenon can happen, where the more data points you have the worse your polynomial interpolation is. The Witch of Agnesi curve is one of those. Carl Runge used points on this curve, and trying to fit polynomials to those points, to discover the problem. More data and higher-order polynomials make for worse interpolations. You get curves that look less and less like the original Witch. Runge is himself famous to mathematicians, known for “Runge-Kutta”. That’s a family of techniques to solve differential equations numerically. I don’t know whether Runge came to the weirdness of the Witch of Agnesi curve from considering how errors build in numerical integration. I can imagine it, though. The topics feel related to me. I understand how none of this could fit that textbook’s slender footnote. I’m not sure any of the really good parts of the Witch of Agnesi could even fit thematically in that textbook. At least beyond the fact of its interesting name, which any good blog about the curve will explain. That there was no picture, and that the equation was beyond what the textbook had been describing, made it a challenge. Maybe not seeing what the shape was teased the mathematician out of this bored student. And next is ‘X’. Will I take Mr Wu’s suggestion and use that to describe something “extreme”? Or will I take another topic or suggestion? We’ll see on Friday, barring unpleasant surprises. Thanks for reading. ## I Don’t Have Any Good Ideas For Finding Cube Roots By Trigonometry So I did a bit of thinking. There’s a prosthaphaeretic rule that lets you calculate square roots using nothing more than trigonometric functions. Is there one that lets you calculate cube roots? And I don’t know. I don’t see where there is one. I may be overlooking an approach, though. Let me outline what I’ve thought out. First is square roots. It’s possible to find the square root of a number between 0 and 1 using arc-cosine and cosine functions. 
This is done by using a trigonometric identity called the double-angle formula. This formula, normally, you use if you know the cosine of a particular angle named θ and want the cosine of double that angle: $\cos\left(2\theta\right) = 2 \cos^2\left(\theta\right) - 1$ If we suppose the number whose square root we want is $\cos^2\left(\theta\right)$ then we can find $\cos\left(\theta\right)$. The calculation on the right-hand side of this is easy; double your number and subtract one. Then to the lookup table; find the angle whose cosine is that number. That angle is two times θ. So divide that angle in two. Cosine of that is, well, $\cos\left(\theta\right)$ and most people would agree that’s a square root of $\cos^2\left(\theta\right)$ without any further work. Why can’t I do the same thing with a triple-angle formula? … Well, here’s my choices among the normal trig functions:

$\cos\left(3\theta\right) = 4 \cos^3\left(\theta\right) - 3\cos\left(\theta\right)$

$\sin\left(3\theta\right) = 3 \sin\left(\theta\right) - 4\sin^3\left(\theta\right)$

$\tan\left(3\theta\right) = \frac{3 \tan\left(\theta\right) - \tan^3\left(\theta\right)}{1 - 3 \tan^2\left(\theta\right)}$

Yes, I see you in the corner, hopping up and down and asking about the cosecant. It’s not any better. Trust me. So you see the problem here. The number whose cube root I want has to be the $\cos^3\left(\theta\right)$. Or the cube of the sine of theta, or the cube of the tangent of theta. Whatever. The trouble is I don’t see a way to calculate cosine (sine, tangent) of 3θ, or 3 times the cosine (etc) of θ. Nor to get some other simple expression out of that. I can get mixtures of the cosine of 3θ plus the cosine of θ, sure. But that doesn’t help me figure out what θ is. Can it be worked out? Oh, sure, yes. There’s absolutely approximation schemes that would let me find a value of θ which makes true, say, $4 \cos^3\left(\theta\right) - 3 \cos\left(\theta\right) = 0.5$ But: is there a way that takes less work than some ordinary method of calculating a cube root? Even if you allow some work to be done by someone else ahead of time, such as by computing a table of trig functions? … If there is, I don’t see it. So there’s another point in favor of logarithms. Finding a cube root using a logarithm table is no harder than finding a square root, or any other root. If you’re using trig tables, you can find a square root, or a fourth root, or an eighth root. Cube roots, if I’m not missing something, are beyond us. So are, I imagine, fifth roots and sixth roots and seventh roots and so on. I could protest that I have never in my life cared what the seventh root of a thing is, but it would sound like a declaration of sour grapes. Too bad. If I have missed something, it’s probably obvious. Please go ahead and tell me what it is.

## How To Calculate A Square Root By A Method You Will Never Actually Use

Sunday’s comics post got me thinking about ways to calculate square roots besides using the square root function on a calculator. I wondered if I could find my own little approach. Maybe something that isn’t iterative. Iterative methods are great in that they tend to forgive numerical errors. All numerical calculations carry errors with them. But they can involve a lot of calculation and, in principle, never finish. You just give up when you think the answer is good enough. A non-iterative method carries the promise that things will, someday, end. And I found one! It’s a neat little way to find the square root of a number between 0 and 1.
Call the number ‘S’, as in square. I’ll give you the square root from it. Here’s how. First, take S. Multiply S by two. Then subtract 1 from this. Next. Find the angle — I shall call it 2A — whose cosine is this number 2S – 1. You have 2A? Great. Divide that in two, so that you get the angle A. Now take the cosine of A. This will be the (positive) square root of S. (You can find the negative square root by taking minus this.) Let me show it in action. Let’s say you want the square root of 0.25. So let S = 0.25. And then 2S – 1 is two times 0.25 (which is 0.50) minus 1. That’s -0.50. What angle has cosine of -0.50? Well, that’s an angle of 2 π / 3 radians. Mathematicians think in radians. People think in degrees. And you can do that too. This is 120 degrees. Divide this by two. That’s an angle of π / 3 radians, or 60 degrees. The cosine of π / 3 is 0.5. And, indeed, 0.5 is the square root of 0.25. I hear you protesting already: what if we want the square root of something larger than 1? Like, how is this any good in finding the square root of 81? Well, if we add a little step before and after this work, we’re in good shape. Here’s what. So we start with some number larger than 1. Say, 81. Fine. Divide it by 100. If it’s still larger than 1, divide it again, and again, until you get a number smaller than 1. Keep track of how many times you did this. In this case, 81 just has to be divided by 100 the one time. That gives us 0.81, a number which is smaller than 1. Twice 0.81 minus 1 is equal to 0.62. The angle which has 0.62 as cosine is roughly 0.90205 radians. Half this angle is about 0.45103. And the cosine of 0.45103 is 0.9. This is looking good, but obviously 0.9 is no square root of 81. Ah, but? We divided 81 by 100 to get it smaller than 1. So we balance that by multiplying 0.9 by 10 to get it back larger than 1. If we had divided by 100 twice to start with, we’d multiply by 10 twice to finish. If we had divided by 100 six times to start with, we’d multiply by 10 six times to finish. Yes, 10 is the square root of 100. You see what’s going on here. (And if you want the square root of a tiny number, something smaller than 0.01, it’s not a bad idea to multiply it by 100, maybe several times over. Then calculate the square root, and divide the result by 10 a matching number of times. It’s hard to calculate with very big or with very small numbers. If you must calculate, do it on very medium numbers. This is one of those little things you learn in numerical mathematics.) So maybe now you’re convinced this works. You may not be convinced of why this works. What I’m using here is a trigonometric identity, one of the angle-doubling formulas. Its heart is this identity. It’s familiar to students whose Intro to Trigonometry class is making them finally, irrecoverably hate mathematics: $\cos\left(2\theta\right) = 2 \cos^2\left(\theta\right) - 1$ Here, I let ‘S’ be the squared number, $\cos^2\left(\theta\right)$. So then anything I do to find $\cos\left(\theta\right)$ gets me the square root. The algebra here is straightforward. Since ‘S’ is that cosine-squared thing, all I have to do is double it, subtract one, and then find what angle 2θ has that number as cosine. Then the cosine of θ has to be the square root. Oh, yeah, all right. There’s an extra little objection. In what world is it easier to take an arc-cosine (to figure out what 2θ is) and then later to take a cosine?
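Here is the whole recipe as a quick sketch. I’m using a math library where the text assumes a printed table of cosines, which admittedly dodges part of the point; treat it as an illustration rather than a serious way to take square roots.

```python
import math

def sqrt_by_cosines(s):
    # Scale into the range (0, 1] by dividing by 100, keeping count of how often.
    scalings = 0
    while s > 1:
        s /= 100
        scalings += 1
    two_a = math.acos(2 * s - 1)   # the angle called 2A above (a table lookup, in spirit)
    root = math.cos(two_a / 2)     # the cosine of A is the square root of the scaled number
    return root * 10 ** scalings   # undo the scaling: one factor of 10 per division by 100

print(sqrt_by_cosines(0.25))   # 0.5, give or take floating-point fuzz
print(sqrt_by_cosines(81))     # 9.0, likewise
```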
… And the answer is, well, any world where you’ve already got a table printed out of cosines of angles and don’t have a calculator on hand. This would be a common condition through to about 1975. And not all that ridiculous through to about 1990. This is an example of a prosthaphaeretic rule. These are calculation tools. They’re used to convert multiplication or division problems into addition and subtraction. The idea is exactly like that of logarithms and exponents. Using trig functions predates logarithms. People knew about sines and cosines long before they knew about logarithms and exponentials. But the impulse is the same. And you might, if you squint, see in my little method here an echo of what you’d do more easily with a logarithm table. If you had a log table, you’d calculate $\exp\left(\frac{1}{2}\log\left(S\right)\right)$ instead. But if you don’t have a log table, and only have a table of cosines, you can calculate $\cos\left(\frac{1}{2}\arccos\left(2 S - 1 \right)\right)$ at least. Is this easier than normal methods of finding square roots? … If you have a table of cosines, yes. Definitely. You have to scale the number into range (divide by 100 some) do an easy multiplication (S times 2), an easy subtraction (minus 1), a table lookup (arccosine), an easy division (divide by 2), another table lookup (cosine), and scale the number up again (multiply by 10 some). That’s all. Seven steps, and two of them are reading. Two of the rest are multiplying or dividing by 10’s. Using logarithm tables has it beat, yes, at five steps (two that are scaling, two that are reading, one that’s dividing by 2). But if you can’t find your table of logarithms, and do have a table of cosines, you’re set. This may not be practical, since who has a table of cosines anymore? Who hasn’t also got a calculator that does square roots faster? But it delighted me to work this scheme out. Give me a while and maybe I’ll think about cube roots. ## Reading the Comics, October 4, 2016: Split Week Edition Part 1 The last week in mathematically themed comics was a pleasant one. By “a pleasant one” I mean Comic Strip Master Command sent enough comics out that I feel comfortable splitting them across two essays. Look for the other half of the past week’s strips in a couple days at a very similar URL. Mac King and Bill King’s Magic in a Minute feature for the 2nd shows off a bit of number-pattern wonder. Set numbers in order on a four-by-four grid and select four as directed and add them up. You get the same number every time. It’s a cute trick. I would not be surprised if there’s some good group theory questions underlying this, like about what different ways one could arrange the numbers 1 through 16. Or for what other size grids the pattern will work for: 2 by 2? (Obviously.) 3 by 3? 5 by 5? 6 by 6? I’m not saying I actually have been having fun doing this. I just sense there’s fun to be had there. Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 2nd is based on one of those weirdnesses of the way computers add. I remember in the 90s being on a Java mailing list. Routinely it would draw questions from people worried that something was very wrong, as adding 0.01 to a running total repeatedly wouldn’t get to exactly 1.00. Java was working correctly, in that it was doing what the specifications said. It’s just the specifications didn’t work quite like new programmers expected. What’s going on here is the same problem you get if you write down 1/3 as 0.333. 
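That mailing-list surprise is easy to reproduce in any language built on binary floating point. Here is a quick sketch in Python rather than Java; the exact digits printed may vary, but the total will not be exactly 1.

```python
total = 0.0
for _ in range(100):
    total += 0.01       # add a penny, one hundred times, in binary floating point
print(total)            # something like 1.0000000000000007: close to, but not exactly, 1.0
print(total == 1.0)     # False
```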
Back to that 1/3-as-0.333 comparison: you know that 1/3 plus 1/3 plus 1/3 ought to be 1 exactly. But 0.333 plus 0.333 plus 0.333 is 0.999. 1/3 is really a little bit more than 0.333, but we skip that part because it’s convenient to use only a few points past the decimal. Computers normally represent real-valued numbers with a scheme called floating point representation. At heart, that’s representing numbers with a couple of digits. Enough that we don’t normally see the difference between the number we want and the number the computer represents. Every number base has some rational numbers it can’t represent exactly using finitely many digits. Our normal base ten, for example, has “one-third” and “two-thirds”. Floating point arithmetic is built on base two, and that has some problems with tenths and hundredths and thousandths. That’s embarrassing but in the main harmless. Programmers learn about these problems and how to handle them. And if they ask the mathematicians we tell them how to write code so as to keep these floating-point errors from growing uncontrollably. If they ask nice. Random Acts of Nancy for the 3rd is a panel from Ernie Bushmiller’s Nancy. That panel’s from the 23rd of November, 1946. And it just uses mathematics in passing, arithmetic serving the role of most of Nancy’s homework. There’s a bit of spelling (I suppose) in there too, which probably just represents what’s going to read most cleanly. Random Acts is curated by Ernie Bushmiller fans Guy Gilchrist (who draws the current Nancy) and John Lotshaw. Thom Bluemel’s Birdbrains for the 4th depicts the discovery of a new highest number. When humans discovered ‘1’ is, I would imagine, probably unknowable. Given the number sense that animals have it’s probably something that predates humans, that it’s something we’re evolved to recognize and understand. A single stroke for 1 seems to be a common symbol for the number. I’ve read histories claiming that a culture’s symbol for ‘1’ is often what they use for any kind of tally mark. Obviously nothing in human cultures is truly universal. But when I look at number symbols other than the Arabic and Roman schemes I’m used to, it is usually the symbol for ‘1’ that feels familiar. Then I get to the Thai numeral and shrug at my helplessness. Bill Amend’s FoxTrot Classics for the 4th is a rerun of the strip from the 11th of October, 2005. And it’s made for mathematics people to clip out and post on the walls. Jason and Marcus are in their traditional nerdly way calling out sequences of numbers. Jason’s is the Fibonacci Sequence, which is as famous as mathematics sequences get. That’s the sequence of numbers in which every number is the sum of the previous two terms. You can start that sequence with 0 and 1, or with 1 and 1, or with 1 and 2. It doesn’t matter. Marcus calls out the Perrin Sequence, which I never heard of before either. It’s like the Fibonacci Sequence. Each term in it is the sum of two other terms. Specifically, each term is the sum of the second-previous and the third-previous terms. And it starts with the numbers 3, 0, and 2. The sequence is named for François Perrin, who described it in 1899, and that’s as much as I know about him. The sequence describes some interesting stuff. Take n points and put them in a ‘cycle graph’, which looks to the untrained eye like a polygon with n corners and n sides. You can pick subsets of those points. An independent set is a subset in which no two of the chosen points are adjacent. A maximal independent set is an independent set that you can’t add another point to without breaking that.
And the number of these maximal independent sets in a cyclic graph is the n-th number in the Perrin sequence. I admit this seems like a nice but not compelling thing to know. But I’m not a cyclic graph kind of person so what do I know? Jeffrey Caulfield and Alexandre Rouillard’s Mustard and Boloney for the 4th is the anthropomorphic numerals joke for this essay and I was starting to worry we wouldn’t get one. ## Theorem Thursday: A First Fixed Point Theorem I’m going to let the Mean Value Theorem slide a while. I feel more like a Fixed Point Theorem today. As with the Mean Value Theorem there’s several of these. Here I’ll start with an easy one. # The Fixed Point Theorem. Back when the world and I were young I would play with electronic calculators. They encouraged play. They made it so easy to enter a number and hit an operation, and then hit that operation again, and again and again. Patterns appeared. Start with, say, ‘2’ and hit the ‘squared’ button, the smaller ‘2’ raised up from the key’s baseline. You got 4. And again: 16. And again: 256. And again and again and you got ever-huger numbers. This happened whenever you started from a number bigger than 1. Start from something smaller than 1, however tiny, and it dwindled down to zero, whatever you tried. Start at ‘1’ and it just stays there. The results were similar if you started with negative numbers. The first squaring put you in positive numbers and everything carried on as before. This sort of thing happened a lot. Keep hitting the mysterious ‘exp’ and the numbers would keep growing forever. Keep hitting ‘sqrt’; if you started above 1, the numbers dwindled to 1. Start below and the numbers rise to 1. Or you started at zero, but who’s boring enough to do that? ‘log’ would start with positive numbers and keep dropping until it turned into a negative number. The next step was the calculator’s protest we were unleashing madness on the world. But you didn’t always get zero, one, infinity, or madness, from repeatedly hitting the calculator button. Sometimes, some functions, you’d get an interesting number. If you picked any old number and hit cosine over and over the digits would eventually settle down to around 0.739085. Or -0.739085. Cosine’s great. Tangent … tangent is weird. Tangent does all sorts of bizarre stuff. But at least cosine is there, giving us this interesting number. (Something you might wonder: this is the cosine of an angle measured in radians, which is how mathematicians naturally think of angles. Normal people measure angles in degrees, and that will have a different fixed point. We write both the cosine-in-radians and the cosine-in-degrees using the shorthand ‘cos’. We get away with this because people who are confused by this are too embarrassed to call us out on it. If we’re thoughtful we write, say, ‘cos x’ for radians and ‘cos x°’ for degrees. This makes the difference obvious. It doesn’t really, but at least we gave some hint to the reader.) This all is an example of a fixed point theorem. Fixed point theorems turn up in a lot of fields. They were most impressed upon me in dynamical systems, studying how a complex system changes in time. A fixed point, for these problems, is an equilibrium. It’s where things aren’t changed by a process. You can see where that’s interesting. In this series I haven’t stated theorems exactly much, and I haven’t given them real proofs. But this is an easy one to state and to prove. 
Start off with a function, which I’ll name ‘f’, because yes that is exactly how much effort goes into naming functions. It has as a domain the interval [a, b] for some real numbers ‘a’ and ‘b’. And it has as range the same interval, [a, b]. It might use the whole range; it might use only a subset of it. And we have to require that f is continuous. Then there has to be at least one fixed point. There must be at least one number ‘c’, somewhere in the interval [a, b], for which f(c) equals c. There may be more than one; we don’t say anything about how many there are. And it can happen that c is equal to a. Or that c equals b. We don’t know that it is or that it isn’t. We just know there’s at least one ‘c’ that makes f(c) equal c. You get that in my various examples. If the function f has the rule that any given x is matched to $x^2$, then we do get two fixed points: $f(0) = 0^2 = 0$, and, $f(1) = 1^2 = 1$. Or if f has the rule that any given x is matched to the square root of x, then again we have: $f(0) = \sqrt{0} = 0$ and $f(1) = \sqrt{1} = 1$. Same old boring fixed points. The cosine is a little more interesting. For that we have $f(0.739085...) = \cos\left(0.739085...\right) = 0.739085...$. How to prove it? The easiest way I know is to summon the Intermediate Value Theorem. Since I wrote a couple hundred words about that a few weeks ago I can assume you to understand it perfectly and have no question about how it makes this problem easy. I don’t even need to go on, do I? … Yeah, fair enough. Well, here’s how to do it. We’ll take the original function f and create, based on it, a new function. We’ll dig deep in the alphabet and name that ‘g’. It has the same domain as f, [a, b]. Its range is … oh, well, something in the real numbers. Don’t care. The wonder comes from the rule we use. The rule for ‘g’ is this: match the given number ‘x’ with the number ‘f(x) – x’. That is, g(a) equals whatever f(a) would be, minus a. g(b) equals whatever f(b) would be, minus b. We’re allowed to define a function in terms of some other function, as long as the symbols are meaningful. But we aren’t doing anything wrong like dividing by zero or taking the logarithm of a negative number or asking for f where it isn’t defined. You might protest that we don’t know what the rule for f is. We’re told there is one, and that it’s a continuous function, but nothing more. So how can I say I’ve defined g in terms of a function I don’t know? In the first place, I already know everything about f that I need to. I know it’s a continuous function defined on the interval [a, b]. I won’t use any more than that about it. And that’s great. A theorem that doesn’t require knowing much about a function is one that applies to more functions. It’s like the difference between being able to say something true of all living things in North America, and being able to say something true of all persons born in Redbank, New Jersey, on the 18th of February, 1944, who are presently between 68 and 70 inches tall and working on their rock operas. Both things may be true, but one of those things you probably use more. In the second place, suppose I gave you a specific rule for f. Let me say, oh, f matches x with the arccosecant of x. Are you feeling any more enlightened now? Didn’t think so. Back to g. Here’s some things we can say for sure about it. g is a function defined on the interval [a, b]. That’s how we set it up. Next point: g is a continuous function on the interval [a, b].
Remember, g is just the function f, which was continuous, minus x, which is also continuous. The difference of two continuous functions is still going to be continuous. (This is obvious, although it may take some considered thinking to realize why it is obvious.) Now some interesting stuff. What is g(a)? Well, it’s whatever number f(a) is minus a. I can’t tell you what number that is. But I can tell you this: it’s not negative. Remember that f(a) has to be some number in the interval [a, b]. That is, it’s got to be no smaller than a. So the smallest f(a) can be is equal to a, in which case f(a) minus a is zero. And f(a) might be larger than a, in which case f(a) minus a is positive. So g(a) is either zero or a positive number. (If you’ve just realized where I’m going and gasped in delight, well done. If you haven’t, don’t worry. You will. You’re just out of practice.) What about g(b)? Since I don’t know what f(b) is, I can’t tell you what specific number it is. But I can tell you it’s not a positive number. The reasoning is just like above: f(b) is some number on the interval [a, b]. So the biggest number f(b) can equal is b. And in that case f(b) minus b is zero. If f(b) is any smaller than b, then f(b) minus b is negative. So g(b) is either zero or a negative number. (Smiling at this? Good job. If you aren’t, again, not to worry. This sort of argument is not the kind of thing you do in Boring Algebra. It takes time and practice to think this way.) And now the Intermediate Value Theorem works. g(a) is a positive number. g(b) is a negative number. g is continuous from a to b. Therefore, there must be some number ‘c’, between a and b, for which g(c) equals zero. And remember what g(c) means: f(c) – c equals 0. Therefore f(c) has to equal c. There has to be a fixed point. And some tidying up. Like I said, g(a) might be positive. It might also be zero. But if g(a) is zero, then f(a) – a = 0. So a would be a fixed point. And similarly if g(b) is zero, then f(b) – b = 0. So then b would be a fixed point. The important thing is there must be at least some fixed point. Now that calculator play starts taking on purposeful shape. Squaring a number could find a fixed point only if you started with a number from -1 to 1. The square of a number outside this range, such as ‘2’, would be bigger than you started with, and the Fixed Point Theorem doesn’t apply. Similarly with exponentials. But square roots? The square root of any number from 0 to a positive number ‘b’ is a number between 0 and ‘b’, at least as long as b was bigger than 1. So there was a fixed point, at 1. The cosine of a real number is some number between -1 and 1, and the cosines of all the numbers between -1 and 1 are themselves between -1 and 1. The Fixed Point Theorem applies. Tangent isn’t a continuous function. And the calculator play never settles on anything. As with the Intermediate Value Theorem, this is an existence proof. It guarantees there is a fixed point. It doesn’t tell us how to find one. Calculator play does, though. Start from any old number that looks promising and work out f for that number. Then take that and put it back into f. And again. And again. This is known as “fixed point iteration”. It won’t give you the exact answer. Not usually, anyway. In some freak cases it will. But what it will give, provided some extra conditions are satisfied, is a sequence of values that get closer and closer to the fixed point. When you’re close enough, then you stop calculating. How do you know you’re close enough? 
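Here is a minimal sketch of that calculator play as code, iterating cosine from an arbitrary starting guess. The starting value, the tolerance, and the step cap are my choices; nothing about them is canonical.

```python
import math

x = 2.0                            # arbitrary starting guess
for step in range(1, 200):
    new_x = math.cos(x)            # hit the cosine button again
    if abs(new_x - x) < 1e-10:     # the digits we care about have stopped changing
        break
    x = new_x
print(step, new_x)                 # a few dozen steps; new_x is near 0.7390851332
```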
If you know something about the original f you can work out some logically rigorous estimates. Or you can do what that sketch does: just keep calculating until all the decimal points you want stop changing between iterations. That’s not logically sound, but it’s easy to program. That won’t always work. It’ll only work if the function f is differentiable on the interval (a, b). That is, it can’t have corners. And there have to be limits on how fast the function changes on the interval (a, b). If the function changes too fast, iteration can’t be guaranteed to work. But often if we’re interested in a function at all then these conditions will be true, or we can think of a related function for which they are true. And even if it works it won’t always work well. It can take an enormous pile of calculations to get near the fixed point. But this is why we have computers, and why we can leave them to work overnight. And yet such a simple idea works. It appears in ancient times, in a formula for finding the square root of an arbitrary positive number ‘N’. (Find the fixed point for $f(x) = \frac{1}{2}\left(\frac{N}{x} + x\right)$). It creeps into problems that don’t look like fixed points. Calculus students learn of something called the Newton-Raphson Iteration. It finds roots, points where a function f(x) equals zero. Mathematics majors learn of numerical methods to solve ordinary differential equations. The most stable of these are again fixed-point iteration schemes, albeit in disguise.

## Theorem Thursday: One Mean Value Theorem Of Many

For this week I have something I want to follow up on. We’ll see if I make it that far.

# The Mean Value Theorem.

My subject line disagrees with the header just above here. I want to talk about the Mean Value Theorem. It’s one of those things that turns up in freshman calculus and then again in Analysis. It’s introduced as “the” Mean Value Theorem. But like many things in calculus it comes in several forms. So I figure to talk about one of them here, and another form in a while, when I’ve had time to make up drawings. Calculus can split effortlessly into two kinds of things. One is differential calculus. This is the study of continuity and smoothness. It studies how a quantity changes if something affecting it changes. It tells us how to optimize things. It tells us how to approximate complicated functions with simpler ones. Usually polynomials. It leads us to differential equations, problems in which the rate at which something changes depends on what value the thing has. The other kind is integral calculus. This is the study of shapes and areas. It studies how infinitely many things, all infinitely small, add together. It tells us what the net change in things is. It tells us how to go from information about every point in a volume to information about the whole volume. They aren’t really separate. Each kind informs the other, and gives us tools to use in studying the other. And they are almost mirrors of one another. Differentials and integrals are not quite inverses, but they come quite close. And as a result most of the important stuff you learn in differential calculus has an echo in integral calculus. The Mean Value Theorem is among them. The Mean Value Theorem is a rule about functions. In this case it’s functions with a domain that’s an interval of the real numbers. I’ll use ‘a’ as the name for the smallest number in the domain and ‘b’ as the largest number. People talking about the Mean Value Theorem often do.
The range is also the real numbers, although it doesn’t matter which ones. I’ll call the function ‘f’ in accord with a long-running tradition of not working too hard to name functions. What does matter is that ‘f’ is continuous on the interval [a, b]. I’ve described what ‘continuous’ means before. It means that here too. And we need one more thing. The function f has to be differentiable on the interval (a, b). You maybe noticed that before I wrote [a, b], and here I just wrote (a, b). There’s a difference here. We need the function to be continuous on the “closed” interval [a, b]. That is, it’s got to be continuous for ‘a’, for ‘b’, and for every point in-between. But we only need the function to be differentiable on the “open” interval (a, b). That is, it’s got to be differentiable for all the points in-between ‘a’ and ‘b’. If it happens to be differentiable for ‘a’, or for ‘b’, or for both, that’s great. But we won’t turn away a function f for not being differentiable at those points. Only the interior. That sort of distinction between stuff true on the interior and stuff true on the boundaries is common. This is why mathematicians have words for “including the boundaries” (“closed”) and “never minding the boundaries” (“open”). As to what “differentiable” is … A function is differentiable at a point if you can take its derivative at that point. I’m sure that clears everything up. There are many ways to describe what differentiability is. One that’s not too bad is to imagine zooming way in on the curve representing a function. If you start with a big old wobbly function it waves all around. But pick a point. Zoom in on that. Does the function stay all wobbly, or does it get more steady, more straight? Keep zooming in. Does it get even straighter still? If you zoomed in over and over again on the curve at some point, would it look almost exactly like a straight line? If it does, then the function is differentiable at that point. It has a derivative there. The derivative’s value is whatever the slope of that line is. The slope is that thing you remember from taking Boring Algebra in high school. That rise-over-run thing. But this derivative is a great thing to know. You could approximate the original function with a straight line, with slope equal to that derivative. Close to that point, you’ll make a small enough error nobody has to worry about it. That there will be this straight line approximation isn’t true for every function. Here’s an example. Picture a line that goes up and then takes a 90-degree turn to go back down again. Look at the corner. However close you zoom in on the corner, there’s going to be a corner. It’s never going to look like a straight line; there’s a 90-degree angle there. It can be a smaller angle if you like, but any sort of corner breaks this differentiability. This is a point where the function isn’t differentiable. There are functions that are nothing but corners. They can be differentiable nowhere, or only at a tiny set of points that can be ignored. (A set of measure zero, as the dialect would put it.) Mathematicians discovered this over the course of the 19th century. They got into some good arguments about how that can even make sense. It can get worse. Also found in the 19th century were functions that are continuous only at a single point. This smashes just about everyone’s intuition. But we can’t find a definition of continuity that’s as useful as the one we use now and avoids that problem.
So we accept that it implies some pathological conclusions and carry on as best we can. Now I get to the Mean Value Theorem in its differential calculus pelage. It starts with the endpoints, ‘a’ and ‘b’, and the values of the function at those points, ‘f(a)’ and ‘f(b)’. And from here it’s easiest to figure what’s going on if you imagine the plot of a generic function f. I recommend drawing one. Just make sure you draw it without lifting the pen from paper, and without including any corners anywhere. Something wiggly. Draw the line that connects the ends of the wiggly graph. Formally, we’re adding the line segment that connects the points with coordinates (a, f(a)) and (b, f(b)). That’s coordinate pairs, not intervals. That’s clear in the minds of the mathematicians who don’t see why not to use parentheses over and over like this. (We are short on good grouping symbols like parentheses and brackets and braces.) Per the Mean Value Theorem, there is at least one point whose derivative is the same as the slope of that line segment. If you were to slide the line up or down, without changing its orientation, you’d find something wonderful. Most of the time this line intersects the curve, crossing from above to below or vice-versa. But there’ll be at least one point where the shifted line is “tangent”, where it just touches the original curve. Close to that touching point, the “tangent point”, the shifted line and the curve blend together and can’t be easily told apart. As long as the function is differentiable on the open interval (a, b), and continuous on the closed interval [a, b], this will be true. You might convince yourself of it by drawing a couple of curves and taking a straightedge to the results. This is an existence theorem. Like the Intermediate Value Theorem, it doesn’t tell us which point, or points, make the thing we’re interested in true. It just promises us that there is some point that does it. So it gets used in other proofs. It lets us mix information about intervals and information about points. It’s tempting to try using it numerically. It looks as if it justifies a common differential-calculus trick. Suppose we want to know the value of the derivative at a point. We could pick a little interval around that point and find the endpoints. And then find the slope of the line segment connecting the endpoints. And won’t that be close enough to the derivative at the point we care about? Well. Um. No, we really can’t be sure about that. We don’t have any idea what interval might make the derivative of the point we care about equal to this line-segment slope. The Mean Value Theorem won’t tell us. It won’t even tell us if there exists an interval that would let that trick work. We can’t invoke the Mean Value Theorem to let us get away with that. Often, though, we can get away with it. Differentiable functions do have to follow some rules. Among them is that if you do pick a small enough interval then approximations that look like this will work all right. If the function flutters around a lot, we need a smaller interval. But a lot of the functions we’re interested in don’t flutter around that much. So we can get away with it. And there’s some grounds to trust in getting away with it. The Mean Value Theorem isn’t any part of the grounds. It just looks so much like it ought to be. I hope on a later Thursday to look at an integral-calculus form of the Mean Value Theorem. ## What’s The Shortest Proof I’ve Done? 
I didn’t figure to have a bookend for last week’s “What’s The Longest Proof I’ve Done?” question. I don’t keep track of these things, after all. And the length of a proof must be a fluid concept. If I show something is a direct consequence of a previous theorem, is the proof’s length the two lines of new material? Or is it all the proof of the previous theorem plus two new lines? I would think the shortest proof I’d done was showing that the logarithm of 1 is zero. This would be starting from the definition of the natural logarithm of a number x as the definite integral of 1/t on the interval from 1 to x. But that requires a bunch of analysis to support the proof. And the Intermediate Value Theorem. Does that stuff count? Why or why not? But this happened to cross my desk: The Shortest-Known Paper Published in a Serious Math Journal: Two Succinct Sentences, an essay by Dan Colman. It reprints a paper by L J Lander and T R Parkin which appeared in the Bulletin of the American Mathematical Society in 1966. It’s about Euler’s Sums of Powers Conjecture. This is a spinoff of Fermat’s Last Theorem. Leonhard Euler observed that you need at least two whole numbers so that their squares add up to a square. And you need three cubes of whole numbers to add up to the cube of a whole number. Euler speculated you needed four whole numbers so that their fourth powers add up to a fourth power, five whole numbers so that their fifth powers add up to a fifth power, and so on. And it’s not so. Lander and Parkin found that this conjecture is false. They did it the new old-fashioned way: they set a computer to test cases. And they found four whole numbers whose fifth powers add up to a fifth power. So the quite short paper answers a long-standing question, and would be hard to beat for accessibility. There is another famous short proof sometimes credited as the most wordless mathematical presentation. Frank Nelson Cole gave it on the 31st of October, 1903. It was about the Mersenne number $2^{67} - 1$, or in human notation, 147,573,952,589,676,412,927. It was already known the number wasn’t prime. (People wondered because numbers of the form $2^n - 1$ often lead us to perfect numbers. And those are interesting.) But nobody knew what its factors were. Cole gave his talk by going up to the board, working out $2^{67} - 1$, and then moving to the other side of the board. There he wrote out 193,707,721 × 761,838,257,287, and showed what that was. Then, per legend, he sat down without ever saying a word, and took in the standing ovation. I don’t want to cast aspersions on a great story like that. But mathematics is full of great stories that aren’t quite so. And I notice that one of Cole’s doctoral students was Eric Temple Bell. Bell gave us a great many tales of mathematics history that are grand and great stories that just weren’t so. So I want it noted that I don’t know where we get this story from, or how it may have changed in the retellings. But Cole’s proof is correct, at least according to Octave. So not every proof is too long to fit in the universe. But then I notice that Mathworld’s page regarding the Euler Sum of Powers Conjecture doesn’t cite the 1966 paper. It cites instead Lander and Parkin’s “A Counterexample to Euler’s Sum of Powers Conjecture” from Mathematics of Computation volume 21, number 97, of 1967. There the paper has grown to three pages, although it’s only a couple paragraphs of one page and three lines of citation on the third.
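Both results are quick to check by machine today. Here is a sketch, in Python rather than the Octave I mentioned. The specific counterexample numbers 27, 84, 110, 133, and 144 are the ones from Lander and Parkin’s paper; they don’t appear in the text above.

```python
# Lander and Parkin's counterexample to Euler's sums-of-powers conjecture:
print(27**5 + 84**5 + 110**5 + 133**5)   # 61917364224
print(144**5)                            # 61917364224

# Cole's 1903 factorization of the Mersenne number 2^67 - 1:
print(2**67 - 1)                         # 147573952589676412927
print(193707721 * 761838257287)          # 147573952589676412927
```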
That 1967 paper isn’t so easy to read either, but it does explain how they set about searching for counterexamples. But it may give you some better idea of how numerical mathematicians find things.

## Theorem Thursday: What Is Cramer’s Rule?

KnotTheorist asked for this one during my appeal for theorems to discuss. And I’m taking an open interpretation of what a “theorem” is. I can do a rule.

# Cramer’s Rule

I first learned of Cramer’s Rule in the way I expect most people do. It was an algebra course. I mean high school algebra. By high school algebra I mean you spend roughly eight hundred years learning ways to solve for x or to plot y versus x. Then take a pause for polar coordinates and matrices. Then you go back to finding both x and y. Cramer’s Rule came up in the context of solving simultaneous equations. You have more than one variable. So x and y. Maybe z. Maybe even a w, before whoever set up the problem gives up and renames everything $x_1$ and $x_2$ and $x_{62}$ and all that. You also have more than one equation. In fact, you have exactly as many equations as you have variables. Are there any sets of values those variables can have which make all those equations true simultaneously? Thus the imaginative name “simultaneous equations” or the search for “simultaneous solutions”. If all the equations are linear then we can always say whether there’s simultaneous solutions. By “linear” we mean what we always mean in mathematics, which is, “something we can handle”. But more exactly it means the equations have x and y and whatever other variables only to the first power. No x-squared or square roots of y or tangents of z or anything. (The equations are also allowed to omit a variable. That is, if you have one equation with x, y, and z, and another with just x and z, and another with just y and z, that’s fine. We pretend the missing variable is there and just multiplied by zero, and proceed as before.) One way to find these solutions is with Cramer’s Rule. Cramer’s Rule sets up some matrices based on the system of equations. If the system has two equations, it sets up three matrices. If the system has three equations, it sets up four matrices. If the system has twelve equations, it sets up thirteen matrices. You see the pattern here. And then you can take the determinant of each of these matrices. Dividing the determinant of one of these matrices by another one tells you what value of x makes all the equations true. Dividing the determinant of another matrix by the determinant of one of these matrices tells you what value of y makes all the equations true. And so on. The Rule tells you which determinants to use. It also says what it means if the determinant you want to divide by equals zero. It means there’s either no set of simultaneous solutions or there’s infinitely many solutions. This gets dropped on us students in the vain effort to convince us knowing how to calculate determinants is worth it. It’s not that determinants aren’t worth knowing. It’s just that they don’t seem to tell us anything we care about. Not until we get into mappings and calculus and differential equations and other mathematics-major stuff. We never see it in high school. And the hard part of determinants is that for all the cool stuff they tell us, they take forever to calculate. The determinant for a matrix with two rows and two columns isn’t bad. Three rows and three columns is getting bad. Four rows and four columns is awful.
The determinant for a matrix with five rows and five columns you only ever calculate if you’ve made your teacher extremely cross with you. So there’s the genius and the first problem with Cramer’s Rule. It takes a lot of calculating. Make any errors along the way with the calculation and your work is wrong. And worse, it won’t be wrong in an obvious way. You can find the error only by going over every single step and hoping to catch the spot where you, somehow, got 36 times -7 minus 21 times -8 wrong. The second problem is nobody in high school algebra mentions why systems of linear equations should be interesting to solve. Oh, maybe they’ll explain how this is the work you do to figure out where two straight lines intersect. But that just shifts the “and we care because … ?” problem back one step. Later on we might come to understand the lines represent cases where something we’re interested in is true, or where it changes from true to false. This sort of simultaneous-solution problem turns up naturally in optimization problems. These are problems where you try to find a maximum subject to some constraints. Or find a minimum. Maximums and minimums are the same thing when you think about them long enough. If all the constraints can be satisfied at once and you get a maximum (or minimum, whatever), great! If they can’t … Well, you can study how close it’s possible to get, and what happens if you loosen one or more constraint. That’s worth knowing about. The third problem with Cramer’s Rule is that, as a method, it kind of sucks. We can be convinced that simultaneous linear equations are worth solving, or at least that we have to solve them to get out of High School Algebra. And we have computers. They can grind away and work out thirteen determinants of twelve-row-by-twelve-column matrices. They might even get an answer back before the end of the term. (The amount of work needed for a determinant grows scary fast as the matrix gets bigger.) But all that work might be meaningless. The trouble is that Cramer’s Rule is numerically unstable. Before I even explain what that is you already sense it’s a bad thing. Think of all the good things in your life you’ve heard described as unstable. Fair enough. But here’s what we mean by numerically unstable. Is 1/3 equal to 0.3333333? No, and we know that. But is it close enough? Sure, most of the time. Suppose we need a third of sixty million. 0.3333333 times 60,000,000 equals 19,999,998. That’s a little off of the correct 20,000,000. But I bet you wouldn’t even notice the difference if nobody pointed it out to you. Even if you did notice it you might write off the difference. “If we must, make up the difference out of petty cash”, you might declare, as if that were quite sensible in the context. And that’s so because this multiplication is numerically stable. Make a small error in either term and you get a proportional error in the result. A small mistake will — well, maybe it won’t stay small, necessarily. But it’ll not grow too fast too quickly. So now you know intuitively what an unstable calculation is. This is one in which a small error doesn’t necessarily stay proportionally small. It might grow huge, arbitrarily huge, and in few calculations. So your answer might be computed just fine, but actually be meaningless. This isn’t because of a flaw in the computer per se. That is, it’s working as designed. It’s just that we might need, effectively, infinitely many digits of precision for the result to be correct.
You see where there may be problems achieving that. Cramer’s Rule isn’t guaranteed to be nonsense, and that’s a relief. But it is vulnerable to this. You can set up problems that look harmless but which the computer can’t do. And that’s surely the worst of all worlds, since we wouldn’t bother calculating them numerically if it weren’t too hard to do by hand. (Let me direct the reader who’s unintimidated by mathematical jargon, and who likes seeing a good Wikipedia Editors quarrel, to the Cramer’s Rule Talk Page. Specifically to the section “Cramer’s Rule is useless.”) I don’t want to get too down on Cramer’s Rule. It’s not like the numerical instability hurts every problem you might use it on. And you can, at the cost of some more work, detect whether a particular set of equations will have instabilities. That requires a lot of calculation but if we have the computer to do the work fine. Let it. And a computer can limit its numerical instabilities if it can do symbolic manipulations. That is, if it can use the idea of “one-third” rather than 0.3333333. The software package Mathematica, for example, does symbolic manipulations very well. You can shed many numerical-instability problems, although you gain the problem of paying for a copy of Mathematica. If you just care about, or just need, one of the variables then what the heck. Cramer’s Rule lets you solve for just one or just some of the variables. That seems like a niche application to me, but it is there. And the Rule re-emerges in pure analysis, where numerical instability doesn’t matter. When we look to differential equations, for example, we often find solutions are combinations of several independent component functions. Bases, in fact. Testing whether we have found independent bases can be done through a thing called the Wronskian. That’s a way that Cramer’s Rule appears in differential equations. Wikipedia also asserts the use of Cramer’s Rule in differential geometry. I believe that’s a true statement, and that it will be reflected in many mechanics problems. In these we can use our knowledge that, say, energy and angular momentum of a system are constant values to tell us something of how positions and velocities depend on each other. But I admit I’m not well-read in differential geometry. That’s something which has indeed caused me pain in my scholarly life. I don’t know whether differential geometers thank Cramer’s Rule for this insight or whether they’re just glad to have got all that out of the way. (See the above Wikipedia Editors quarrel.) I admit for all this talk about Cramer’s Rule I haven’t said what it is. Not in enough detail to pass your high school algebra class. That’s all right. It’s easy to find. MathWorld has the rule in pretty simple form. Mathworld does forget to define what it means by the vector d. (It’s the vector with components d1, d2, et cetera.) But that’s enough technical detail. If you need to calculate something using it, you can probably look closer at the problem and see if you can do it another way instead. Or you’re in high school algebra and just have to slog through it. It’s all right. Eventually you can put x and y aside and do geometry. ## A Leap Day 2016 Mathematics A To Z: Polynomials I have another request for today’s Leap Day Mathematics A To Z term. Gaurish asked for something exciting. This should be less challenging than Dedekind Domains. I hope. ## Polynomials. Polynomials are everything. Everything in mathematics, anyway. If humans study it, it’s a polynomial. 
If we know anything about a mathematical construct, it’s because we ran across it while trying to understand polynomials. I exaggerate. A tiny bit. Maybe by three percent. But polynomials are big. They’re easy to recognize. We can get them in pre-algebra. We make them out of a set of numbers called coefficients and one or more variables. The coefficients are usually either real numbers or complex-valued numbers. The variables we usually allow to be either real or complex-valued numbers. We take each coefficient and multiply it by some power of each variable. And we add all that up. So, polynomials are things that look like these things:

$x^2 - 2x + 1$

$12 x^4 + 2\pi x^2 y^3 - 4x^3 y - \sqrt{6}$

$\ln(2) + \frac{1}{2}\left(x - 2\right) - \frac{1}{2 \cdot 2^2}\left(x - 2\right)^2 + \frac{1}{3 \cdot 2^3}\left(x - 2\right)^3 - \frac{1}{4 \cdot 2^4}\left(x - 2\right)^4 + \cdots$

$a_n x^n + a_{n - 1}x^{n - 1} + a_{n - 2}x^{n - 2} + \cdots + a_2 x^2 + a_1 x^1 + a_0$

The first polynomial maybe looks nice and comfortable. The second may look a little threatening, what with it having two variables and a square root in it, but it’s not too weird. The third is an infinitely long polynomial; you’re supposed to keep going on in that pattern, adding even more terms. The last is a generic representation of a polynomial. Each number a0, a1, a2, et cetera is some coefficient that we in principle know. It’s a good way of representing a polynomial when we want to work with it but don’t want to tie ourselves down to a particular example. The highest power we raise a variable to we call the degree of the polynomial. A second-degree polynomial, for example, has an x2 in it, but not an x3 or x4 or x18 or anything like that. A third-degree polynomial has an x3, but not x to any higher powers. Degree is a useful way of saying roughly how long a polynomial is, so it appears all over discussions of polynomials. But why do we like polynomials? Why like them so much that MathWorld lists 1,163 pages that mention polynomials? It’s because they’re great. They do everything we’d ever want to do and they’re great at it. We can add them together as easily as we add regular old numbers. We can subtract them as well. We can multiply and divide them. There are even prime polynomials, just like there are prime numbers. They take longer to work out, but they’re not harder. And they do great stuff in advanced mathematics too. In calculus we want to take derivatives of functions. Polynomials, we always can. We get another polynomial out of that. So we can keep taking derivatives, as many as we need. (We might need a lot of them.) We can integrate too. The integration produces another polynomial. So we can keep doing that as long as we need to. (We need to do this a lot, too.) This lets us solve so many problems in calculus, which is about how functions work. It also lets us solve so many problems in differential equations, which is about systems whose change depends on the current state of things. That’s great for analyzing polynomials, but what about things that aren’t polynomials? Well, if a function is continuous, then it might as well be a polynomial. To be a little more exact, we can set a margin of error. And we can always find polynomials that are less than that margin of error away from the original function. The original function might be annoying to deal with. The polynomial that’s as close to it as we want, though, isn’t. Not every function is continuous. Most of them aren’t.
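(A quick aside of my own, not from the post: here's what that third, infinitely long polynomial looks like as a few lines of Python. Adding up its first dozen or so terms gets you respectably close to the natural logarithm, which is the point the next paragraphs make.)

```python
import math

def log_poly(x, terms=12):
    # Partial sum of the series above: ln(2) plus (-1)^(k+1) * (x - 2)^k / (k * 2^k).
    total = math.log(2)
    for k in range(1, terms + 1):
        total += (-1) ** (k + 1) * (x - 2) ** k / (k * 2 ** k)
    return total

for x in (0.5, 1.5, 2.5, 3.5):
    print(x, log_poly(x), math.log(x))   # the columns agree better the closer x is to 2
```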
But most of the functions we want to do work with are, or at least are continuous in stretches. Polynomials let us understand the functions that describe most real stuff. Nice for mathematicians, all right, but how about for real uses? How about for calculations? Oh, polynomials are just magnificent. You know why? Because you can evaluate any polynomial as soon as you can add and multiply. (Also subtract, but we think of that as addition.) Remember, x4 just means “x times x times x times x”, four of those x’s in the product. All these polynomials are easy to evaluate. Even better, we don’t have to evaluate them. We can automate away the evaluation. It’s easy to set a calculator doing this work, and it will do it without complaint and with few unforeseeable mistakes. Now remember that thing where we can make a polynomial close enough to any continuous function? And we can always set a calculator to evaluate a polynomial? Guess that this means about continuous functions. We have a tool that lets us calculate stuff we would want to know. Things like arccosines and logarithms and Bessel functions and all that. And we get nice easy to understand numbers out of them. For example, that third polynomial I gave you above? That’s not just infinitely long. It’s also a polynomial that approximates the natural logarithm. Pick a positive number x that’s between 0 and 4 and put it in that polynomial. Calculate terms and add them up. You’ll get closer and closer to the natural logarithm of that number. You’ll get there faster if you pick a number near 2, but you’ll eventually get there for whatever number you pick. (Calculus will tell us why x has to be between 0 and 4. Don’t worry about it for now.) So through polynomials we can understand functions, analytically and numerically. And they keep revealing things to us. We discovered complex-valued numbers because we wanted to find roots, values of x that make a polynomial of x equal to zero. Some formulas worked well for third- and fourth-degree polynomials. (They look like the quadratic formula, which solves second-degree polynomials. The big difference is nobody remembers what they are without looking them up.) But the formulas sometimes called for things that looked like square roots of negative numbers. Absurd! But if you carried on as if these square roots of negative numbers meant something, you got meaningful answers. And correct answers. We wanted formulas to solve fifth- and higher-degree polynomials exactly. We can do this with second and third and fourth-degree polynomials, after all. It turns out we can’t. Oh, we can solve some of them exactly. The attempt to understand why, though, helped us create and shape group theory, the study of things that look like but aren’t numbers. Polynomials go on, sneaking into everything. We can look at a square matrix and discover its characteristic polynomial. This allows us to find beautifully-named things like eigenvalues and eigenvectors. These reveal secrets of the matrix’s structure. We can find polynomials in the formulas that describe how many ways to split up a group of things into a smaller number of sets. We can find polynomials that describe how networks of things are connected. We can find polynomials that describe how a knot is tied. We can even find polynomials that distinguish between a knot and the knot’s reflection in the mirror. Polynomials are everything. ## Terrible And Less-Terrible Things with Pi We are coming around “Pi Day”, the 14th of March, again. 
I don’t figure to have anything thematically appropriate for the day. I figure to continue the Leap Day 2016 Mathematics A To Z, and I don’t tend to do a whole two posts in a single day. Two just seems like so many, doesn’t it? But I would like to point people who’re interested in some π-related stuff to what I posted last year. Those posts were: • Calculating Pi Terribly, in which I show a way to work out the value of π that’s fun and would take forever. I mean, yes, properly speaking they all take forever, but this takes forever just to get a couple of digits right. It might be fun to play with but don’t use this to get your digits of π. Really. • Calculating Pi Less Terribly, in which I show a way to do better. This doesn’t lend itself to any fun side projects. It’s just calculations. But it gets you accurate digits a lot faster.
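If you want a taste of the "terribly" style right now, here is a sketch of mine in Python of the classic dart-throwing estimate. I'm not claiming it's the method in that older post, but it shares the important property: it's fun to play with, and it takes absurdly many darts per correct digit.

```python
import random

def estimate_pi(darts):
    # Throw darts at the unit square; the fraction landing inside the quarter circle is pi/4.
    inside = sum(1 for _ in range(darts)
                 if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4 * inside / darts

for darts in (100, 10_000, 1_000_000):
    print(darts, estimate_pi(darts))   # the error shrinks only like 1/sqrt(darts)
```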
# How to measure the minimum width or height of several boxes? I know we can get the maximum width or height of several boxes via: \setbox0=\vbox{\hbox{a}\hbox{b}\hbox{c}} The maximum width is \the\wd0 \setbox0=\hbox{\hbox{a}\hbox{b}\hbox{c}} The maximum height is \the\ht0 But how to get the minimum width or height? - The idea is simple: set the boxes and measure them. \documentclass{article} \makeatletter \newcommand{\settominwidth}[1]{\saltyegg@settomin{\wd}{#1}} \newcommand{\settominheight}[1]{\saltyegg@settomin{\ht}{#1}} \newcommand{\settomindepth}[1]{\saltyegg@settomin{\dp}{#1}} \newcommand{\saltyegg@settomin}[3]{% #2\maxdimen \@for\next:=#3\do{% \sbox\z@{\next}% \ifdim#1\z@<#2% #2=#1\z@ \fi}% } \makeatother \newlength{\saltyeggtest} \begin{document} \settominwidth{\saltyeggtest}{a,b,c,f} \the\saltyeggtest \settominheight{\saltyeggtest}{a,b,c,f} \the\saltyeggtest \settomindepth{\saltyeggtest}{a,b,c,f} \the\saltyeggtest \end{document} - Here is at least one straight-forward way without abstraction into a macro: \newdimen\minwd % what needs to happen in order to find the minimum width inner hbox in: % \hbox{\hbox{first}\hbox{second}\hbox{third}} \leavevmode % otherwise hboxes stack \setbox0\hbox{first}% \minwd=\wd0 \box0 \setbox0\hbox{second}% \ifdim\wd0<\minwd \minwd=\wd0 \fi \box0 \setbox0\hbox{third}% \ifdim\wd0<\minwd \minwd=\wd0 \fi \box0 minwd = \the\minwd \bye In another answer I have a made a macro for a similar thing. - If you need an expandable solution, and the boxes are already there (thus avoiding the non-expandable step of putting some material into boxes), and use a tex engine with e-TeX extensions enabled: % compile with etex (or pdftex, etc...) as this requires e-TeX extensions % \input xint.sty \def\minimalwidthofboxes #1{% \dimexpr\xintiMinof {\xintApply{\number\wd\firstofone}{#1}}sp\relax } \long\def\firstofone #1{#1}% \long in case \firstofone already exists and was % declared long % why \firstofone? because \xintApply\macro{{item1}..{item2}} does % \macro{item1}, hence here this would give \number\wd{\bA} which is illicit, we % want \number\wd\bA without braces (besides, on the other hand, it doesn't % matter if the list contains the single token \bA or the braced token {\bA}) %% EXAMPLES \newbox\bA \newbox\bB \newbox\bC \setbox\bA\hbox{Aah} \setbox\bB\hbox{BB} \setbox\bC\hbox{CCC} \the\minimalwidthofboxes {\bA\bB\bC}\ % or equivalently {{\bA}{\bB}{\bC}} is the minimal width among \the\wd\bA, \the\wd\bB, \the\wd\bC. \newbox\bD \setbox\bD\hbox{bb} \the\minimalwidthofboxes {{\bA}{\bB}{\bC}{\bD}} is the minimal width among \the\wd\bA, \the\wd\bB, \the\wd\bC\ and \the\wd\bD. \bye -
# Degree of the extension $\mathbb{Q}(\sqrt[5]{7}+\sqrt[5]{49})$

The original question is to find the degree of the irreducible polynomial of $$3+\sqrt[5]{7}+\sqrt[5]{49}$$, but it's equivalent to finding $$[\mathbb{Q}(3+\sqrt[5]{7}+\sqrt[5]{49}):\mathbb{Q}]$$. $$\mathbb{Q}(3+\sqrt[5]{7}+\sqrt[5]{49})=\mathbb{Q}(\sqrt[5]{7}+\sqrt[5]{49})=\mathbb{Q}(\sqrt[5]{7}+(\sqrt[5]{7})^{2})$$. But I don't know how to continue. If the question were about $$\mathbb{Q}(\sqrt[3]{7}+(\sqrt[3]{7})^{2})$$: $$\alpha=\sqrt[3]{7}+(\sqrt[3]{7})^{2}$$ $$\alpha^{3}=7+3\cdot 7\sqrt[3]{7}+3\cdot 7(\sqrt[3]{7})^{2}+49$$ $$\alpha^{3}=21\alpha+56$$ $$X^{3}-21X-56$$ is irreducible, by Eisenstein's criterion with $$p=7$$. But this method doesn't work with $$n=5$$, or at least it isn't clear how to find the relation. I think there should be another method to solve this more easily.

Since $$\mathbb Q\subseteq \mathbb{Q}(\sqrt[5]{7}+\sqrt[5]{49}) \subseteq \mathbb{Q}(\sqrt[5]{7})$$ and $$[\mathbb{Q}(\sqrt[5]{7}):\mathbb Q]=5$$, we must have $$[\mathbb{Q}(\sqrt[5]{7}+\sqrt[5]{49}):\mathbb Q] =1$$ or $$5$$. But $$\sqrt[5]{7}+\sqrt[5]{49}$$ is not rational, for otherwise $$\sqrt[5]7$$ would be a root of some polynomial of the form $$x^2+x-q$$, where $$q\in\mathbb Q$$, contradicting the fact that $$x^5-7$$ is irreducible.

Hint. Let us call your element $$\alpha$$. It is in the field $$\mathbf{Q}(\sqrt[5]{7})$$, which is of degree $$5$$ over $$\mathbf{Q}$$. As $$5$$ is prime, it suffices by the tower law to prove that $$\alpha\not\in \mathbf{Q}$$ to conclude that $$\alpha$$ also has degree $$5$$ over $$\mathbf{Q}$$.
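Not part of the original thread, but if you'd like a computational cross-check of the degree, SymPy can construct the minimal polynomial directly. A small sketch, assuming SymPy is installed:

```python
from sympy import Rational, Symbol, degree, minimal_polynomial

x = Symbol('x')
alpha = 7**Rational(1, 5) + 7**Rational(2, 5)   # the fifth roots of 7 and of 49

p = minimal_polynomial(alpha, x)
print(p)               # an irreducible quintic over Q
print(degree(p, x))    # 5, so [Q(alpha):Q] = 5; adding 3 to alpha changes nothing
```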
## Radial probability density / quantum numbers. In my notes for a module on atomic and molecular physics it has this statement: "For a given n the probability density of finding e- near the nucleus decreases as l increases, because the centrifugal barrier pushes the e- out. So the low-l orbitals are called penetrating." I just want to clear a few things up to make sure I understand this correctly. In a text book I found a good set of graphs comparing different combinations of n and l: http://img.photobucket.com/albums/v319/Adwodon/IMG.jpg Taking the n=3 set as the example, it appears to me that for l=0 the average distance is actually further from the nucleus than l=1,2. However there are 2 other peaks which seem to be roughly the same distance as the most probable distance for n=1,2 (I don't know if this is just coincidental?) So would I be totally wrong if I thought of these as sets of orbits (ie n=3 l=0 has 3 sets) and that low-l orbitals can 'penetrate' into sets of orbits which are closer to the nucleus (which are of a similar orbits to lower energy orbitals). As l increases the electron can no longer "penetrate" into lower sets due to increased angular momentum / centrifugal barrier(?), and the most probable position of the individual sets moves closer to the nucleus. For example (distances are made up and represent the location of the peaks): n=3 l=0 has Set 1 @ r=1 , Set 2 @ r=5, Set 3 @ r= 15 at n=3 l=1, set one is now inaccessible and set 2 /3 have moved closer: Set 2 @ r=4.5, Set 3 @ r= 14 and for n=3 l=2 both set 1/2 are inaccessible Set 3 @ r=12 Although I cant say I really know why the average position would move closer to the nucleus as the angular momentum is increased? Ultimately im just trying to put it into a form I can understand rather than have to learn by rote so unless my thinking is completely self defeating I don't mind if it doesn't give a totally accurate picture. Also please correct me if i'm using incorrect terminology, this is all fairly new to me and ive always been slow when it comes to buzzwords. PhysOrg.com physics news on PhysOrg.com >> A quantum simulator for magnetic materials>> Atomic-scale investigations solve key puzzle of LED efficiency>> Error sought & found: State-of-the-art measurement technique optimised Recognitions: Science Advisor Quote by adwodon In my notes for a module on atomic and molecular physics it has this statement: "For a given n the probability density of finding e- near the nucleus decreases as l increases, because the centrifugal barrier pushes the e- out. So the low-l orbitals are called penetrating." I just want to clear a few things up to make sure I understand this correctly. In a text book I found a good set of graphs comparing different combinations of n and l: http://img.photobucket.com/albums/v319/Adwodon/IMG.jpg Taking the n=3 set as the example, it appears to me that for l=0 the average distance is actually further from the nucleus than l=1,2. However there are 2 other peaks which seem to be roughly the same distance as the most probable distance for n=1,2 (I don't know if this is just coincidental?) So would I be totally wrong if I thought of these as sets of orbits (ie n=3 l=0 has 3 sets) and that low-l orbitals can 'penetrate' into sets of orbits which are closer to the nucleus (which are of a similar orbits to lower energy orbitals). 
As l increases the electron can no longer "penetrate" into lower sets due to increased angular momentum / centrifugal barrier(?), and the most probable position of the individual sets moves closer to the nucleus. For example (distances are made up and represent the location of the peaks): n=3 l=0 has Set 1 @ r=1 , Set 2 @ r=5, Set 3 @ r= 15 at n=3 l=1, set one is now inaccessible and set 2 /3 have moved closer: Set 2 @ r=4.5, Set 3 @ r= 14 and for n=3 l=2 both set 1/2 are inaccessible Set 3 @ r=12 Although I cant say I really know why the average position would move closer to the nucleus as the angular momentum is increased? Ultimately im just trying to put it into a form I can understand rather than have to learn by rote so unless my thinking is completely self defeating I don't mind if it doesn't give a totally accurate picture. Also please correct me if i'm using incorrect terminology, this is all fairly new to me and ive always been slow when it comes to buzzwords. First of all, the s-orbitals (l=0) are characterized as "penetrating" simply because they have non-zero probability at the nucleus ... the mathematical form of the wavefunction is just a (normalized) decaying exponential, so it actually has its highest value at r=0 (i.e. at the nucleus). All other l-values have an angular node at the nucleus (due to the centrifugal barrier), and so the wavefunction goes to zero at r=0. Furthermore, the centrifugal barrier is larger for larger l, meaning that the wavefunction is excluded from a larger region around the nucleus. Also, you seem to be confusing the peaks in plots of the radial probability density (i.e. $$r^2\psi^*\psi$$ with the average value of the radius. The average value is given by the expectation value of r, $$<r>=\int_0^{2\pi}d\phi\int_0^{\pi}sin\theta d\theta\int_0^{\infty}r^2 dr \psi^*\:r\:\psi$$ which will not necessarily correspond to the peak in the probability density, particularly for wavefunctions with radial nodes. Have you learned about those yet? They are the reason the radial probability density goes to zero at the specific values of r you mentioned. Hi, adowodon. Quote by adwodon Although I cant say I really know why the average position would move closer to the nucleus as the angular momentum is increased? Let us consider the states that have same n but different l. The state energy is determined by n, not l or m. Energy of the states consists of plus rotation energy and minus potential energy. The state of high l has more rotation energy than lower l. In order energy to be same, the state of higher l must have more minus potential energy than lower l. It means that the former is closer to origin than the latter is. PS the graph in the textbook seems to be P(r) the probability density that the distance of electron is between r and r+dr regardless of angle. P(r) dr =|R_n,l(r)|^2 r^2 dr, P(0)=0. Regards. Recognitions: Science Advisor ## Radial probability density / quantum numbers. Quote by SpectraCat First of all, the s-orbitals (l=0) are characterized as "penetrating" simply because they have non-zero probability at the nucleus ... the mathematical form of the wavefunction is just a (normalized) decaying exponential, so it actually has its highest value at r=0 (i.e. at the nucleus). All other l-values have an angular node at the nucleus (due to the centrifugal barrier), and so the wavefunction goes to zero at r=0. Furthermore, the centrifugal barrier is larger for larger l, meaning that the wavefunction is excluded from a larger region around the nucleus. 
Also, you seem to be confusing the peaks in plots of the radial probability density (i.e. $$r^2\psi^*\psi$$) with the average value of the radius. The average value is given by the expectation value of r, $$<r>=\int_0^{2\pi}d\phi\int_0^{\pi}sin\theta d\theta\int_0^{\infty}r^2 dr \psi^*\:r\:\psi$$ which will not necessarily correspond to the peak in the probability density, particularly for wavefunctions with radial nodes. Have you learned about those yet? They are the reason the radial probability density goes to zero at the specific values of r you mentioned. Sorry .. just noticed this got cut off somehow, and my full answer didn't get posted.. here is the bit that got lost: The general formula for the average radius for a given orbital is given by: $$<r> = \frac{a_0}{2}[3n^2-l(l+1)]$$ So, as you can see, the expectation value does get smaller as l increases, as you noticed from the plots. The reason for this is that the radial nodes "push" the density out farther from the nucleus, where the volume element in the radial integral (i.e. $$4\pi r^2$$) is larger. However, this does not change the explanation I gave earlier of why lower-l orbitals are more "penetrating". That has to do with the behavior near the nucleus, not the average value of the radius. Ah, I see, thanks! About nodes though, looking at this picture: http://upload.wikimedia.org/wikipedi...sity_Plots.png I see that the (2,0,0), (3,0,0) and (4,0,0) all have probability at the nucleus, whereas ones with l>0 don't (and the bigger l is, the bigger the 'gap' at the centre), so that's what's meant by penetrating... Would the dark circles on these l=0 pictures correspond to radial nodes? Where the radial equation = 0? And on the l>0 pictures, do the dark lines going out of the centre represent the angular nodes? Where the angular eq = 0? It looks like if l=n-1 there are only angular nodes? I'm guessing that has something to do with the (n-l-1)! part of the angular equation? And the lack of radial nodes because, e.g., for n=2, l=0 there is a term which goes (1-Zr/2a), so if Zr/2a = 1 the radial eq = 0, but for n=2, l=1 it's just (Zr/a), so the radial equation can't = 0 and so no nodes (except at r=0)? I think it's all starting to fall into place :)
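Not from the thread, but for anyone who wants to see those radial probability densities and check the quoted <r> formula numerically, here is a small Python sketch. It uses the standard hydrogenic radial functions (Z = 1, distances in Bohr radii), written with SciPy's generalized Laguerre polynomials; the normalization below matches SciPy's Laguerre convention.

```python
import numpy as np
from math import factorial
from scipy.special import genlaguerre
from scipy.integrate import trapezoid

def R_nl(n, l, r):
    # Hydrogen radial wavefunction R_{nl}(r) with a0 = 1, SciPy Laguerre convention.
    rho = 2.0 * r / n
    norm = np.sqrt((2.0 / n) ** 3 * factorial(n - l - 1) / (2.0 * n * factorial(n + l)))
    return norm * np.exp(-rho / 2.0) * rho ** l * genlaguerre(n - l - 1, 2 * l + 1)(rho)

r = np.linspace(1e-6, 80.0, 40000)
for n, l in [(3, 0), (3, 1), (3, 2)]:
    P = r ** 2 * R_nl(n, l, r) ** 2              # radial probability density P(r)
    mean_r = trapezoid(r * P, r)                  # numerical expectation value of r
    formula = 0.5 * (3 * n ** 2 - l * (l + 1))    # the (a0/2)[3n^2 - l(l+1)] formula
    print(n, l, round(mean_r, 3), formula)        # the two values should agree
```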
# How to stop worrying about enriched categories?

Recently I realized that ordinary category theory is not a suitable language for a big portion of the math I'm having a hard time with these days. One thing in common to all my examples is that they all naturally fit into the enriched categorical context.

1. 2-Categories - Enriched in categories. Examples: Stacks ($BG$, $QCoh$) are 2-sheaves, the 2-category of rings and bi-modules.
2. DG categories - Enriched in chain complexes. Prime example: The dg category of chain complexes of $\mathcal{O}_X$-modules over $X$.
3. Topological/Enriched categories - Enriched in topological spaces/simplicial sets. Prime example: $\mathsf{Top}$.

I now have the impression that many of the difficulties I face in trying to learn about math that involves the three above originate in the gap between the ordinary categorical language and the enriched one. In particular, the natural constructions from ordinary category theory (limits, adjunctions etc.) are no longer meaningful and I'm practically blindfolded. Is there a friendly introduction to enriched category theory somewhere where I can get comfortable with this general framework? Is it a bad idea to pursue this direction?

• I don't know how friendly it is, but do you know Kelly's book? tac.mta.ca/tac/reprints/articles/10/tr10abs.html – Todd Trimble Mar 14 '16 at 16:24
• @ToddTrimble I'm aware of it. It looks rather technical though, so I thought maybe it'd be a good idea to ask for advice. – Saal Hardali Mar 14 '16 at 16:27
• The trick I use is not to start worrying about enriched categories! :-P – Asaf Karagila Mar 14 '16 at 16:29
• It is not always necessary to understand the general structure to understand an example of it... – Thomas Rot Mar 14 '16 at 18:13
• Maybe you want to look at Riehl's book math.jhu.edu/~eriehl/cathtpy.pdf – Tom Goodwillie Mar 14 '16 at 20:34

Have a look around on my n-Lab 'home page': https://ncatlab.org/timporter/show/HomePage and go down to the 'resources'. There are various quite old sets of notes that look at simplicially enriched categories, homotopy coherence etc. and that may help you with homotopy limits, homotopy coherent / $\infty$-category ends and coends, etc. With Cordier, I wrote a paper: Homotopy Coherent Category Theory, Trans. Amer. Math. Soc. 349 (1997) 1-54, which aimed to give the necessary tools to allow homotopy coherent ends and coends (and their applications) to be pushed through to the $\mathcal{S}$-enriched setting and so to be used 'without fear' by specialists in alg. geometry, non-abelian cohomology, etc.

• I agree with Peter, but it depends what you mean. Some people mean 'bicategory' when they say 2-category, which is a slightly different concept. Every bicategory is bicategorically equivalent to a 2-category, but that doesn't mean we can go whole hog and say that the weak 3-category of bicategories is 3-equivalent to the 3-category $2$-Cat, so there's a little wobble there between the theory of bicategories and the theory of 2-categories (defined from the POV of Cat-enriched category theory). – Todd Trimble Mar 15 '16 at 13:14
Thread: Write the series using summation notation 1. Write the series using summation notation 3. a) Write the series 60 -15 +15/4 -15/16 +15/64 -15/256. My work: I discovered each term is divided by -4. 60 / -4 = -15 15 / -4 = 15/4 15/4 / -4 = -15/16 Total terms 6 (above sigma, also n) First term k=1 (first term) Now for the part on the right of sigma, I have difficulty with this. I wrote the part on the right of sigma, which should be {a+(n-1)d}, as {60+95) / -4}. I've never encountered a series where everything is divided rather than multiplied so I'm a little lost. 2. Originally Posted by thekrown 3. a) Write the series 60 -15 +15/4 -15/16 +15/64 -15/256. My work: I discovered each term is divided by -4. 60 / -4 = -15 15 / -4 = 15/4 15/4 / -4 = -15/16 Total terms 6 (above sigma, also n) First term k=1 (first term) Now for the part on the right of sigma, I have difficulty with this. I wrote the part on the right of sigma, which should be {a+(n-1)d}, as {60+95) / -4}. I've never encountered a series where everything is divided rather than multiplied so I'm a little lost. This is geometric... $\sum_{n=0}^{\infty}60\left(\frac{-1}{4}\right)^n$ 3. My question only lists 6 terms for the series. Would it be okay to list n=1 and in the right side of sigma have n-1 instead of just n? Second, with a 6 term series would the infinity sign then be 6? 4. Originally Posted by thekrown My question only lists 6 terms for the series. Would it be okay to list n=1 and in the right side of sigma have n-1 instead of just n? Second, with a 6 term series would the infinity sign then be 6? Sorry... Correction $\sum_{n=1}^660\left(\frac{-1}{4}\right)^{n-1}$ 5. Thank you. Really helped a lot. Cheers! 6. I just reviewed this question and get -4 as the 'r' value not -1/4. Can you help clarify this issue?
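Not part of the original thread, but a quick numerical check (sketched here in Python) shows why the common ratio is -1/4 rather than -4: dividing each term by -4 is the same thing as multiplying it by -1/4, and r is the factor you multiply by.

```python
from fractions import Fraction

terms = [60 * Fraction(-1, 4) ** (n - 1) for n in range(1, 7)]
print(terms)                                     # the six terms, as exact fractions
print(terms[1] / terms[0], terms[2] / terms[1])  # each consecutive ratio is -1/4
print(sum(terms))                                # 12285/256, about 47.99
```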
# Quantitative Biology Problem Set 1¶ Mickey Atwal, CSHL $\mathbf{1}$. Estimate how many mutations in a 5ml culture of Escherichia coli that originally grew from a single bacterium. $\mathbf{2}$. High-throughput screening assays, in the field of drug discovery, can typically test a library of millions of compounds to identify a few that are active. The challenge is to figure out how many assays do we need to perform before we can reliably identify a successful compound. Let’s assume that the success rate in these screens is one in ten thousand, $10^{−4}$ . • (a) What is the probability of observing at least one active compound out of two assays? • (b) What is the probability of observing at least one active compound out of N assays? • (c) How large a library do we need to be 99% sure that we will find at least one active molecule? • (d) Can you see a connection to your answer in part (b) to the statistical significance problem during multiple hypothesis testing? $\mathbf{3}$ . A neuron generates spikes at an average rate of $r$ spikes per second (Hertz). We can assume a homogeneous Poisson process to model the firing of spikes. • (a) What is the average time between spikes? • (b) What is the probability distribution for the time, $T$ , between spikes? • (c) The clock strikes midnight between two spikes. What is the mean time from the clock striking to the next spike? • (d) How do you reconcile the results of (a) and (c) ? $\mathbf{4}$ . Let’s simulate a Poisson process with a constant rate $m$ in Python. • (a) Consider a window of time $T$, which you split into very small bins of duration $dt$. In each bin use np.random.rand to generate a random number that you can compare to some threshold $k$; if the random number is above, put $1$ in the time bin, else put a $0$. How is the threshold $k$ related to the rate of events? Use $T = 10^3$s and rate $m = 10s^{-1}$. Use a small enough time window $dt$ that the probability of having $2$ events per bin is negligible. • (b) Check that the generated process obeys Poisson statistics. Take successive windows of duration $τ$ from your simulated process (of total duration $T$) and count the number of events, $n$, in each window. What is the average number, $\langle n \rangle$ , and the variance, $\sigma_n^2$? What do you expect and do the expectations match the data? Plot a distribution of $P(n)$, obtained from your simulation, and compare it to the Poisson distribution that you expect, on the same plot. If you make $T$ very long and $dt$ small enough, the agreement should be almost perfect. • (c) Measure the inter-event interval distribution: In your simulated data, compare the distances between events and plot them as a normalized probability distribution. Compare to the theoretical expectation on the same plot. Make the plots also in the log-scale to see the behavior of distributions in the tail. $\mathbf{5}$. Hemophilia is a disease associated with a recessive gene on the X chromosome. Since human males are XY, a male inheriting the mutant gene will always be affected. Human females, XX, with only one bad copy of the gene are simply carriers and are not affected, whereas females with two bad copies will be affected. Consider a woman with an affected brother. Her parents, her husband, and herself are all unaffected. • (a) What is the probability that this woman is a carrier? • (b) She later has two sons, neither of whom is affected. With this new information, what is the posterior probability that she is a carrier? 
(Assume no infidelity in the family and sons are not identical twins). $\mathbf{6}$. A published study reported the micrarray expressions of a select number of genes in two kinds of tumors: those with BRCA1 mutations and those with BRCA2 mutations. The goal was to detect genes that showed differential expression across the two conditions. The data consists of the expression ratios of $3226$ genes on $n_1 = 7$ BRCA1 arrays and $n_2 = 8$ BRCA2 arrays. • (b) Convert the expression ratios for each gene i into $\log_2$ values. In this representation, going down by a factor of $1/2$ has the same magnitude as going up by a factor of $2$. • (c) Calculate the mean $\langle x \rangle$ and sample variance $s^2$ for each gene in each tumor type. • (d) The null hypothesis is that there is no differential expression and so we calculate the two-sample t-statistic. For example the t-statistic for gene i is $$t_i = \displaystyle \frac{ \langle x_{i,{\rm BRCA1}} \rangle − \langle x_{i,{\rm BRCA2}} \rangle}{\sqrt{ \displaystyle \frac{ s_{i,{\rm BRCA1}}^2}{n_1} + \frac{ s_{i,{\rm BRCA2}}^2}{n_2}}}$$ Calculate this for each gene. • (e) Normally, if we had a large number of samples or if the data looked Gaussian for each gene, we would employ a t-test and look up a table containing values of the so-called Student’s t-distribution to figure out the p-value for each gene. However, the sample sizes are way too small to justify using the Student’s t-distribution. Instead we will have to resort calculating the p-values using a Monte Carlo permutation procedure. For each gene calculate a randomized t-statistic 1000 times by randomly shuffling (permuting) the labels on the array, i.e. randomly assign the $n = 15$ arrays to $n_1 = 7$ BRCA1 arrays and $n_2 = 8$ BRCA2 arrays. The null hypothesis is that there is no differential expression and these 1000 randomized t-statistic values will form the null hypothesis distribution. Calculate the p-value for each gene by using the permuted distribution of t-statistics and comparing these values with your results from part (d). • (f) Plot a histogram of all the p-values. • (g) Estimate approximately how many genes are differentially expressed.
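Since the expression data file isn't reproduced here, the following is only an illustrative sketch (in Python, on synthetic stand-in data) of the permutation procedure from part (e); the array labels and random values are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, n_genes = 7, 8, 3226
data = rng.normal(size=(n_genes, n1 + n2))        # stand-in for log2 expression ratios

def t_stat(x, is_brca1):
    # Two-sample t statistic of the form given in part (d), one value per gene.
    a, b = x[:, is_brca1], x[:, ~is_brca1]
    return (a.mean(axis=1) - b.mean(axis=1)) / np.sqrt(
        a.var(axis=1, ddof=1) / a.shape[1] + b.var(axis=1, ddof=1) / b.shape[1])

labels = np.array([True] * n1 + [False] * n2)     # True marks a BRCA1 array
t_obs = t_stat(data, labels)

n_perm = 1000
exceed = np.zeros(n_genes)
for _ in range(n_perm):
    exceed += np.abs(t_stat(data, rng.permutation(labels))) >= np.abs(t_obs)
p_values = (exceed + 1) / (n_perm + 1)            # two-sided permutation p-values
```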
# Mill

Mills are buildings that become available after The Wheel Upgrade.

## Cost of Building

Mills have a gradual increase of cost based on the number of mills currently built. The first mill requires 100 Wood and 100 Stone. Mills have a cost of Wood and Stone equal to the following equation, with any resulting fraction rounded to 0:

M = Number of Mills currently owned

Resource (Wood and Stone) = $100 * (M + 1) * (1.05 ^ M)$

For example, the cost to produce Mill #10 would be:

$100 * (9 + 1) * (1.05 ^ 9)$
$100 * 10 * 1.55132$
$1551$

So you need 1551 Wood and 1551 Stone to produce Mill #10.

## Result on Farming

The purpose of Mills is to provide a boost to the production of farming. Population totals influence a hidden variable called MillMod as follows:

If population of humans > 0 OR population of zombies > 0 then MillMod = # of Humans / (# of Humans + # of Zombies)

This lowers the effectiveness of mills based on how many zombies there are in proportion to the number of humans you have in your population. The value ranges from 0 to 1, for a fully zombie to a fully human population respectively. This is then used in conjunction with a larger formula for food, with Mills equal to the number of Mills you currently have built:

Net Food Production = Other Factors * (1 + (Mills * .005 * MillMod))
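Both formulas are easy to reproduce in code. The sketch below (Python, not from the wiki) assumes that "any resulting fraction rounded to 0" means the fractional part is simply dropped, and that MillMod is 0 when there is no population at all; it also re-checks the Mill #10 example.

```python
import math

def mill_cost(mills_owned):
    # Wood (and Stone) cost of the next mill, fractional part dropped.
    return math.floor(100 * (mills_owned + 1) * 1.05 ** mills_owned)

def food_multiplier(mills, humans, zombies):
    # The (1 + Mills * .005 * MillMod) factor applied to the other food production factors.
    mill_mod = humans / (humans + zombies) if humans + zombies > 0 else 0
    return 1 + mills * 0.005 * mill_mod

print(mill_cost(0))                 # 100  -> the first mill
print(mill_cost(9))                 # 1551 -> Mill #10, matching the worked example above
print(food_multiplier(10, 80, 20))  # 1.04 -> ten mills with an 80% human population
```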
## Experimenter.

Using a Cscript you can control the simulation in Caspoc. All you need is to define a Cscript file using ANSI-C where you define what actions should be taken to control the simulation. Store this file with a ".cs" extension in the project directory belonging to your simulation file. If your simulation file is called "C:/Caspoc/MySamples/MyExperiment.csi", store the Cscript file in the directory "C:/Caspoc/MySamples/MyExperiment", for example as "C:/Caspoc/MySamples/MyExperiment/MyFirstExperiment.cs"

In the project manager you have access to this file under "Project/Files", where you can double click the file to open it in a text editor.

To run the experimenter:

1. Open the simulation file (*.csi)
2. Create or edit the (*.cs) file
3. Run the experimenter by selecting "Tools/Start Cscript" from the menu

main()
{
  int a;
  int i;
  int Vc;
  a=500;
  Vc=4;
  CaspocSetTscreen(a);
  Caspocfopen("cscript_output.txt");
  for(i=2;i<10;i=i+1)
  {
    print(i);
    CaspocSetParameter("D",1,i);
    CaspocSetComponentValue("R1","5");
    CaspocSetComponentIC("C1",Vc);
    CaspocStartSimulation("Some Description");
    /* CaspocContinueSimulation("notice"); */
    a=CaspocGetOutput("VoutmV");
    Vc=a/1000;
    Caspocfprintf3("outputfile",1,i,a);
  }
  Caspocfclose("cscript_output.txt");
  print("script finished at i=");
  print(i);
  CaspocMessageBox("Title: Cscript","Message: Cscript finished!");
  return a;
}

The API calls are explained below. (These calls are beta functions and are subject to changes/improvements.)

You always need a main() function, which is called first. Inside the {} you can define integer variables and initialize them, as defined in the ANSI C standard.

main()
{
  int a;
  int i;
  int Vc;
  a=500;
  Vc=4;

The total time of the simulation is defined using CaspocSetTscreen(int Tscreen). Tscreen is defined in ms.

a=500;
CaspocSetTscreen(a);

Open a text file for storing results from the simulation. The argument is the file name.

void Caspocfopen("cscript_output.txt");

Write numerical results to the message window at the bottom of the Caspoc User Interface.

void print(int i);

Display a text message in the message window at the bottom of the Caspoc User Interface.

print("script finished at i=");

Set a parameter in a block. The arguments are to be filled as:
1. Name of the block
2. Number of the parameter
3. Value to be set

CaspocSetParameter("D",1,i);

Set a circuit component value. The arguments are to be filled as:
1. Name of the circuit component
2. Value to be set

CaspocSetComponentValue("R1","5");

Set the initial condition for a circuit component. The arguments are to be filled as:
1. Name of the block
2. Initial value to be set.
(Initial voltage for a capacitor, initial current for an inductor.)

CaspocSetComponentIC("C1",Vc);

Start the simulation. The argument is not used.

CaspocStartSimulation("Some Description");

Continue the simulation. The argument is not used.

CaspocContinueSimulation("notice");

When the simulation is finished, you can get the output values from the blocks in the block diagram by calling CaspocGetOutput();, where the argument is the name of the block. This function returns the value as an integer.

int CaspocGetOutput("VoutmV");

Writes numerical results to the file opened previously with Caspocfopen();. The first argument is reserved for the file pointer and is not used in this beta version. The other arguments are written as numerical values in the text file opened with Caspocfopen();. A newline character is added by this function.

Caspocfprintf3("outputfile",1,i,a);

Close the text file for storing results from the simulation. The argument is not used in this beta version.

Caspocfclose("cscript_output.txt");

Displays a message box on the screen. The first argument is the title of the message box and the second argument is the displayed message.

CaspocMessageBox("Title: Cscript","Message: Cscript finished!");
# A factory has three types of machines, each of which works a

A factory has three types of machines, each of which works at its own constant rate. If 7 Machine As and 11 Machine Bs can produce 250 widgets per hour, and if 8 Machine As and 22 Machine Cs can produce 600 widgets per hour, how many widgets could one Machine A, one Machine B, and one Machine C produce in one 8-hour day?

A. 400
B. 475
C. 550
D. 625
E. 700

EMPOWERgmat Instructor
Re: A factory has three types of machines, each of which works a

Hi All,

When dealing with complex-"looking" prompts, it's important to remember that they were DESIGNED to be solved, so sometimes you have to 'play around' with what you're given to get to the correct answer.

From this prompt, we can create two equations:
7A + 11B = 250/hour
8A + 22C = 600/hour

We're asked for (A+B+C)/hour over the course of 8 hours. With the given equations, we have 3 variables but only 2 equations, so this is NOT a typical "system" question. The answers ARE numbers though, so there must be a way to get to A+B+C from the two equations that we have....

Notice how we have 11B in one equation and 22C in another? It's INTERESTING that they're both multiples of 11.....Maybe that's a 'clue' as to how we can proceed....

If we "double" the entire first equation, we get...
14A + 22B = 500/hour

Now I can add this equation to the second equation:
14A + 22B = 500/hour
8A + 22C = 600/hour
22A + 22B + 22C = 1100/hour

Dividing everything by 22, we get....
A+B+C = 50/hour

Over the course of an 8-hour day, that gives us...50(8) = 400 widgets.
GMAT assassins aren't born, they're made,
Rich

##### Most Helpful Community Reply

VP
Re: A factory has three types of machines, each of which works a

$$rate*time=work$$
$$(7A+11B)*1h=250$$
$$(8A+22C)*1h=600$$ or $$4A+11C=300$$.
Sum those equations: $$7A+11B+4A+11C=550$$ or $$11A+11B+11C=550$$
$$A+B+C=50$$ every hour, in 8 hours $$50*8=400$$

##### General Discussion

Manager
Re: A factory has three types of machines, each of which works a

Let Machine A produce A widgets per hour, B produce B widgets per hour and C produce C widgets per hour.
7A+11B=250 ---(1)
8A+22C=600 ---(2)
(1)+(2) 15A+11B+22C=850
split up 11A+11B+11C + 4A+11C = 850 ---(3)
From (2) 4A+11C=300
Hence (3) becomes 11(A+B+C) = 550
A+B+C = 50. So working together, 1 machine of each type produces 50 widgets an hour. In 8 hours they produce 8*50 = 400 widgets.

Senior Manager
Re: A factory has three types of machines, each of which works a

It is an algebraic manipulation.

SVP
Re: A factory has three types of machines, each of which works a

Let Machine A produce A widgets per hour, B produce B widgets per hour and C produce C widgets per hour.
7A+11B=250 ---(1)
8A+22C=600 ---(2)
Dividing (2) by 2: 4A+11C=300 .....(3)
Adding (1) & (3): 11A+11B+11C = 550
A+B+C=50 per hour
So for eight hrs = 50*8 = 400 = Answer = A

Intern
Re: A factory has three types of machines, each of which works a

The only number divisible by 8 is 400. So choice A

Retired Moderator
Re: A factory has three types of machines, each of which works a

waterstyler wrote:
The only number divisible by 8 is 400. So choice A

I'm a bit skeptical of the validity of this approach. Can we apply it to all problems of this kind? Can someone dive deeper into this matter?

EMPOWERgmat Instructor
Re: A factory has three types of machines, each of which works a

Hi Nevernevergiveup,

You are correct to be cynical about this 'short-cut.'

To start, the prompt did NOT state that each machine produces an integer number of widgets per hour, so the conclusion that an 8-hour shift will produce a 'multiple of 8' widgets is questionable.

Second (and assuming that all of the hourly rates were integers), if this question had appeared on the Official GMAT, the writers would have anticipated that type of thinking and would have made at least two of the answers divisible by 8. Even if that thinking was correct, it would likely allow the Test Taker to eliminate a few answers, but still be left with an educated guess.

GMAT assassins aren't born, they're made,
Rich

Manager
Re: A factory has three types of machines, each of which works a

7A+11B=250 - eq 1
8A+22C=600 - eq 2
Multiply eq one by 2, we get 14A+22B=500
Add the above eq to eq 2:
14A+22B=500
8A+22C=600
__________
22A+22B+22C=1100
Divide by 22: A+B+C=50 widgets are produced in one hr.
In 8 hrs, no of widgets produced = 50x8 = 400

Posted from my mobile device

Board of Directors
Re: A factory has three types of machines, each of which works a
A question that might seem a nightmare.. but very easy to solve, if you know how to approach it...
(1) 1/7A + 1/11B = 250 -> multiply by 2 -> (3) 1/14A + 1/22B = 500
(2) 1/8A + 1/22C = 600
1/22A + 1/22B + 1/22C = 1100
divide by 22 => 50
since we need to know how much they do in 8 hours, multiply by 8. result is 400. A

Manager
Re: A factory has three types of machines, each of which works a

I really need help here. Let's say the rate of one machine A is 1/A; then the rate of 7 machine As should be 7/A, right? Can you please elaborate on the 1/7A which you have taken?

Intern
Re: A factory has three types of machines, each of which works a

Do we have more problems such as this to practice? I straight away went for calculating the individual rates and screwed myself.

EMPOWERgmat Instructor
Re: A factory has three types of machines, each of which works a

Hi anuj11,

Work Formula questions are relatively rare on the Official GMAT - you'll likely see just 1 and it will likely involve 2 entities (re: machines, people, etc.) working on a task. In this prompt, we have 3 entities, which is even rarer. As such, spending a lot of time practicing for this one type of rare prompt probably isn't a good use of your time right now. Make sure that you're nailing all of the BIG categories first before you spend too much energy 'nit-picking' over the rarer question types.

GMAT assassins aren't born, they're made,
Rich

Target Test Prep Representative
Re: A factory has three types of machines, each of which works a
Letting a, b, and c be the hourly output of Machines a, b, and c, respectively, we can create the equations:
7a + 11b = 250
And
8a + 22c = 600
Multiplying the first equation by 2, we have:
14a + 22b = 500
Adding the two equations together we have:
22a + 22b + 22c = 1100
a + b + c = 50
So in 8 hours the 3 machines can produce 400 widgets.

Scott Woodbury-Stewart
Founder and CEO
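As a closing sanity check (not part of the thread), a computer algebra system confirms that A+B+C is pinned down even though the two equations cannot determine A, B and C individually. A quick SymPy sketch:

```python
from sympy import symbols, solve, simplify

A, B, C = symbols('A B C')
sol = solve([7*A + 11*B - 250, 8*A + 22*C - 600], [B, C])   # leaves A free

total_per_hour = simplify(A + sol[B] + sol[C])
print(total_per_hour)       # 50, independent of A
print(8 * total_per_hour)   # 400 widgets in an 8-hour day, answer A
```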
## Hilbert Curve Substitution Tiling

### Info

The Hilbert Curve is one of the earliest FASS-curves. The original algorithm in [hil1891] is based on one substitution rule and an additional rule which describes how the substitutes have to be connected. As briefly mentioned in [pau2021], it is also possible to create the Hilbert Curve by a substitution tiling with two substitution rules and appropriate decorations. The inflation factor $q$ is 2, and the lines are shifted slightly away from the center of the sides to illustrate the matching rules.
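For readers who want to draw the curve, here is a small Python sketch of the classical one-substitution-rule-plus-connection-rule construction, written as the standard Lindenmayer system; this is the textbook L-system formulation, not the two-rule substitution tiling of [pau2021].

```python
def hilbert_points(order):
    # Expand the standard Hilbert L-system, then walk it with a tiny turtle.
    rules = {"A": "-BF+AFA+FB-", "B": "+AF-BFB-FA+"}
    s = "A"
    for _ in range(order):
        s = "".join(rules.get(ch, ch) for ch in s)

    x, y, dx, dy = 0, 0, 1, 0            # start at the origin, heading right
    points = [(x, y)]
    for ch in s:
        if ch == "F":                     # draw one unit segment forward
            x, y = x + dx, y + dy
            points.append((x, y))
        elif ch == "+":                   # turn left 90 degrees
            dx, dy = -dy, dx
        elif ch == "-":                   # turn right 90 degrees
            dx, dy = dy, -dx
    return points

print(len(hilbert_points(3)))   # 64 vertices: the order-3 curve fills an 8 x 8 grid
```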
## Math Notation Help

This glossary will help you build complex mathematical equations using the TeX markup language. This will involve using @@ or $$ before and after the expression to display the desired results.
## Local CP-violation and electric charge separation by magnetic fields from lattice QCD

Bali GS, Bruckmann F, Endrödi G, Fodor Z, Katz SD, Schäfer A (2014). Journal of High Energy Physics 2014(4): 129. DOI: 10.1007/JHEP04(2014)129

Journal article | Published | English

Authors: Bali, G. S.; Bruckmann, F.; Endrödi, Gergely; Fodor, Z.; Katz, S. D.; Schäfer, A.

Abstract: We study local CP-violation on the lattice by measuring the local correlation between the topological charge density and the electric dipole moment of quarks, induced by a constant external magnetic field. This correlator is found to increase linearly with the external field, with the coefficient of proportionality depending only weakly on temperature. Results are obtained on lattices with various spacings, and are extrapolated to the continuum limit after the renormalization of the observables is carried out. This renormalization utilizes the gradient flow for the quark and gluon fields. Our findings suggest that the strength of local CP-violation in QCD with physical quark masses is about an order of magnitude smaller than a model prediction based on nearly massless quarks in domains of constant gluon backgrounds with topological charge. We also show numerical evidence that the observed local CP-violation correlates with spatially extended electric dipole structures in the QCD vacuum.

Year: 2014 | Journal: Journal of High Energy Physics | Volume: 2014 | Issue: 4 | Article no.: 129 | eISSN: 1029-8479 | Page URI: https://pub.uni-bielefeld.de/record/2955751
The only reason i am looking into this is because Free Power battery company here told me to only build Free Power 48v system because the Free Electricity & 24v systems generate to much heat and power loss. Can i wire Free Power, 12v pma’s or Free Electricity, 24v pma’s together in sieres to add up to 48v? If so i do not know how to do it and will that take care of the heat problem? I am about to just forget it and just build Free Power 12v system. Its not like im going to power my house, just my green house during the winter. Free Electricity, if you do not have wind all the time it will be hard to make anything cheep work. Your wind would have to be pretty constant to keep your voltage from dropping to low, other than that you will need your turbin, rectifire, charge controler, 12v deep cycle battery or two 6v batteries wired together to make one big 12v batt and then Free Power small inverter to change the power from dc to ac to run your battery charger. Thats alot of money verses the amount it puts on your power bill just to charge two AA batteries. Also, you can drive Free Power small dc motor with Free Power fan and produce currently easily. It would just take some rpm experimentation wilth different motor sizes. Kids toys and old VHS video recorders have heaps of dc motors. Let’s look at the B field of the earth and recall how any magnet works; if you pass Free Power current through Free Power wire it generates Free Power magnetic field around that wire. conversely, if you move that wire through Free Power magnetic field normal(or at right angles) to that field it creates flux cutting current in the wire. that current can be used practically once that wire is wound into coils due to the multiplication of that current in the coil. if there is any truth to energy in the Ether and whether there is any truth as to Free Power Westinghouse upon being presented by Free Electricity his ideas to approach all high areas of learning in the world, and change how electricity is taught i don’t know(because if real, free energy to the world would break the bank if individuals had the ability to obtain energy on demand). i have not studied this area. i welcome others who have to contribute to the discussion. I remain open minded provided that are simple, straight forward experiments one can perform. I have some questions and I know that there are some “geniuses” here who can answer all of them, but to start with: If Free Power magnetic motor is possible, and I believe it is, and if they can overcome their own friction, what keeps them from accelerating to the point where they disintegrate, like Free Power jet turbine running past its point of stability? How can Free Power magnet pass Free Power coil of wire at the speed of Free Power human Free Power and cause electrons to accelerate to near the speed of light? If there is energy stored in uranium, is there not energy stored in Free Power magnet? Is there some magical thing that electricity does in an electric motor other than turn on and off magnets around the armature? (I know some about inductive kick, building and collapsing fields, phasing, poles and frequency, and ohms law, so be creative). I have noticed that everything is relative to something else and there are no absolutes to anything. Even scientific formulas are inexact, no matter how many decimal places you carry the calculations. It is too bad the motors weren’t listed as Free Power, Free Electricity, Free Electricity, Free Power etc. 
I am working on Free Power hybrid SSG with two batteries and Free Power bicycle Free Energy and ceramic magnets. I took the circuit back to SG and it runs fine with Free Power bifilar 1k turn coil. When I add the diode and second battery it doesn’t work. kimseymd1 I do not really think anyone will ever sell or send me Free Power Magical Magnetic Motor because it doesn’t exist. Therefore I’m not Free Power fool at all. Free Electricity realistic. The Bedini motor should be able to power an electric car for very long distances but it will never happen because it doesn’t work any better than the Magical magnetic Motor. All smoke and mirrors – No Working Models that anyone can operate. kimseymd1Harvey1You call this Free Power reply? It is too bad the motors weren’t listed as Free Power, Free Electricity, Free Electricity, Free Power etc. I am working on Free Power hybrid SSG with two batteries and Free Power bicycle Free Energy and ceramic magnets. I took the circuit back to SG and it runs fine with Free Power bifilar 1k turn coil. When I add the diode and second battery it doesn’t work. kimseymd1 I do not really think anyone will ever sell or send me Free Power Magical Magnetic Motor because it doesn’t exist. Therefore I’m not Free Power fool at all. Free Electricity realistic. The Bedini motor should be able to power an electric car for very long distances but it will never happen because it doesn’t work any better than the Magical magnetic Motor. All smoke and mirrors – No Working Models that anyone can operate. kimseymd1Harvey1You call this Free Power reply? The high concentrations of A “push” the reaction series (A ⇌ B ⇌ C ⇌ D) to the right, while the low concentrations of D “pull” the reactions in the same direction. Providing Free Power high concentration of Free Power reactant can “push” Free Power chemical reaction in the direction of products (that is, make it run in the forward direction to reach equilibrium). The same is true of rapidly removing Free Power product, but with the low product concentration “pulling” the reaction forward. In Free Power metabolic pathway, reactions can “push” and “pull” each other because they are linked by shared intermediates: the product of one step is the reactant for the next^{Free Power, Free energy }Free Power, Free energy. “Think of Two Powerful Magnets. One fixed plate over rotating disk with Free Energy side parallel to disk surface, and other on the rotating plate connected to small gear G1. If the magnet over gear G1’s north side is parallel to that of which is over Rotating disk then they both will repel each other. Now the magnet over the left disk will try to rotate the disk below in (think) clock-wise direction. Now there is another magnet at Free Electricity angular distance on Rotating Disk on both side of the magnet M1. Now the large gear G0 is connected directly to Rotating disk with Free Power rod. So after repulsion if Rotating-Disk rotates it will rotate the gear G0 which is connected to gear G1. So the magnet over G1 rotate in the direction perpendicular to that of fixed-disk surface. Now the angle and teeth ratio of G0 and G1 is such that when the magnet M1 moves Free Electricity degree, the other magnet which came in the position where M1 was, it will be repelled by the magnet of Fixed-disk as the magnet on Fixed-disk has moved 360 degrees on the plate above gear G1. 
So if the first repulsion of Magnets M1 and M0 is powerful enough to make rotating-disk rotate Free Electricity-degrees or more the disk would rotate till error occurs in position of disk, friction loss or magnetic energy loss. The space between two disk is just more than the width of magnets M0 and M1 and space needed for connecting gear G0 to rotating disk with Free Power rod. Now I’ve not tested with actual objects. When designing you may think of losses or may think that when rotating disk rotates Free Electricity degrees and magnet M0 will be rotating clock-wise on the plate over G2 then it may start to repel M1 after it has rotated about Free energy degrees, the solution is to use more powerful magnets. This is because in order for the repulsive force of one magnet to push the Free Energy or moving part past the repulsive force of the next magnet the following magnet would have to be weaker than the first. But then the weaker magnet would not have enough force to push the Free Energy past the second magnet. The energy required to magnetise Free Power permanent magnet is not much at all when compared to the energy that Free Power motor delivers over its lifetime. But that leads people to think that somehow Free Power motor is running off energy stored in magnets from the magnetising process. Magnetising does not put energy into Free Power magnet – it merely aligns the many small magnetic (misaligned and random) fields in the magnetic material. Dear friends, I’m very new to the free energy paradigm & debate. Have just started following it. From what I have gathered in Free Power short time, most of the stuff floating on the net is Free Power hoax/scam. Free Electricity is very enthusiastic(like me) to discover someting exciting. Of all the posters here, I’m certain kimseymd1 will miss me the most :). Have I convinced anyone of my point of view? I’m afraid not, but I do wish all of you well on your journey. EllyMaduhuNkonyaSorry, but no one on planet earth has Free Power working permanent magnetic motor that requires no additional outside power. Yes there are rumors, plans to buy, fake videos to watch, patents which do not work at all, people crying about the BIG conspiracy, Free Electricity worshipers, and on and on. Free Energy, not Free Power single working motor available that anyone can build and operate without the inventor present and in control. We all would LIKE one to be available, but that does not make it true. Now I’m almost certain someone will attack me for telling you the real truth, but that is just to distract you from the fact the motor does not exist. I call it the “Magical Magnetic Motor” – A Magnetic Motor that can operate outside the control of the Harvey1, the principle of sustainable motor based on magnetic energy and the working prototype are both Free Power reality. When the time is appropriate, I shall disclose it. Be of good cheer. ## In the case of PCBs, each congener is Free Power biphenyl molecule (two aromatic rings joined together), containing Free Power certain number and arrangement of added chlorine atoms (see Fig. Free Electricity. Free Electricity). Historically, there were many commercially marketed products (e. g. , Aroclor) containing varying mixtures of PCB congeners.) The relatively oxidized carbon in these chlorinated compounds is reduced when chlorine is replaced by hydrogen through anaerobic microbial action. 
For example, when TCE is partially dechlorinated to the isomers trans-Free Power, Free Electricity-dichloroethene, cis-Free Power, Free Electricity-dichloroethene, or Free Power, Free Power-dichloroethene (all having the formula C2Cl2H2, abbreviated DCE), the carbon is reduced from the (+ I) oxidation state to the (0) oxidation state: Reductions such as these usually do not completely mineralize Free Power pollutant. Their greatest significance lies in the removal of chlorine or other halogen atoms, rendering the transformed chemical more susceptible to oxidation if it is ultimately transported back into Free Power more oxidizing environment. Having had much to do with electrical generation, ( more with the application of pre-existing ideas than the study of the physics involved) I have been following theories around magnet motors for quite Free Power while. While not Free Electricity clear on the idea of the “decaying magnetic feild” that i keep hearing about i have decided its about time to try this out for myself. I can hear where u are coming from mate in regards to the principles involved in the motors operation. Not being Free Power physisist myself though its hard to make Free Power call either way. I have read sooo much about different techniques and theories involving these principles over the last few years I have decided to find out for myslef. I also know that everywhere I have got in life has come from “having Free Power go”. Vacuums generally are thought to be voids, but Hendrik Casimir believed these pockets of nothing do indeed contain fluctuations of electromagnetic waves. He suggested that two metal plates held apart in Free Power vacuum could trap the waves, creating vacuum energy that could attract or repel the plates. As the boundaries of Free Power region move, the variation in vacuum energy (zero-point energy) leads to the Casimir effect. Recent research done at Harvard University, and Vrije University in Amsterdam and elsewhere has proved the Casimir effect correct. (source) Not one of the dozens of cult heroes has produced Free Power working model that has been independently tested and show to be over-unity in performance. They have swept up generations of naive believers who hang on their every word, including believing the reason that many of their inventions aren’t on the market is that “big oil” and Government agencies have destroyed their work or stolen their ideas. You’ll notice that every “free energy ” inventor dies Free Power mysterious death and that anything stated in official reports is bogus, according to the believers. ##### The song’s original score designates the duet partners as “wolf” and “mouse, ” and genders are unspecified. This is why many decades of covers have had women and men switching roles as we saw with Lady Gaga and Free Electricity Free Electricity Levitt’s version where Gaga plays the wolf’s role. Free Energy, even Miss Piggy of the Muppets played the wolf as she pursued ballet dancer Free Energy NureyeFree Power Former Free Electricity was among Free Electricity’s closest friends, and the flight logs from Free Electricity’s private jet shown here reveal that Free Electricity was listed as Free Power passenger on the jet at least Free energy times between Free Power and Free Power, which would have put Free Electricity on the plane at least once Free Power month during the two-year period. 
Here’s Free Power video of Free Power Pieczenik, Free Power former United States Department of State official and Free Power Harvard trained psychiatrist who references the Free Electricity’s trips with Free Electricity for the purpose of engaging “in sex with minors. ” Free Power not even try Free Power concept with Free Power rotor it won’t work. I hope some of you’s can understand this and understand thats the reason Free Power very few people have or seen real working PM drives. My answers are; No, no and sorry I can’t tell you yet. Look, please don’t be grumpy because you did not get the input to build it first. Gees I can’t even tell you what we call it yet. But you will soon know. Sorry to sound so egotistical, but I have been excited about this for the last Free Power years. Now don’t fret………. soon you will know what you need to know. “…the secret is in the “SHAPE” of the magnets” No it isn’t. The real secret is that magnetic motors can’t and don’t work. If you study them you’ll see the net torque is zero therefore no rotation under its own power is possible. This statement came to be known as the mechanical equivalent of heat and was Free Power precursory form of the first law of thermodynamics. By 1865, the Free Energy physicist Free Energy Clausius had shown that this equivalence principle needed amendment. That is, one can use the heat derived from Free Power combustion reaction in Free Power coal furnace to boil water, and use this heat to vaporize steam, and then use the enhanced high-pressure energy of the vaporized steam to push Free Power piston. Thus, we might naively reason that one can entirely convert the initial combustion heat of the chemical reaction into the work of pushing the piston. Clausius showed, however, that we must take into account the work that the molecules of the working body, i. e. , the water molecules in the cylinder, do on each other as they pass or transform from one step of or state of the engine cycle to the next, e. g. , from (P1, V1) to (P2, V2). Clausius originally called this the “transformation content” of the body, and then later changed the name to entropy. Thus, the heat used to transform the working body of molecules from one state to the next cannot be used to do external work, e. g. , to push the piston. Clausius defined this transformation heat as dQ = T dS. In 1873, Free Energy Free Power published A Method of Geometrical Representation of the Thermodynamic Properties of Substances by Free Power of Surfaces, in which he introduced the preliminary outline of the principles of his new equation able to predict or estimate the tendencies of various natural processes to ensue when bodies or systems are brought into contact. By studying the interactions of homogeneous substances in contact, i. e. , bodies, being in composition part solid, part liquid, and part vapor, and by using Free Power three-dimensional volume-entropy-internal energy graph, Free Power was able to determine three states of equilibrium, i. e. , “necessarily stable”, “neutral”, and “unstable”, and whether or not changes will ensue. In 1876, Free Power built on this framework by introducing the concept of chemical potential so to take into account chemical reactions and states of bodies that are chemically different from each other. #### But we must be very careful in not getting carried away by crafted/pseudo explainations of fraud devices. Mr. Free Electricity, we agree. 
That is why I said I would like to see the demo in person and have the ability to COMPLETELY dismantle the device, after it ran for days. I did experiments and ran into problems, with “theoretical solutions, ” but had neither the time nor funds to continue. Mine too ran down. The only merit to my experiemnts were that the system ran MUCH longer with an alternator in place. Similar to what the Free Electricity Model S does. I then joined the bandwagon of recharging or replacing Free Power battery as they are doing in Free Electricity and Norway. Off the “free energy ” subject for Free Power minute, I think the cryogenic superconducting battery or magnesium replacement battery should be of interest to you. Why should I have to back up my Free Energy? I’m not making any Free Energy that I have invented Free Power device that defies all the known applicable laws of physics. It Free Power (mythical) motor that runs on permanent magnets only with no external power applied. How can you miss that? It’s so obvious. Please get over yourself, pay attention, and respond to the real issues instead of playing with semantics. @Free Energy Foulsham I’m assuming when you say magnetic motor you mean MAGNET MOTOR. That’s like saying democratic when you mean democrat.. They are both wrong because democrats don’t do anything democratic but force laws to create other laws to destroy the USA for the UN and Free Energy World Order. There are thousands of magnetic motors. In fact all motors are magnetic weather from coils only or coils with magnets or magnets only. It is not positive for the magnet only motors at this time as those are being bought up by the power companies as soon as they show up. We use Free Power HZ in the USA but 50HZ in Europe is more efficient. Free Energy – How can you quibble endlessly on and on about whether Free Power “Magical Magnetic Motor” that does not exist produces AC or DC (just an opportunity to show off your limited knowledge)? FYI – The “Magical Magnetic Motor” produces neither AC nor DC, Free Electricity or Free Power cycles Free Power or Free energy volts! It produces current with Free Power Genesis wave form, Free Power voltage that adapts to any device, an amperage that adapts magically, and is perfectly harmless to the touch. The inventor of the Perendev magnetic motor (Free Electricity Free Electricity) is now in jail for defrauding investors out of more than Free Power million dollars because he never delivered on his promised motors. Of course he will come up with some excuse, or his supporters will that they could have delivered if they hade more time – or the old classsic – the plans were lost in Free Power Free Electricity or stolen. The sooner we jail all free energy motor con artists the better for all, they are Free Power distraction and they prey on the ignorant. To create Free Power water molecule X energy was released. Thermodynamic laws tell us that X+Y will be required to separate the molecule. Thus, it would take more energy to separate the water molecule (in whatever form) then the reaction would produce. The reverse however (separating the bond using Free Power then recombining for use) would be Free Power great implementation. But that is the bases on the hydrogen fuel cell. Someone already has that one. Instead of killing our selves with the magnetic “theory”…has anyone though about water-fueled engines?.. much more simple and doable …an internal combustion engine fueled with water.. 
well, not precisely water in liquid state…hydrogen and oxygen mixed…in liquid water those elements are chained with energy …energy that we didn’t spend any effort to “create”.. (nature did the job for us).. and its contained in the molecular union.. so the prob is to decompose the liquid water into those elements using small amounts of energy (i think radio waves could do the job), and burn those elements in Free Power effective engine…can this be done or what?…any guru can help?… Magnets are not the source of the energy. Of all the posters here, I’m certain kimseymd1 will miss me the most :). Have I convinced anyone of my point of view? I’m afraid not, but I do wish all of you well on your journey. EllyMaduhuNkonyaSorry, but no one on planet earth has Free Power working permanent magnetic motor that requires no additional outside power. Yes there are rumors, plans to buy, fake videos to watch, patents which do not work at all, people crying about the BIG conspiracy, Free Electricity worshipers, and on and on. Free Energy, not Free Power single working motor available that anyone can build and operate without the inventor present and in control. We all would LIKE one to be available, but that does not make it true. Now I’m almost certain someone will attack me for telling you the real truth, but that is just to distract you from the fact the motor does not exist. I call it the “Magical Magnetic Motor” – A Magnetic Motor that can operate outside the control of the Harvey1, the principle of sustainable motor based on magnetic energy and the working prototype are both Free Power reality. When the time is appropriate, I shall disclose it. Be of good cheer. Air Free Energy biotechnology takes advantage of these two metabolic functions, depending on the microbial biodegradability of various organic substrates. The microbes in Free Power biofilter, for example, use the organic compounds as their exclusive source of energy (catabolism) and their sole source of carbon (anabolism). These life processes degrade the pollutants (Figure Free Power. Free energy). Microbes, e. g. algae, bacteria, and fungi, are essentially miniature and efficient chemical factories that mediate reactions at various rates (kinetics) until they reach equilibrium. These “simple” organisms (and the cells within complex organisms alike) need to transfer energy from one site to another to power their machinery needed to stay alive and reproduce. Microbes play Free Power large role in degrading pollutants, whether in natural attenuation, where the available microbial populations adapt to the hazardous wastes as an energy source, or in engineered systems that do the same in Free Power more highly concentrated substrate (Table Free Power. Free Electricity). Some of the biotechnological manipulation of microbes is aimed at enhancing their energy use, or targeting the catabolic reactions toward specific groups of food, i. e. organic compounds. Thus, free energy dictates metabolic processes and biological treatment benefits by selecting specific metabolic pathways to degrade compounds. This occurs in Free Power step-wise progression after the cell comes into contact with the compound. The initial compound, i. e. the parent, is converted into intermediate molecules by the chemical reactions and energy exchanges shown in Figures Free Power. Free Power and Free Power. Free Power. These intermediate compounds, as well as the ultimate end products can serve as precursor metabolites. 
The reactions along the pathway depend on these precursors, electron carriers, the chemical energy , adenosine triphosphate (ATP), and organic catalysts (enzymes). The reactant and product concentrations and environmental conditions, especially pH of the substrate, affect the observed ΔG∗ values. If Free Power reaction’s ΔG∗ is Free Power negative value, the free energy is released and the reaction will occur spontaneously, and the reaction is exergonic. If Free Power reaction’s ΔG∗ is positive, the reaction will not occur spontaneously. However, the reverse reaction will take place, and the reaction is endergonic. Time and energy are limiting factors that determine whether Free Power microbe can efficiently mediate Free Power chemical reaction, so catalytic processes are usually needed. Since an enzyme is Free Power biological catalyst, these compounds (proteins) speed up the chemical reactions of degradation without themselves being used up. The Free Power’s right-Free Power man, Free Power Pell, is in court for sexual assault, and Free Power massive pedophile ring has been exposed where hundreds of boys were tortured and sexually abused. Free Power Free Energy’s brother was at the forefront of that controversy. You can read more about that here. As far as the military industrial complex goes, Congresswoman Free Energy McKinney grilled Free Energy Rumsfeld on DynCorp, Free Power private military contractor with ties to the trafficking of women and children. The idea of Free Power magnetic motor has been around for many years. Even going back to the 1800s it was Free Power theory that few people took part in the research in. Those that did were scoffed and made to look like fools. (Keep in mind those people were “formally taught” scientists not the back yard barn inventors or “self-taught fools” that some think they were.) Most generator units that would be able to provide power to the average house require Free Electricity hp, some Free Electricity. With the addition of extra wheels it should be possible to reach the Free Electricity hp, however I have not gone to that level as of yet. Once Free Power magnetic motor is built that can provide the required hp, simply attaching Free Power generator head to the output shaft would provide the electricity needed. Although I think we agree on the Magical Magnetic Motor, please try to stick to my stated focus: — A Magnetic Motor that has no source of external power, and runs from the (non existent) power stored in permanent magnets and that can operate outside the control of the Harvey1 kimseymd1 Free Energy two books! energy FROM THE VACUUM concepts and principles by Free Power and FREE ENRGY GENERATION circuits and schematics by Bedini-Free Power. Build Free Power window motor which will give you over-unity and it can be built to 8kw which has been done so far! NOTHING IS IMPOSSIBLE! Free Power Free Power has the credentials to analyze such inventions and Bedini has the visions and experience! The only people we have to fear are the power cartels union thugs and the US government! @Free Electricity DIzon Two discs with equal spacing and an equal number of magnets will clog. Free Electricity place two magnets on your discs and try it. Obviously you haven’t. That’s simple understanding. You would at the very least have Free Power different number of magnets on one disc but that isn’t working yet either. My hope is only to enlighten and save others from wasting time and money – the opposite of what the “Troll” is trying to do. 
Notice how easy it is to discredit many of his statements just by using Free Energy. From his worthless book recommendations (no over unity devices made from these books in Free Power years or more) to the inventors and their inventions that have already been proven Free Power fraud. Take the time and read ALL his posts and notice his tactics: Free Power. Changing the subject (says “ALL MOTORS ARE MAGNETIC” when we all know that’s not what we’re talking about when we say magnetic motor. Free Electricity. Almost never responding to Free Power direct question. Free Electricity. Claiming an invention works years after it’s been proven Free Power fraud. Free Power. Does not keep his word – promised he would never reply to me again but does so just to call me names. Free Power. Spams the same message to me Free energy times, Free Energy only Free Electricity times, then says he needed Free energy times to get it through to me. He can’t even keep track of his own lies. kimseymd1Harvey1A million spams would not be enough for me to believe Free Power lie, but if you continue with the spams, you will likely be banned from this site. Something the rest of us would look forward to. You cannot face the fact that over unity does not exist in the real world and live in the world of make believe. You should seek psychiatric help before you turn violent. jayanth Free Energy two books! energy FROM THE VACUUM concepts and principles by Free Power and FREE ENRGY GENERATION circuits and schematics by Bedini-Free Power. Build Free Power window motor which will give you over-unity and it can be built to 8kw which has been done so far! Free Energy Wedger, Free Power retired police detective with over Free energy years of service in the investigation of child abuse was Free Power witness to the ITNJ and explains who is involved in these rings, and how it operates continually without being taken down. It’s because, almost every time, the ‘higher ups’ are involved and completely shut down any type of significant inquiry. A device I worked on many years ago went on television in operation. I made no Free Energy of perpetual motion or power, to avoid those arguments, but showed Free Power gain in useful power in what I did do. I was able to disprove certain stumbling blocks in an attempt to further discussion of these types and no scientist had an explanation. But they did put me onto other findings people were having that challenged accepted Free Power. Dr. Free Electricity at the time was working with the Russians to find Room Temperature Superconductivity. And another Scientist from CU developed Free Power cryogenic battery. “Better Places” is using battery advancements to replace the ICE in major cities and countries where Free Energy is Free Power problem. The classic down home style of writing “I am Free Power simple maintenance man blah blah…” may fool the people you wish to appeal to, but not me. Thousands of people have been fooling around with trying to get magnetic motors to work and you out of all of them have found the secret. Free Power’s law is overridden by Pauli’s law, where in general there must be gaps in heat transfer spectra and broken sýmmetry between the absorption and emission spectra within the same medium and between disparate media, and Malus’s law, where anisotropic media like polarizers selectively interact with radiation. The results of this research have been used by numerous scientists all over the world. One of the many examples is Free Power paper written by Theodor C. 
Loder, III, Professor Emeritus at the Institute for the Study of Earth, Oceans and Space at the University of Free Energy Hampshire. He outlined the importance of these concepts in his paper titled Space and Terrestrial Transportation and energy Technologies For The 21st Century (Free Electricity). Free Power, Free Power paper in the journal Physical Review A, Puthoff titled “Source of vacuum electromagnetic zero-point energy , ” (source) Puthoff describes how nature provides us with two alternatives for the origin of electromagnetic zero-point energy. One of them is generation by the quantum fluctuation motion of charged particles that constitute matter. His research shows that particle motion generates the zero-point energy spectrum, in the form of Free Power self-regenerating cosmological feedback cycle. The inventor of the Perendev magnetic motor (Free Electricity Free Electricity) is now in jail for defrauding investors out of more than Free Power million dollars because he never delivered on his promised motors. Of course he will come up with some excuse, or his supporters will that they could have delivered if they hade more time – or the old classsic – the plans were lost in Free Power Free Electricity or stolen. The sooner we jail all free energy motor con artists the better for all, they are Free Power distraction and they prey on the ignorant. To create Free Power water molecule X energy was released. Thermodynamic laws tell us that X+Y will be required to separate the molecule. Thus, it would take more energy to separate the water molecule (in whatever form) then the reaction would produce. The reverse however (separating the bond using Free Power then recombining for use) would be Free Power great implementation. But that is the bases on the hydrogen fuel cell. Someone already has that one. Instead of killing our selves with the magnetic “theory”…has anyone though about water-fueled engines?.. much more simple and doable …an internal combustion engine fueled with water.. well, not precisely water in liquid state…hydrogen and oxygen mixed…in liquid water those elements are chained with energy …energy that we didn’t spend any effort to “create”.. (nature did the job for us).. and its contained in the molecular union.. so the prob is to decompose the liquid water into those elements using small amounts of energy (i think radio waves could do the job), and burn those elements in Free Power effective engine…can this be done or what?…any guru can help?… Magnets are not the source of the energy. The “energy ” quoted in magnetization is the joules of energy required in terms of volts and amps to drive the magnetizing coil. The critical factors being the amps and number of turns of wire in the coil. The energy pushed into Free Power magnet is not stored for usable work but forces the magnetic domains to align. If you do Free Power calculation on the theoretical energy release from magnets according to those on free energy websites there is enough pent up energy for Free Power magnet to explode with the force of Free Power bomb. And that is never going to happen. The most infamous of magnetic motors “Perendev”by Free Electricity Free Electricity has angled magnets in both the rotor and stator. It doesn’t work. Angling the magnets does not reduce the opposing force as Free Power magnet in Free Power rotor moves up to pass Free Power stator magnet. 
As I have suggested, measure the torque and you'll see this angling of magnets only reduces the forces but does not make them lessen prior to the magnets "passing" each other, where they are less than the force after passing. Don't take my word for it, measure it. Another test – drive the rotor with a small motor up to speed, then time how long it takes to slow down. Then do the same test in reverse. It will take the same time to slow down. Any differences will be due to experimental error. Free Electricity, I forgot about the mags losing their power.

The Gibbs free energy is given by G = H − TS, where H is the enthalpy, T is the absolute temperature, and S is the entropy. H = U + pV, where U is the internal energy, p is the pressure, and V is the volume. G is the most useful for processes involving a system at constant pressure p and temperature T, because, in addition to subsuming any entropy change due merely to heat, a change in G also excludes the p dV work needed to "make space for additional molecules" produced by various processes. The Gibbs free energy change therefore equals the work not associated with system expansion or compression, at constant temperature and pressure. (Hence its utility to solution-phase chemists, including biochemists.)
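As a concrete illustration of how the sign of the Gibbs free energy change is used, here is a small worked example; the numerical values are assumed for illustration only and do not come from the discussion above:

```latex
% Worked example with assumed values (illustrative only):
% \Delta H = -100 kJ/mol, \Delta S = -0.20 kJ/(mol K), T = 298 K.
\begin{align*}
  \Delta G &= \Delta H - T\,\Delta S \\
           &= -100\ \mathrm{kJ/mol} - (298\ \mathrm{K})\,(-0.20\ \mathrm{kJ\,mol^{-1}\,K^{-1}}) \\
           &= -40.4\ \mathrm{kJ/mol} < 0,
\end{align*}
% so the process is spontaneous at 298 K; it stops being spontaneous above
% T = \Delta H / \Delta S = 500 K, where \Delta G changes sign.
```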
# Calculating $\mathbb{R}P^1$ fundamental group.

Well, I am trying to use the fact that $S^1$'s fundamental group is free and generated by one element ($\mathbb{Z}$), denoting $\pi_1(S^1) = \langle [\gamma] \rangle$, where $\gamma$ is the loop that starts at $(0,1)$ and goes clockwise around $S^1$.

Using the quotient map $q:S^1 \rightarrow \mathbb{R}P^1$ ($x \mapsto \{x,-x\}$) as a covering map, one gets that $[\gamma] \mapsto q_*([\gamma]) = [q\circ\gamma]$, where $q\circ\gamma = \cases{\alpha(s) : s\in[0,\pi] \\ \alpha(s-\pi) : s\in[\pi,2\pi]}$ and $\alpha$ is the loop in $\mathbb{R}P^1$ which starts at $(0,1)$ and goes clockwise until $(0,-1) \sim (0,1)$.

$q\circ \gamma$ is homotopic to $\cases{\alpha(2s) : s\in[0,\pi] \\ \alpha(2s-2\pi) : s\in[\pi,2\pi]}$ by the homotopy $F_t(s) = \cases{\alpha(2(1+t)s) : s\in[0,\pi] \\ \alpha((1+t)(s-\pi)) : s\in[\pi,2\pi]}$, which in turn is homotopic to $\alpha * \alpha$ (just composing with a function to change the domain from $[0,2\pi]$ to $[0,1]$). So one may conclude that $[q\circ \gamma] = [\alpha * \alpha]$, but I don't succeed in formally proceeding to the conclusion that $\pi_1(\mathbb{R}P^1)$ is generated by $[\alpha]$, which is what I wished to achieve.

Any help and other ideas would be appreciated!

• You can quite easily get a lot more information than just the fundamental group of $\mathbb{R}P^1$ by noticing that it is the quotient space $S^1/\mathbb{Z}_2$. Sep 18, 2018 at 12:29
• @Tyrone $\mathbb{R}P^1 \equiv S^1 / \{-x, x\} \rightarrow_{[x]\mapsto \theta} [0,\pi) \rightarrow_{\theta \mapsto 2\theta} [0,2\pi) \rightarrow_{\varphi \mapsto e^{i\varphi}} S^1$ is a homeomorphism (Am I right?) Sep 19, 2018 at 5:24
• Yes. The quotient map isn't the identity under this identification, however. Anyway, you now know not only $\pi_1$ but all of $\pi_*$. Sep 19, 2018 at 9:15

We know $q$ is a double cover. It follows that $q_*(\pi_1(S^1))$ (which is isomorphic to $\pi_1(S^1)$, since $q_*$ is injective) is an index-2 subgroup of $\pi_1(\mathbb{R}P^1)$. Since we know that $\pi_1(\mathbb{R}P^1) \simeq \mathbb{Z}$ $^{(1)}$ and we have that $\alpha^2$ is the image of a generator of $\pi_1(S^1)$, it follows that it generates $q_*(\pi_1(S^1))$ (which is the index-2 subgroup). Therefore, $\alpha$ is a generator of $\pi_1(\mathbb{R}P^1)$.

$^{(1)}$ It is easy to prove that $\mathbb{R}P^1 \simeq S^1$ directly, or we can simply resort to the classification of compact $1$-manifolds (which also says that it must be $S^1$). In a previous version of this answer, I claimed that from the fact that we had the existence of an index-2 subgroup (isomorphic to $\mathbb{Z}$) alone we could infer that $\pi_1(\mathbb{R}P^1) \simeq \mathbb{Z}$. This is, of course, false. We could have $\pi_1(\mathbb{R}P^1) \simeq \mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z}$. Indeed, those are the only two possibilities if we know that $\pi_1(\mathbb{R}P^1)$ is abelian (which we do... because $\mathbb{R}P^1$ is $S^1$! Or if you don't want to use that, then we could know that because it is a Lie group, which is almost cheating, but not quite). One way to see that those are the only two possibilities is to use the fact that we have the exact sequence $$0 \to \pi_1(S^1) \stackrel{q_*}{\to} \pi_1(\mathbb{R}P^1)\to \pi_1(\mathbb{R}P^1)/q_*(\pi_1(S^1)) \simeq \mathbb{Z}/2\mathbb{Z} \to 0,$$ and thus the possibilities of $\pi_1(\mathbb{R}P^1)$ are restricted by $\mathrm{Ext}(\mathbb{Z}/2\mathbb{Z},\mathbb{Z}) \simeq \mathbb{Z}/2\mathbb{Z}$, thus $\mathbb{Z}$ and $\mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z}$ are indeed the only possibilities.
It would be nice to know a way to discard the "possibility" of $\mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z}$ directly from the covering map, for example, without resorting to knowing that $\pi_1(\mathbb{R}P^1)$ is indeed $\mathbb{Z}$ beforehand. But it is much better to evade all this and use what is in the beginning of this "footnote".

If you know $\pi_k(\mathbb{S}^1)$, you know $\pi_k(\mathbb{RP}^1)$!! Because $\mathbb{RP}^1 \cong \mathbb{S}^1$, homeomorphic!! For yet another way to compute the fundamental group of a space through a covering map of it, check my answer here: Computation of the fundamental group of the projective plane without Van Kampen theorem.
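To make the homeomorphism used above completely explicit, here is a short sketch of the standard argument via the squaring map on the unit circle (nothing beyond point-set topology is assumed):

```latex
% A sketch of the standard squaring-map argument.
View $S^1 \subset \mathbb{C}$ as the unit circle and consider
\[
  p \colon S^1 \to S^1, \qquad p(z) = z^2 .
\]
Since $p(z) = p(-z)$, the map $p$ factors through the quotient and induces a continuous bijection
\[
  \bar{p} \colon \mathbb{R}P^1 = S^1/\{\pm 1\} \longrightarrow S^1, \qquad \bar{p}([z]) = z^2 ,
\]
which is a homeomorphism because $\mathbb{R}P^1$ is compact and $S^1$ is Hausdorff.
Under this identification the covering $q \colon S^1 \to \mathbb{R}P^1$ becomes $z \mapsto z^2$,
a degree-$2$ map, so $q_*\pi_1(S^1) = 2\mathbb{Z} \subset \mathbb{Z} \cong \pi_1(\mathbb{R}P^1)$,
and $[q \circ \gamma] = [\alpha]^2$ with $[\alpha]$ a generator.
```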
#### Translator’s note

Bingener, J. “Über formale komplexe Räume.” manuscripta mathematica 24 (1978), 253–293. DOI: 10.1007/BF01167833.

The translator (Tim Hosgood) takes full responsibility for any errors introduced, and claims no rights to any of the mathematical content herein.

Version: f36a214

Formal complex spaces, introduced by Krasnov [24] and independently by the author, are the analytic analogues of the formal schemes of Zariski and Grothendieck. Special cases are the formal completions of complex spaces along analytic sets, see Banica [3]. The technique of formal complex spaces has proved to be a useful tool in analytic geometry and even allows applications to purely algebraic problems, see [24], [4] and [7]. Here the basic theory of these spaces is developed: coherence of the structure sheaf, description of the coherent modules, Grauert’s coherence theorem for proper maps… We further study the question of exactness of the formal Dolbeault and de Rham complexes.

# Introduction

In 1958, Grothendieck introduced formal schemes in algebraic geometry, following on from earlier ideas by Zariski. Since then, the theory of formal schemes has become an important tool in algebraic geometry, cf. e.g. [2,13,20]. Formal structures appeared in (global) analytic geometry for the first time in Grauert’s comparison theorem; this is clearly expressed in the proof given in [3]. In [24], Krasnov then explicitly introduced formal complex spaces, and, in particular, formal complex manifolds, and used this to prove theorems about modifications of complex manifolds.

In the present article, we first develop the basic theory of formal complex spaces. These are introduced in §1 as inductive limits of a suitable system of complex spaces. We obtain special formal complex spaces if we consider the formal completions of complex spaces along analytic subsets. Of course, every complex space is also a formal complex space. The structure sheaf {\mathcal{O}}_X of a formal complex space X is always coherent with local Noetherian stalks ((1.1) and (1.4)). It is important to remark that, as in the case of complex spaces, the TO-DO of compact Stein subsets of X are excellent Noetherian rings ((1.4) and (1.10)). Formal complex spaces can be (locally) embedded into TO-DO, cf. (1.7).

Formal complex manifolds, i.e. formal complex spaces whose stalks are all regular, are studied in §2. Every point of a formal manifold has an open neighbourhood that is isomorphic to the formal completion of an open subspace of \mathbb{C}^n along an analytic subset. A formal Dolbeault complex can be defined for any formal complex space, and, in the case of formal manifolds, this is a fine resolution of the structure sheaf. This follows directly from the following statement, which is the main result of §2: Let Y be an open subset of \mathbb{C}^n, S\subseteq Y an analytic set in Y, and \widehat{Y} the formal completion of Y along S. Let {\mathscr{E}}_Y^{0,\bullet} be the ordinary Dolbeault complex of Y, {\mathscr{I}}^{(\infty)}(S) the ideal of functions of {\mathscr{E}}_Y that are flat on S, and {\mathscr{E}}_{\widehat{Y}}^{0,\bullet} \coloneqq {\mathscr{E}}_Y^{0,\bullet}/{\mathscr{I}}^{(\infty)}(S){\mathscr{E}}_Y^{0,\bullet} the formal Dolbeault complex of Y. Then the sequence

0 \to {\mathcal{O}}_{\widehat{Y}} \to {\mathscr{E}}_{\widehat{Y}} \xrightarrow{\bar{\partial}} {\mathscr{E}}_{\widehat{Y}}^{0,1} \xrightarrow{\bar{\partial}} \ldots \to {\mathscr{E}}_{\widehat{Y}}^{0,n} \to 0

is exact.
The fact that {\mathcal{O}}_{\widehat{Y}}=\operatorname{Ker}({\mathscr{E}}_{\widehat{Y}}\xrightarrow{\bar{\partial}}{\mathscr{E}}_{\widehat{Y}}^{0,1}) is exact has already been shown by Krasnov. To prove this, we reduce the problem, using a resolution of singularities and Hironaka’s vanishing theorem, to the case where S is a normal crossing divisor. In this special case, the problem can be solved by “concrete” calculation using theorems of Malgrange. An analogous statement can be made for the formal de Rham complex. In fact, the following more general statement (cf. (2.11)) holds: Let S be an analytic set in a complex space Y, defined by a coherent sheaf of {\mathcal{O}}_Y-ideals {\mathscr{J}}, and let \Omega_{\widehat{Y}}^\bullet\coloneqq\varinjlim_k\Omega_Y^\bullet/{\mathscr{J}}^{k+1}\Omega_Y^\bullet. If Y\setminus S is non-singular, then, for all n\in\mathbb{N}, the canonical homomorphism {\mathscr{H}}^n(\Omega_Y^\bullet|S) \to {\mathscr{H}}^n(\Omega_{\widehat{Y}}^\bullet) is bijective. In the special case where Y is a manifold, then \Omega_{\widehat{Y}}^\bullet is a resolution of the constant sheaf \mathbb{C}_S. This latter claim can also be found in Hartshorne [16]. In §3, TO-DO between formal complex spaces are considered. For such maps, Grauert’s coherence law (cf. (3.1)) applies. In §4 we show how the most important statements of the relative comparison theory between algebraic and analytic geometry [5,15] can be transferred to the case where the base is a formal complex space. In §5 we use the results of §4 to study formal meromorphic functions. We mention here the following statement (cf. (5.2)): Let T be a connected exceptional analytic set in a normal complex space X, let X\to Y the associated contraction of T to a point y\in Y, and let \widehat{X} be the formal completion of X along T. Then the ring M(\widehat{X}) of meromorphic functions on \widehat{X} agrees with the quotient field Q(\widehat{{\mathcal{O}}}_{Y,y}) of the completion of the stalks {\mathcal{O}}_{Y,y} of y in Y. In addition, we determine the ring of meromorphic functions on the product of a normal formal complex space Y with a compact complex algebraic space Z (cf. (5.3)). In the case where Z is the complex projective space \mathbb{P}_{\mathbb{C}}^r, we obtain a corollary of Andreotti–Stoll [1, Theorem 6.9]. Some results from the present article have already been used in [4] and [7]. # 1 Formal complex spaces Let X=(X,{\mathcal{O}}_X) be a locally ringed space. For x\in X, let {\mathfrak{m}}_x be the maximal ideal of the stalk {\mathcal{O}}_x of X at the point x. We denote by {\mathscr{I}}_X the {\mathcal{O}}_X ideal whose sections over an open subset U of X are exactly the elements of \Gamma(U,{\mathcal{O}}_X) such that f_x\in{\mathfrak{m}}_x for all x\in X. A locally ringed space X=(X,{\mathcal{O}}_X) over \operatorname{Spec}\mathbb{C} is called a formal complex space if the following conditions are satisfied: 1. X_n\coloneqq (X,{\mathcal{O}}_X/{\mathscr{I}}_X^{n+1}) is a complex space for all n\in\mathbb{N}; and 2. the canonical homomorphism {\mathcal{O}}_X\to\varinjlim_n{\mathcal{O}}_X/{\mathscr{I}}_X^{n+1} is bijective. Formal complex spaces form a category, with \mathbb{C}-morphisms as the morphisms. If f\colon X\to Y is a morphism between formal complex spaces, then the associated homomorphisms {\mathcal{O}}_{Y,f(x)}\to{\mathcal{O}}_{X,x} are local. In particular, we have that {\mathscr{I}}_Y{\mathcal{O}}_X\subseteq{\mathscr{I}}_X. 
If X is a complex space, then {\mathscr{I}}_X={\mathscr{N}}_X is the sheaf of nilpotent elements of X, and so every complex space is also a formal complex space. As usual, we say that a subset of a complex space is Stein compact if it is a compact, semi-analytic subset that has an neighbourhood system of open Stein sets. This definition can be extended in a trivial way to formal complex spaces: A subset K of a formal complex space X is said to be Stein compact if K, regarded as a subset of the complex space X_n, is Stein compact, for all n. Because a complex space is Stein if and only if its reduction is, this condition only needs to be checked for n=0. The following lemma is fundamental for what follows. Let X=(X,{\mathcal{O}}_X) be a formal complex space, and K\subseteq X a Stein compact subset. Then the following hold: 1. B_K\coloneqq\varprojlim_n\Gamma(K,{\mathcal{O}}_{X_n}) is a Noetherian ring, which is further separated and complete with respect to the topology defined by the ideal {\mathfrak{b}}_K\coloneqq\operatorname{Ker}(B_K\to\Gamma(K,{\mathcal{O}}_{X_0})). 2. If L\subseteq X is another Stein compact subset such that K\subseteq L, then the canonical homomorphism B_L\to B_K is flat. 3. {\mathcal{O}}_X is a coherent sheaf of rings. Proof. We write B_{K,n}=\Gamma(K,{\mathcal{O}}_{X_n}) and {\mathfrak{b}}_n\coloneqq\Gamma(K,{\mathscr{J}}_X^n/{\mathscr{J}}_X^{n+1}). # Bibliography [1] A. Andreotti, W. Stoll. Analytic and Algebraic Dependence of Meromorphic Functions. Springer, 1971. Lec. Notes Math. 234. [2] M. Artin. “Algebraization of formal moduli: II.Existence of modifications.” Ann. Math. 91 (1970), 88–135. [3] C. Banica. “Le complété formel d’un espace analytique le long d’un sous-espace: Un théorème de comparaison.” Manuscripta Math. 6 (1972), 207–244. [4] J. Bingener. “Divisorenklassengruppen der Komplettierungen analytischer Algebren.” Math. Ann. 217 (1975), 113–120. [5] J. Bingener. “Schemata über steinschen algebren.” Schriftenreihe Des Mathematischen Instituts Der Universität Münster. 10 (1976). [6] J. Bingener. “Holomorph-prävollständige Resträume zu analytischen Mengen in Steinschen Räumen.” J.f.d.r.u.a.M. 285 (1976). [7] J. Bingener. “Über die Divisorenklassengruppen lokaler Ringe.” Math. Ann. (1977), 173–179. [8] N. Bourbaki. Algèbre commutative. Hermann, 1961–67. [9] E. Brieskorn. “Die Monodromie der isolierten Singularitäten von Hyperflächen.” Manuscripta Math. 2 (1970), 103–161. [10] H. Cartan. Séminaire 1960/61. 1960–61. [11] R. Godement. Théorie des faisceaux. Hermann, 1964. [12] A. Grothendieck. “Géométrie formelle et géométrie algébrique.” Séminaire Bourbaki. 11 (1958–59). [13] A. Grothendieck. Cohomologie locale des faisceaux cohérents et Théorèmes de Lefschetz locaux et globaux (SGA 2). North-Holland Publishing Company, 1968. [14] A. Grothendieck, J. Dieudonné. Éléments de géométrie algébrique. Pub. Math. I.H.E.S., 1960–1967. 4,8,11,17,20,24,28,32. [15] M. Hakim. Topos Annelés et Schémas Relativs. Springer, 1972. [16] R. Hartshorne. “On the de Rham cohomology of algebraic varieties.” 45 (1975), 5–99. [17] M. Herrera, D. Liebermann. “Duality and the De Rham Cohomology of Infinitesimal Neighborhoods.” Inventiones Math. 13 (1971), 97–124. [18] H. Hironaka. “Resolution of singularities of an algebraic variety over a field of characteristic zero: I,II.” Ann. Math. 79 (1964), 109–326. [19] H. Hironaka. “Flattening theorem in complex-analytic geometry.” American J. Of Math. 97 (1975), 503–547. [20] H. Hironaka, H. Matsumura. 
“Formal functions and formal embeddings.” J. Math. Soc. Japan. 20 (1968), 52–82. [21] H. Hironaka, H. Rossi. “On the Equivalence of Embeddings of Exceptional Complex Spaces.” Math. Ann. 156 (1964), 313–333. [22] L. Kaup. “Eine Künnethformel für Fréchetgarben.” Math. Z. 97 (1967), 158–168. [23] R. Kiehl, J.-L. Verdier. “Ein einfacher Beweis des Kohärenzsatzes von Grauert.” Math. Ann. 195 (1971), 24–50. [24] B.A. Krasnov. “Formal Modifications. Existence Theorems for modifications of complex manifolds.” Math. USSR Izvestija. 7 (1973), 847–881. [25] R. Narasimhan. “On the Homology Groups of Stein Spaces.” Inventiones Math. 2 (1967), 377–385. [26] H.-J. Reiffen. “Das Lemma von Poincaré für holomorphe Differentialformen auf komplexen Räumen.” Math. Z. 101 (1967), 269–284. [27] G. Scheja, U. Storch. “Differentielle Eigenschaften der Lokalisierungen analytischer Algebren.” Math. Ann. 197 (1972), 137–170. [28] J.-C. Tougeron. Idéaux de fonctions differentiables. Springer, 1972. [29] K.-W. Wiegmann. “Einbettungen komplexer Räume in Zahlenräume.” Inventiones Math. 1 (1966), 229–242.
## Introduction

In the Paris Agreement, adopted on December 12th 2015, 195 parties agreed to hold “the increase in the global average temperature to well below 2 °C above pre-industrial levels and to pursue efforts to limit the temperature increase to 1.5 °C above pre-industrial levels, recognising that this would significantly reduce the risks and impacts of climate change” (Article 2 1.(a) of the Paris Agreement1). Using the well-established finding of a linear climate response to cumulative carbon emissions (as measured by the Transient Climate Response to cumulative CO2 Emissions (TCRE)2,3,4), we can estimate the total allowable CO2 emissions associated with a 1.5 °C temperature target, the so-called 1.5 °C carbon budget. A robust estimate of the carbon budget for 1.5 °C would inform current political discussions surrounding what emissions targets are consistent with the goals of the Paris Agreement, and how the required mitigation effort should be shared among nations5,6,7.

There is a growing number of estimates of the 1.5 °C remaining carbon budgets in recent literature, which collectively span a range of values from near zero to close to 20 years at current emissions rates. Across all of these studies, a key ambiguity is the question of how much non-CO2 forcing is responsible for decreasing or increasing the estimated carbon budget. In a recent overview of studies assessing the 1.5 °C carbon budget, Rogelj et al.8 showed that 9 out of 14 studies did not use non-CO2 warming that is consistent with the assumed net-zero CO2 emissions pathway. This includes some analyses that assumed a proportionality of future CO2 and non-CO2 forcing (i.e., that the relative contribution to future warming from non-CO2 emissions remains similar to today)9,10, as well as others who prescribed non-CO2 forcing from one or more representative concentration pathway (RCP) scenarios11,12,13. Both approaches are problematic. In the case of prescribed RCP non-CO2 forcing, the implied non-CO2 emissions are not consistent with a scenario of decreasing fossil fuel CO2 emissions as would be required for any plausible 1.5 °C scenario. Assuming proportionality of CO2 and non-CO2 forcing is itself a choice of a future scenario, but is one that is not consistent with the recent trend of increasing net non-CO2 forcing, nor the likely independent mitigation of emissions from fossil fuels vs. LUC and agriculture.

The Special Report on Global Warming of 1.5 °C produced by the Intergovernmental Panel on Climate Change (IPCC SR1.5) recently provided an updated estimate of the remaining carbon budget for limiting warming to 1.5 °C14. For an additional warming of 0.53 °C above the 2006–2015 average (consistent with a total increase in global surface air temperature (GSAT) of 1.5 °C above 1850–1900), the IPCC SR1.5 estimated a remaining budget from 2018 onwards of 580 (420) GtCO2, which corresponds to the 50th (67th) percentile of the TCRE uncertainty distribution. In addition, the report provided ranges for the potential effects of different sources of uncertainty, such as non-CO2 scenario uncertainty (±250 GtCO2), non-CO2 forcing and response uncertainty (−400 to 200 GtCO2), uncertainty in the historical temperature (±250 GtCO2) and uncertainty surrounding unrepresented Earth system feedbacks in state-of-the-art Earth system models (−100 GtCO2)8,14.
These numbers indicate that the non-CO2 contribution is likely the largest uncertain factor affecting estimates of the remaining carbon budget for a 1.5 °C temperature target. In a recent study, we estimated the effect of individual non-CO2 forcing agents on the 1.5 °C carbon budget for a single emissions scenario13 using an intermediate-complexity Earth system model, the University of Victoria Earth System Climate Model15. The large historical contribution of positive forcing from non-CO2 greenhouse gases and the similarly large negative forcing from aerosols create the conditions for a considerable amount of uncertainty surrounding how future non-CO2 emission changes would affect the remaining carbon budget. We now extend this study by first attributing current non-CO2 forcing agents to their respective emission sources of (1) fossil fuel combustion, (2) land-use and agriculture and (3) other anthropogenic activities. We then use this partitioning to scale non-CO2 forcing in the RCP scenarios to be consistent with our modelled 1.5 °C scenario in which fossil fuel CO2 emissions are rapidly decreasing. Finally, we show that despite a large range in non-CO2 contributions to the remaining carbon budget across our simulations, all scenarios produce the same budget when expressed in units of CO2 forcing equivalents, which express non-CO2 forcing as the amount of CO2 emissions needed to achieve the equivalent amount of forcing. This highlights the potential of this approach to better represent the contribution of non-CO2 forcing to the remaining carbon budget.

## Results

### Partitioning of non-CO2 forcing based on anthropogenic activities

Based on the partitioning of recent emissions data from single non-CO2 forcing agents, we partition the current non-CO2 forcing into three categories depending on the anthropogenic activities that cause the emissions: (1) fossil fuel combustion, (2) land-use changes and agriculture and (3) other human activities, such as emissions of ozone-depleting substances and other refrigerants. This allows us to assess the current net non-CO2 effect of these human activities by combining all forcing agents (see Methods). It is noteworthy that the positive non-CO2 forcing was almost perfectly compensated by an equivalent negative forcing throughout the historical period up until 1980 (Fig. 1). However, during the last 20 years the net non-CO2 forcing has started to become increasingly positive and reaches a level of 0.26 W/m2. This is in agreement with the upward trend of net non-CO2 forcing shown by the FAIR and MAGICC simple climate models, which span a range of 0.1–0.45 W/m2 at present day (see Fig. 2.SM.2 of the IPCC SR1.514). This trend can, on the one hand, be attributed to the increasing impact of non-CO2 GHGs being emitted in association with the agricultural revolution (the so-called green revolution16), and on the other hand to the decreasing impact of cooling aerosols, whose emissions have been decreasing in response to associated health concerns17. This decrease of aerosol emissions due to health concerns is also represented in future projections, and is generally included in all RCP scenarios. The increasingly positive net non-CO2 forcing indicates that it is likely problematic to assume compensation of positive and negative non-CO2 climate forcing in future scenarios.
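The bookkeeping behind this source-based view can be sketched as follows: each non-CO2 forcing agent is split across the three source categories by a fixed fraction, and the per-category contributions are summed. In the sketch below, the fractions follow the partitioning described in the Methods where the text states them, while the agent-level forcing values and the tropospheric ozone split are placeholders for illustration only, so the printed totals will not reproduce Fig. 1 or Table 1.

```python
# Illustrative sketch of the source-based aggregation of non-CO2 forcing.
# Per-agent forcing values and the tropospheric ozone split are placeholders;
# the other fractions follow the partitioning described in the Methods.

agent_forcing_Wm2 = {      # present-day forcing per agent (placeholder values)
    "CH4": 0.48, "N2O": 0.17, "trop_O3": 0.40,
    "sulphate": -0.40, "BC_OC": -0.05,
}
# Fraction of each agent attributed to (FFC, LUC+AGRIC, other); rows sum to 1.
partition = {
    "CH4": (0.28, 0.49, 0.23),       # 28% FFC, 38% agriculture + 11% biomass, 23% other
    "N2O": (0.10, 0.70, 0.20),       # 10% FFC, 60% agriculture + 10% biomass, 20% other
    "trop_O3": (0.50, 0.30, 0.20),   # placeholder split
    "sulphate": (0.97, 0.03, 0.00),  # ~97% FFC, ~3% biomass burning
    "BC_OC": (0.91, 0.09, 0.00),     # 91% fossil/biofuel, 9% LUC
}

categories = ("FFC", "LUC+AGRIC", "other")
totals = dict.fromkeys(categories, 0.0)
for agent, forcing in agent_forcing_Wm2.items():
    for category, fraction in zip(categories, partition[agent]):
        totals[category] += fraction * forcing

print({k: round(v, 2) for k, v in totals.items()},
      "net non-CO2 forcing:", round(sum(totals.values()), 2), "W/m2")
```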
The partitioning of current non-CO2 radiative forcing shows that LUC and agricultural (LUC+AGRIC) activities currently produce a net positive non-CO2 climate forcing on the order of 0.34 W/m2, whereas fossil fuel combustion (FFC) generates a net negative non-CO2 climate forcing on the order of −0.4 W/m2 (Fig. 1). The positive forcing from LUC+AGRIC activities results from high emissions of non-CO2 GHGs such as CH4 and N2O, which are not compensated by equivalently high emissions of aerosols with negative climate forcing (Table 1): agricultural activities contribute more than a third of the total positive non-CO2 forcing (0.53 W/m2), but contribute less than a quarter of aerosol emissions that cause negative forcing (−0.23 W/m2). In contrast, FFC co-emits a large amount of aerosols causing a large negative forcing (−0.88 W/m2), which is more than twice as large as the positive forcing from co-emitted non-CO2 GHGs (0.36 W/m2). This non-compensatory behaviour in terms of non-CO2 forcing from both FFC and LUC+AGRIC holds important implications for future forcing pathways: if FFC is to be reduced in compliance with ambitious mitigation targets, this will eliminate a large part of the negative forcing from co-emitted aerosols. At the same time, the positive forcing from land-use and agricultural activities is expected to remain at current levels or even increase in the future, to comply with the projected increase in food demand in most scenarios14. In the absence of successful mitigation of agricultural emissions, these two effects could lead to a potentially large increase in future net non-CO2 climate forcing.

### Impact of non-CO2 forcing scenarios on the CO2-only carbon budget

The net non-CO2 forcing for RCP2.6/4.5/8.5 increased by 0.2/0.5/1.0 W/m2 by 2050 (i.e., the time 1.5 °C is reached) relative to the beginning of the scenarios in 2005 (Fig. 1a). Removing positive and negative FFC-related non-CO2 forcing from the default RCP scenarios (to regain consistency with simulated FFC CO2 emissions decreases, see Methods and Supplementary Sections 2–4) resulted in a larger increase in the net non-CO2 radiative forcing (by 0.4/0.7/1.1 W/m2 in the RCP2.6/4.5/8.5 minus FFC scenarios, respectively). This large range of non-CO2 forcing scenarios in turn produced a wide range of remaining carbon budgets, of between 230 GtCO2 and 720 GtCO2 in total emissions from 2006 until the time 1.5 °C is reached in 2050 (black crosses in Fig. 2). Finally, our scenario with assumed proportionality between future CO2 and non-CO2 forcing resulted in the largest remaining CO2-only budget, of 880 GtCO2. This range of remaining carbon budgets across scenarios is equivalent to about 17 years of current emissions (i.e., 10.7 PgC/yr, equivalent to 39.2 GtCO2/yr for 2007–1618), and also covers the range of 1.5 °C carbon budget estimates across recent studies9,11,12,13. Expressing the contributions by non-CO2 climate forcers in CO2-forcing equivalent emissions (see the Methods section for calculations) allows us to directly compare their contributions and helps to clearly attribute the reasons for the large discrepancies among the remaining carbon budgets (Fig. 2). For example, it is clear that the large increase of non-CO2 GHGs in S1 - RCP8.5 1.5 is the main reason for the low remaining carbon budget in this scenario. It is of course important to emphasise that this scenario, in addition to S2 and S3, includes non-CO2 forcing changes that are not consistent with the diagnosed CO2 emissions.
The strong increase in non-CO2 forcing in S1 - RCP8.5 1.5, for example, is caused by the business-as-usual approach to FFC and LUC+AGRIC activities, which does not match the decreasing FFC CO2 emissions that are required to meet our prescribed 1.5 °C temperature trajectory (Supplementary Section 1). In the ‘adjusted’ scenario S1b - RCP8.5 minus FFC (in which we subtracted the FFC-related non-CO2 forcing so as to align correctly with diagnosed FFC-related CO2 emissions), the contribution from non-CO2 GHGs is smaller than in S1, though the contribution from reduced aerosol emissions is substantially larger. These two non-CO2 contributions then compensate each other, which results in a similarly low remaining carbon budget. This indicates clearly that focussing only on fossil fuel emissions reductions, without also mitigating LUC-related CO2 emissions and non-CO2 GHGs from LUC and agriculture, would likely result in an impossibly small remaining fossil fuel carbon budget for the 1.5 °C target. Among the three ‘adjusted’ scenarios, S2b - RCP4.5 minus FFC and S3b - RCP2.6 minus FFC (which include LUC CO2 emissions and non-FFC non-CO2 emissions from RCP4.5 and RCP2.6, respectively, combined with our scenario of decreasing FFC CO2 and non-CO2 emissions) are clearly the more ‘realistic’ 1.5 °C scenarios, in that they include internally consistent CO2 and non-CO2 emissions resulting from ambitious FFC decreases, combined with reasonably ambitious mitigation of other emission sources. The remaining carbon budgets from these scenarios were 505 GtCO2 and 775 GtCO2 (from 2006 onwards), which corresponds to about 200 GtCO2 and 465 GtCO2, respectively, emitted from 2018 onwards. IPCC SR1.5 gave a best estimate of 580 GtCO2 for the same time period. The difference between SR1.5 and our estimates is again likely due to different non-CO2 contributions to future warming. The SR1.5 analysis was based on all available 1.5 °C scenarios from integrated assessment models, many of which included more ambitious non-FFC emissions mitigation than what is found in either RCP2.6 or RCP4.5. In addition, however, part of the reason for our smaller carbon budgets is that the aerosol forcing in our simulations decreases considerably more than in the simple climate models used to assess the non-CO2 contribution to warming in SR1.5 (see Fig. 4 in the supplementary material). This difference in the aerosol forcing response to decarbonisation scenarios warrants additional attention, as it clearly has the potential to strongly influence estimates of the remaining carbon budget. These results show that if future non-CO2 contributions are not clearly reported and accounted for in remaining carbon budget estimates, this leads to widely varying, essentially arbitrary carbon budget estimates, which almost entirely reflect the assumed non-CO2 scenarios. It is noteworthy, however, that while the contributions from non-CO2 GHGs, aerosols, LUC and fossil fuel emissions vary throughout our scenarios, they all agree on a total remaining CO2 + CO2-fe budget of 1170 ± 35 GtCO2-fe. This is in line with our expectations, and indicates that the remaining total climate forcing for a 1.5 °C target has to be the same in all scenarios.

### Effective transient climate response to CO2 and CO2-fe emissions

The metric of the effective transient climate response to cumulative emissions (TCREeff) is used to express the temperature change caused by all emissions as a function of cumulative CO2-only emissions.
While this term (TCREeff) was only introduced recently by Matthews et al.9, the concept was used in the 5th assessment report of the IPCC (e.g., Fig. SPM.1019) as well as in more recent publications, e.g.14,20, that plotted total temperature change from all-forcing model simulations as a function of cumulative CO2 emissions. Unlike the transient climate response to cumulative CO2 emissions (TCRE), which has been shown to be scenario-independent across a wide range of scenarios and emission quantities21,22,23,24, the TCREeff depends on the changing strength of non-CO2 forcings, and is therefore not scenario-independent. The TCREeff only remains constant in time for scenarios where CO2 and non-CO2 forcing are proportional. As a consequence, using the TCREeff to estimate remaining carbon budgets (as done by e.g., refs. 9,10) requires the (likely unjustified) assumption that the relative contribution of non-CO2 forcing to future warming remains constant. To illustrate the non-linearity of the TCREeff across our scenarios, we show cumulative CO2 emissions and the transient temperature response as diagnosed for varying non-CO2 forcing scenarios (Fig. 3a). Note that the linearity is conserved only in scenario S4, in which we assume proportionality between future CO2 and non-CO2 forcing. For other scenarios, and especially those with ambitious FFC mitigation, this proportionality is no longer valid, due to the increasing contribution from non-CO2 climate forcers. For these scenarios, using the current TCREeff to estimate the remaining carbon budget would result in a substantial overestimate of the budget. In contrast to the large variation of the CO2-only carbon budgets, there is good agreement in the 1.5 °C budgets when expressed as the sum of CO2 and CO2-fe emissions from all climate forcers, with a total budget of 1115 ± 50 GtCO2-fe across all scenarios (Fig. 3b). This budget includes FFC and LUC CO2 emissions, and in addition CO2-fe from LUC albedo changes, and non-CO2 forcing including aerosols and GHGs. This therefore represents an aggregated CO2-fe budget that includes the contribution from all anthropogenic climate forcers. If expressed as the transient climate response to cumulative CO2 forcing equivalent emissions (TCRFE, i.e., the slope of the lines in Fig. 3b), we find that the linearity and scenario-independence with respect to cumulative CO2-fe emissions is restored, with a value of TCRFE = 0.50 K/1000 GtCO2-fe. This metric again has a well-founded theoretical basis: by construction, the CO2-fe emissions give the same radiative forcing pathway and hence temperature response as the corresponding forcing agents from which they are computed25. In the case of the TCREeff, the temperature change from an all-forcing simulation including non-CO2 climate forcing is related to CO2 emissions only, not accounting for the potential temperature response from this additional forcing (Fig. 3a). In contrast, the TCRFE relates temperature change to cumulative emissions from all climate forcing expressed in CO2 and CO2-fe emissions (Fig. 3b). The same physical mechanisms as for the TCRE2,3,4,22,23 accordingly act to cause the linearity of the TCRFE. However, in the real world the non-CO2 GHGs and aerosols would not interact with the carbon cycle as they do, by construction, in our experiment; this interaction is partly responsible for the linearity of the TCRFE. The limits of the linearity of the TCRFE should accordingly be further investigated in future studies.
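To make the distinction concrete, the sketch below contrasts the two metrics for two hypothetical diagnosed scenarios; the numbers are placeholders and are not model output from this study.

```python
# Sketch contrasting TCRE_eff and TCRFE for hypothetical diagnosed output.
# TCRE_eff divides warming by cumulative CO2 emissions only; TCRFE divides the
# same warming by cumulative CO2 plus CO2-forcing-equivalent (CO2-fe) emissions
# from all other forcers. All numbers are illustrative placeholders.

def tcre_eff(delta_T_K, cum_co2_GtCO2):
    return 1000.0 * delta_T_K / cum_co2_GtCO2                      # K per 1000 GtCO2

def tcrfe(delta_T_K, cum_co2_GtCO2, cum_co2fe_GtCO2):
    return 1000.0 * delta_T_K / (cum_co2_GtCO2 + cum_co2fe_GtCO2)  # K per 1000 GtCO2-fe

scenarios = {  # two hypothetical 1.5 degC scenarios with different non-CO2 contributions
    "strong non-CO2 increase": dict(dT=1.5, co2=2300.0, co2fe=700.0),
    "weak non-CO2 increase":   dict(dT=1.5, co2=2800.0, co2fe=200.0),
}
for name, s in scenarios.items():
    print(f"{name}: TCRE_eff = {tcre_eff(s['dT'], s['co2']):.2f}, "
          f"TCRFE = {tcrfe(s['dT'], s['co2'], s['co2fe']):.2f}")
# TCRE_eff differs between the two scenarios, while TCRFE is identical whenever
# the total CO2 + CO2-fe amount consistent with the warming is the same.
```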
## Discussion

By partitioning the non-CO2 contributions into different sources of anthropogenic activities, we show that today’s LUC and agriculture non-CO2 forcing contributions have a net warming effect, whereas FFC-related non-CO2 forcing has a net cooling effect. This result holds some important implications. In ambitious mitigation scenarios in which fossil fuel combustion is to be strongly reduced, we would expect a strong decline in aerosol emissions, causing a shift towards more positive net non-CO2 climate forcing. Of course, this shift could also occur in the absence of decreasing FFC CO2 emissions, via the implementation or improvement of filter systems in response to health concerns17. Although historically the net non-CO2 forcing contribution was close to zero, this is not a likely pathway for future non-CO2 forcing in the context of ambitious mitigation action. All RCP scenarios show an increase in future non-CO2 forcing, but when subtracting the FFC non-CO2 forcing so as to align with faster decreases in FFC CO2 emissions, we obtain an even steeper increase of future non-CO2 forcing. This illustrates that the metric of the effective transient climate response to cumulative CO2 emissions (TCREeff) is unlikely to remain constant even on relatively short time frames, and especially not for scenarios with ambitious mitigation action. We recommend that this metric should not be applied to estimate the remaining CO2-only budget under ambitious mitigation unless treated as a variable quantity that changes as a function of changing non-CO2 emissions. Our results suggest that the relative contribution of non-CO2 forcing will likely increase in response to ambitious FFC mitigation actions, leading to a decrease in the remaining carbon budget for a 1.5 °C target. Consequently, the assumption of future proportionality of CO2 and non-CO2 forcing is only plausible if we are considerably more successful in mitigating non-FFC-related non-CO2 emissions (i.e., non-CO2 forcing agents from LUC and agriculture and other anthropogenic activities) compared to what is represented by the range of RCP scenarios. When disregarding scenario S1b, which is a less likely realisation of future non-CO2 forcing in line with a 1.5 °C temperature trajectory, our idealised example scenarios show that depending on the assumed non-CO2 forcing scenario, the size of the 1.5 °C CO2-only budget varies by 410 GtCO2. This range across budgets is larger than some estimates of the remaining budget itself, and is comparable in magnitude to the 67th-percentile 1.5 °C budget presented in the IPCC’s Special Report14. We find that, in line with Allen et al.25, adopting the metric of TCRFE (rather than, for example, TCREeff) would allow us to justifiably assume a linear temperature response to cumulative CO2-fe emissions, leading to CO2-fe budgets that are approximately scenario-independent. Using a more comprehensive carbon cycle model to diagnose CO2-fe emissions associated with individual non-CO2 climate forcers, we show that the framework introduced by Allen et al.25 holds, and that we need to account explicitly for the non-CO2 climate forcing to obtain an accurate estimate of the carbon budget for peak warming or climate stabilisation.

## Methods

### Model description

For our study we used version 2.9 of the University of Victoria Earth System Climate Model (UVic ESCM), a climate model of intermediate complexity26.
It includes schemes for ocean physics based on the Modular Ocean Model Version 2 (MOM2)27, ocean biogeochemistry28, and a terrestrial component including soil and vegetation dynamics represented by five plant functional types29. The atmosphere is represented by a two-dimensional atmospheric energy-moisture balance model, including a thermodynamic sea ice model30,31. All model components have a common horizontal resolution of 3.6° longitude and 1.8° latitude, and the oceanic component has a vertical resolution of 19 levels, with vertical thickness varying between 50 m near the surface and 500 m in the deep ocean. The UVic ESCM is a well-established Earth system model whose carbon cycle processes have been thoroughly evaluated15.

### Diagnosed CO2 and CO2 forcing equivalent (CO2-fe) emissions

For our simulations, we have prescribed a 1.5 °C temperature change scenario as the input to the UVic ESCM, and used the model to estimate the fossil fuel CO2 emissions trajectory that is consistent with this temperature trajectory, as in Zickfeld et al.3, Matthews et al.32 and Mengis et al.13 (see Supplementary Section 1 for the trajectory). When running the model in this mode, atmospheric CO2 concentrations are adjusted dynamically by the model so as to achieve the prescribed temperature change, and the consistent fossil fuel CO2 emissions are diagnosed as a function of simulated atmospheric CO2 and land/ocean carbon sinks. Our prescribed temperature scenario followed the model-simulated temperature response to historical forcing up to the year 2015, and then stabilised at 1.5 °C above the 1850–1879 temperature at about the year 2055 (Supplementary Fig. S1). To estimate the cumulative CO2 emissions that are equivalent to a given non-CO2 forcing, the UVic ESCM was forced to follow the same temperature trajectory, while removing individual non-CO2 forcings from the model input. To follow the temperature trajectory, the model therefore needed to adjust the diagnosed CO2 emissions to account for the missing input forcing. The difference between the all-forced and the reduced-forced diagnosed cumulative CO2 emissions represents the forcing equivalent CO2 emissions (CO2-fe) of the respective non-CO2 forcing. Given that we prescribe spatially varying land-use changes (LUC), land-use CO2 emissions are generated internally by the model. Consequently, the compatible CO2 emissions that result from our prescribed temperature scenarios are an estimate of fossil fuel CO2 emissions only. Running the model with fixed pre-industrial land-use gives us the CO2-fe emissions from both LUC-related CO2 emissions and the albedo effect of LUC. To estimate the LUC-only CO2 emissions, we carried out another simulation with constant pre-industrial land-use conditions, but prescribing the CO2 concentration increase from the changing land-use simulation, rather than prescribing the temperature trajectory. The difference between the total land carbon content of this prescribed-CO2 no-LUC simulation and the changing-LUC simulation represents the LUC emissions33. Finally, the difference between this estimate and the prescribed-temperature no-LUC simulation gives us the CO2-fe emissions from albedo changes due to LUC.
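As a minimal sketch of this diagnosis step, assume annual diagnosed CO2 emissions from two runs that follow the same prescribed temperature trajectory, one with all forcings and one with a single non-CO2 forcing agent removed; the arrays below are hypothetical placeholder values, not UVic ESCM output.

```python
# Sketch of the CO2-fe diagnosis: the difference between the cumulative CO2
# emissions diagnosed in the all-forcing run and in the run with one non-CO2
# forcing agent removed, both following the same prescribed temperature
# trajectory. Emission values (GtCO2/yr) are hypothetical placeholders.
import numpy as np

emis_all_forcings = np.array([38.0, 36.0, 33.0, 29.0, 24.0])   # all forcings included
emis_agent_removed = np.array([37.0, 34.5, 30.5, 26.0, 20.5])  # one agent removed

# With the agent removed, the model must diagnose different CO2 emissions to
# stay on the same temperature path; the cumulative difference is taken as the
# forcing-equivalent (CO2-fe) emission of that agent, following the definition above.
co2_fe = np.cumsum(emis_all_forcings) - np.cumsum(emis_agent_removed)
print(f"CO2-fe of the removed agent after {co2_fe.size} years: {co2_fe[-1]:.1f} GtCO2-fe")
```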
### Partitioning of non-CO2 forcing

Based on the information from the fifth assessment report (AR5) of the Intergovernmental Panel on Climate Change (IPCC), we partitioned non-CO2 forcing agents as used in the Representative Concentration Pathways (RCPs) into three categories given the source of the respective forcing agent: (1) fossil fuel combustion, (2) agriculture and biomass burning including land-use change (LUC), and (3) other anthropogenic sources, which include sources of all halocarbon emissions, as well as other industrial activities such as waste disposal that are distinct from fossil fuel combustion and agriculture/LUC (Table 1). 28% of anthropogenic methane sources are fossil fuel based, 38% are attributed to agriculture, 11% to biomass burning (which is partly natural but mainly anthropogenic), and the remaining 23% to other anthropogenic activities such as waste disposal (Fig. 6.2 of the IPCC AR5 WGI34). This partitioning is in good agreement with the findings of the more recently published Global Methane Budget35. The same partitioning is assumed for the radiative forcing of water vapour from methane oxidation. For N2O, we attributed 10% of the global anthropogenic N2O sources to fossil fuel combustion, 10% to biomass and biofuel burning, 60% to agriculture, and the remaining 20% to other anthropogenic sources, such as N2O emissions from atmospheric depositions on ocean and land, or human excreta (Fig. 6.4c of the IPCC AR5 WGI34). As tropospheric ozone is a by-product of the oxidation of carbon monoxide (CO), CH4, and hydrocarbons (part of the F-gases) in the presence of nitrogen oxides (NOx), we calculated its partitioning as a weighted mean from the respective contributions of these gases. For CO, we used a partitioning of 48% from fossil fuel combustion and 52% from biomass burning (IPCC TAR WGI Chapter 4.2.3.1). We then weighted the partitioning of the respective forcing agents by their contribution to tropospheric ozone forcing, i.e., 0.235 W/m2 from CH4, −0.14 W/m2 from F-gases, 0.075 W/m2 from CO, 0.05 W/m2 from OCI and 0.15 W/m2 from NOXI, from IPCC AR5 WGI Fig. 8.1734. We attributed the forcing from fluorinated gases, ozone-depleting substances and the forcing from stratospheric ozone depletion to anthropogenic activities other than fossil fuel combustion, agriculture, and biomass burning. It is, however, noteworthy that with the anticipated future decline of the predominant ozone-depleting substances, other gases, in particular N2O, will become important for stratospheric ozone depletion36. As black carbon and organic carbon (BCI and OCI, respectively) are byproducts of fossil and biofuel combustion, their partitioning is based on the carbon emissions partitioning, i.e., 9% of BCI and OCI is allocated to LUC and the remaining 91% is allocated to fossil fuel. The main source of anthropogenic sulphate aerosol is SO2 emissions from fossil fuel burning (about 97%), with a small contribution from biomass burning (about 3%) (IPCC AR4 WGI, Chapter 2.4.4.137). The anthropogenic nitrate aerosol (NOXI) emissions can be partitioned into 74% from fossil fuel combustion and 26% from agriculture and biomass burning (Fig. 6.4b of the IPCC AR5 WGI34). Finally, biomass-related aerosols are 100% attributed to agriculture and LUC emissions. Anthropogenic sources of dust, including road dust and mineral dust due to human land-use change, remain poorly quantified.
Recent satellite observations suggest the fraction of mineral dust due to LUC could be 20–25% of the total (IPCC AR5 WGI Chapter 7.3.2.134). We attribute 100% of the anthropogenic mineral dust forcing to LUC, following Ginoux et al.38. The radiative forcing of the cloud-albedo effect is a theoretical construct that is not easy to separate from other aerosol-cloud interactions. We assume that the partitioning of the direct aerosol forcing is representative of the partitioning of the indirect effect. Direct aerosol forcing from fossil fuel combustion in 2005 amounts to −0.3253 W/m2 and from LUC to −0.0931 W/m2. Accordingly, for the indirect effect we allocate 78% to fossil fuel combustion and the remaining 22% to LUC and agriculture.

### Non-CO2 forcing scenarios

Although we follow the same threshold-avoidance temperature trajectory for all scenarios (Supplementary Fig. 1), we vary the non-CO2 forcing following three Representative Concentration Pathway (RCP) scenarios, three alterations of the RCPs, and one commonly used assumption for a non-CO2 scenario (Fig. 1). The RCP scenarios in this context purely represent different trajectories for the non-CO2 climate forcers. The business-as-usual scenario (S1 - RCP8.5 1.5) assumes continuing high land-use change (LUC) and fossil fuel combustion (FFC) activity levels, and as a result has the largest increase in net non-CO2 forcing. The middle-of-the-road scenario (S2 - RCP4.5 1.5) assumes a reduction of agricultural land area, resulting in lower LUC emissions, and at the same time assumes that measures are taken to reduce the atmospheric aerosol burden. Lastly, the ambitious mitigation scenario (S3 - RCP2.6 1.5) assumes the implementation of bioenergy carbon capture and storage technology, while the negative forcing from aerosols is also reduced. Of these three, the latter two are more likely to represent 1.5 °C non-CO2 forcing scenarios; however, in this first step, none of these non-CO2 scenarios are consistent with the diagnosed CO2 emissions trajectories for a 1.5 °C temperature target. Therefore, in a second step, we take the diagnosed FFC CO2 emissions and scale the non-CO2 climate forcers according to the respective FFC CO2 trajectory, obtaining more consistent non-CO2 scenarios for a 1.5 °C temperature target (S1b, S2b, and S3b, respectively). The adjusted scenarios now reflect the decrease of negative aerosol forcing that would follow stringent fossil fuel emissions mitigation measures or attempts to reduce the atmospheric aerosol burden due to health concerns, while providing different scenarios for LUC and agricultural practices. This gives an idea of the impact of different LUC and agricultural practices on non-CO2 climate forcers, and non-CO2 GHGs in particular (because aerosols are mostly linked to FFC). Comparing scenarios S1b and S2b, for example, gives insights into the effect of non-CO2 GHG mitigation through reforestation (S2b) in contrast to continued deforestation (S1b). Lastly, we wanted to explore the impact of assuming proportionality between CO2-induced forcing and net non-CO2 forcing. This is an assumption, made by several recent studies9,10, of a constant future ratio between the net non-CO2 forcing and the CO2-induced forcing. For this implementation, we used the observed ratio of the net non-CO2 forcing to the total CO2 forcing for the 20-year period from 1995 to 2015, which gives a value of 0.26.
Realising that the total CO2-fe budgets are the same for all the forcing scenarios, we inferred the CO2 and non-CO2 forcing equivalent emissions for this scenario using the following equation:

$$E_{\mathrm{total}} = E_{\text{non-CO}_2} + E_{\mathrm{CO_2}} = (1 + 0.26)\, E_{\mathrm{CO_2}}.$$

For more details on the assumptions behind the three RCP scenarios, the scaling of the Sxb scenarios with diagnosed FFC, and a comparison of the scenarios with the Shared Socioeconomic Pathways (SSPs) framework, see Supplementary Sections 2–4.
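A minimal sketch of this proportionality assumption (scenario S4) is given below; the ratio r = 0.26 is the value quoted above, while the cumulative CO2 emission numbers are placeholders (880 GtCO2 is the S4 CO2-only budget reported in the Results).

```python
# Sketch of the proportionality assumption used for scenario S4: the non-CO2
# forcing-equivalent emissions are taken to be a fixed fraction r of the CO2
# emissions, so E_total = (1 + r) * E_CO2. r = 0.26 is the ratio quoted above;
# the CO2 emission values are placeholders.
R_NON_CO2 = 0.26

def total_co2fe_emissions(e_co2_GtCO2, ratio=R_NON_CO2):
    """Total CO2 + non-CO2 forcing-equivalent emissions under proportionality."""
    return (1.0 + ratio) * e_co2_GtCO2

for e_co2 in (500.0, 880.0):
    print(f"E_CO2 = {e_co2:.0f} GtCO2 -> E_total = {total_co2fe_emissions(e_co2):.0f} GtCO2-fe")
# With the 880 GtCO2 CO2-only budget of scenario S4, this yields ~1109 GtCO2-fe,
# close to the scenario-independent CO2-fe budget reported above.
```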
Matthew Petroff (mpetroff.net)

## Update on Figure Caption Color Indicators (31 October 2020)

Last year, I published a blog post on figure caption color indicators. The positive feedback I received on it from a number of individuals prompted me to revisit the subject. At the time, I did not have a good way of locating published examples of such caption indicators and was only able to locate a few published examples with shape indicators but none with color indicators. When thinking about revisiting the subject, I had the epiphany that although searching for such indicators in the published literature is next to impossible, searching in the LaTeX source markup for papers is not. As arXiv provides bulk access to the TeX source markup for its pre-prints, this provided a large corpus of manuscripts to search through. After finding examples in pre-prints, I was then able to see if the indicators survived the publication process and was thereby able to locate well over one hundred examples of color line or shape indicators in the figure captions of published academic papers. I broke the process into four steps: acquiring the data, extracting LaTeX commands from caption environments, finding potential figure caption candidates, and verifying these candidates. As the arXiv source archive is well over 1 TB in size, it is provided in an AWS S3 bucket configured such that the requester pays for bandwidth, which would result in a bandwidth bill of >$100 if downloaded directly. As I was only interested in the TeX source and not the figures, which account for most of the total file size, and since AWS does not charge to transfer between S3 buckets and EC2 instances in the same region, I first ran a script on an EC2 instance to download from arXiv’s S3 bucket and extract and repackage just the TeX source files. This allowed me to greatly reduce the amount of data transfer required and allowed me to download the full TeX source file corpus for <$5. Next, I used the TexSoup Python package to process the TeX files and produce a list of LaTeX commands used in the caption environment. I then used a final script to search for papers that used command names that referenced colors or shapes to compile a list of likely paper candidates and produced HTML files for each year containing a link to the PDF for each candidate paper as well as the full TeX source for the identified caption, with the matching commands highlighted. Finally, I manually verified the papers using the HTML files that were produced. Except for trivial false positives, which could be identified by looking at the included caption source, I manually looked at the PDF for each candidate paper, verified that it included a visual caption indicator, and classified the caption indicator if it had one. For papers that included indicators, I then attempted to locate the published version of record of the paper and did the same for it. Through this process, my scripts located ~5100 paper candidates from the beginning of arXiv in 1992 through the end of June 2020. I manually verified these candidates for papers submitted prior to the end of 2016; these accounted for ~2000 candidates, of which I verified ~1100 papers to have some sort of visual caption indicator.
For ~700 of these, I was able to verify the presence of some form of visual caption indicator in the published version of record. Of these, ~60% included a black shape or line indicator, ~25% included a color shape or line indicator, and the remainder included colored text. The fraction of papers with color shape or line indicators was higher in the pre-prints, since it was not uncommon for the published version to include a black indicator when the pre-print included a colored indicator. I stopped at the end of 2016 since the verification process was quite time consuming, and I could only look at so many papers before giving up. These findings show that the idea of using figure caption color indicators is by no means new. However, it’s still quite rare in relative terms, since at most a couple thousand out of arXiv’s ~1.7 million pre-prints include such indicators. Most of the examples I found used a colored shape or line in parentheses, or both in cases where both a line and marker were used. My proposal to use a colored underline does still appear to have been a novel concept, but it proved quite complicated to implement, so using shapes or lines in parentheses is much more practical, since it is simpler and is evidently compatible with many publishers’ workflows. Furthermore, the existing examples can be used as evidence when complaining about paper proofs, after the typesetter predictably removes the indicators, to show that the indicators are possible and that they can and should be included in the final published version of the paper. One color indicator that I recommend against using is colored text, since it can be difficult to read and often violates WCAG contrast guidelines. Its use seems particularly common in the computer vision literature and, to a lesser degree, the machine learning literature. It is often used to highlight table entries, a purpose much better served by using italic, bold, or bold–italic text. I have made the scripts used for this analysis, the paper candidates, and the final verified results available. The final verified results are also available separately for easy viewing. Note that the verified results are incomplete and may contain errors.

## Pre-calculated Line Breaks for HTML / CSS (25 May 2020)

Although slowly improving, typography on web pages is of considerably lower quality than high-quality print/PDF typography, such as that produced by LaTeX or Adobe InDesign. In particular, line breaks and hyphenation need considerable improvement. While CSS originally did not specify what sort of line breaking algorithm should be used, browsers all converged on greedy line breaking, which produces poor-quality typography but is fast, simple, and stable. CSS Text Module Level 4 standardizes the current behavior as the default with a text-wrap property while introducing a pretty option, which instructs the browser to use a higher quality line breaking algorithm. However, as of the time of writing, no browsers supported this property. I recently came across a CSS library for emulating LaTeX’s default appearance.1 However, it doesn’t emulate the Knuth–Plass line breaking algorithm, which is one of the things that makes LaTeX look good.
This got me wondering whether or not it’s possible to emulate this with plain HTML and CSS. A JavaScript library already exists to emulate this, but it adds extra complexity and is a bit slow. It turns out that it is possible to pre-calculate line breaks and hyphenation for specific column widths in a manner that can be encoded in HTML and CSS, as long as web fonts are used to standardize the text appearance across various browsers. The key is to wrap all the potential line breaks (inserted via ::after pseudo-elements) and hyphens in <span> elements that are hidden by default with display: none;. Media queries are then used to selectively show the line breaks specific to a given column width. Since every line has an explicit line break, justification needs to be enabled using text-align-last: justify;, and word-spacing: -10px; is used to avoid additional automatic line breaks due to slight formatting differences between browsers. However, this presents a problem for the actual last line of each paragraph, since it is now also justified instead of left aligned. This is solved by wrapping each possible last line in a <span> element. Using media queries, the <span> element corresponding to the given column width is set to use display: flex;, which makes the content be left-aligned and take up the minimum space required, thereby undoing the justification; word-spacing: 0; is also set to undo the previous change to it and fix the word spacing. Unfortunately, the nested <span> elements are problematic, because there are no spaces between them; this is fixed by including a space in the HTML markup at the beginning of the <span> and setting white-space: pre; to force the space to appear. I’ve prepared a demo page demonstrating this technique. It was constructed by calculating line breaks in Firefox 76 using the tex-linebreak bookmarklet and manually inserting the markup corresponding to the line breaks; some fixes were manually made because the library does not properly support em dashes. Line breaks were calculated for column widths between 250 px and 500 px at 50 px increments. The Knuth–Plass line breaks lead to a considerable improvement in the text appearance, particularly for narrower column widths. In addition to the improved line breaks, I also implemented protrusion of hyphens, periods, and commas into the right margin, a microtypography technique, which further improves the appearance. To (hopefully) avoid issues with screen readers, aria-hidden="true" is set on the added markup; user-select: none; is also set, to avoid issues with text copying. While this technique works fine in Firefox and Chrome, it does not work in Safari, since Safari does not support text-align-last as of Safari 13.2 Despite it not working, the corresponding WebKit bug is marked as “resolved fixed”; it seems that support was actually added in 2014, but the support is behind the CSS3_TEXT compile-time flag, which is disabled by default. Thus, I devised an alternative method that used invisible 100% width elements to force line breaks without using explicit line breaks. This again worked in Firefox and Chrome, although it caused minor issues with text selection, but it again had significant issues in Safari. It appears that Safari does not properly handle justified text with negative word spacing; relaxing the word spacing, however, causes extra line breaks due to formatting differences, which breaks the technique. 
At this point, I gave up on supporting Safari and just set it to use the browser default line breaking by placing the technique’s CSS behind an @supports query for text-align-last: justify. Automated creation of the markup would be necessary to make this technique more generally useful, but the demo page serves as a proof of concept. Ideally, browsers would implement an improved line breaking algorithm, which would make this technique obsolete.

1. Also see corresponding Hacker News discussion.
2. Even Internet Explorer 6 supports this.

## A Case Study in Product Label Regressions (29 March 2020)

Sometime last year, the Stop & Shop and Giant (of Landover) grocery store chains began introducing redesigned packaging for their store brand products. The two chains share a parent company and share branding, so the labels only use the shared logo without a brand name. The old label designs heavily featured a white background, which made them easy to locate in the store.1 The new brand identity is less distinct, but whether it’s better or worse is a matter of taste. However, there are specific design decisions that were made on some of the labels that have fundamental issues. In particular, I will focus on the labels for canned vegetables. As one would expect, both the old and new label designs feature the name of the vegetable along with a picture of a “serving suggestion.” Since many vegetables are similar in color, it is often easier to find one’s desired vegetable on the shelf by looking for the name, especially when a particular vegetable comes in multiple variants, such as green beans (whole, cut, diagonally cut, and French style). The old design featured a plain sans-serif font in a dark color on a solid white background, resulting in good contrast and readability. The new design, however, is a clear regression; it trades the consistent, easily readable font for a hodgepodge of different heavily-stylized display fonts on a busier background with lower contrast, which results in much worse readability. This loss of readability makes it take longer to locate a particular product on the shelf. Many of the vegetables come in three variants: regular (“full salt”), low sodium, and no salt added. In the old designs, these were marked using text in a brightly-colored oval. Blue was used for no salt added, and red was used for low sodium; the regular variant did not include an oval. This design allowed one to quickly differentiate between the variants on the shelf. With the new design, these colored ovals were eliminated. The low sodium variant trades the black text on the regular variant for bright blue text and a distinctive blue bar above the text with a clearly readable “low sodium” label. This is an improvement over the old design as it makes the labeling more distinctive and easier to differentiate. Unfortunately, the same is not true for the no salt added variant, which, for some inexplicable reason, is labeled the same as the regular variant except for a small, blandly-colored circular badge in the corner. Instead, it should have been labeled with a distinctive color and a bar with a clearly readable “no salt added” label, similar to the low sodium variant, except using a different color.
The font size was increased for the ingredients list, which is one of the only improvements in the new designs.

1. It’s the closest I’ve seen to what’s suggested by xkcd: Brand Identity.

## Color Cycle Survey Update (31 January 2020)

Since my last update on the Color Cycle Survey, there have been no drastic changes, but responses have continued to trickle in. There are now ~13.7k total responses, with ~6k responses each for the six-color and eight-color components. This long-delayed (and somewhat brief) post serves as an update to my previously published six color analysis, while also extending it to eight colors. I have only made minor changes to the previously detailed analysis procedures (see previous set ranking and order ranking posts for details), but there are now ~50% more responses, which has helped with training stability and has reduced uncertainty between different models in the network ensemble. The figure below shows the fifteen lowest ranked six-color color sets on the left and the fifteen highest ranked six-color color sets on the right. The accuracy for both the training and test sets remained at 58%. The plot below shows the average six-color color set scores as a function of rank, with a 1-sigma error band. Using the highest-ranked six-color set, the figure below shows the fifteen lowest ranked orderings on the left and the fifteen highest ranked orderings on the right. Accuracy was similar to before, with an accuracy of 55% on the training set and an accuracy of 54% on the test set. The plot below shows the average ordering scores as a function of rank, with a 1-sigma error band. Next, the same technique was extended to the eight-color color sets. The figure below shows the fifteen lowest ranked eight-color color sets on the left and the fifteen highest ranked eight-color color sets on the right. The accuracy was 57% for both the training and test sets. The plot below shows the average eight-color color set scores as a function of rank, with a 1-sigma error band. Using the highest-ranked eight-color set, the figure below shows the fifteen lowest ranked orderings on the left and the fifteen highest ranked orderings on the right. The accuracy was 55% on the training set and 53% on the test set. The plot below shows the average ordering scores as a function of rank, with a 1-sigma error band. This is an incremental improvement over the previous results, as it just used extra data, while keeping the analysis procedure the same. The fact that accuracy was similar when the analysis was extended to eight-color color sets and color cycles is promising. I’d like to devise a method that combines both the six-color and eight-color color sets in the training process to maximize the use of the response data; I have a few ideas on how to do this but nothing concrete yet. I’ve also looked more into the idea of devising a color namability criterion by reanalyzing the xkcd Color Survey results. While my reanalysis has led to some interesting tidbits about color names, it didn’t really pan out as far as becoming a useful criterion for ranking the color sets at hand.
I’ve been trying to clarify the licensing on the raw xkcd Color Survey responses database dump before writing up my findings, but so far, I have not received a reply from Randall Munroe (which is understandable). As always, more responses would be helpful. I had not originally intended for the survey to go on as long as it has, but as I’ve been busy with my normal (cosmology-related) research and as I’ve not received as many responses as I had hoped for, the survey remains open to responses. I plan on leaving it open until the analysis is close to final (at least a few more months), after which I’ll close the survey to responses and execute the final analysis runs.

## Figure Caption Color Indicators (23 November 2019)

Earlier this year, I became aware of a feature in GitHub-flavored Markdown that displays a colored square inline when HTML color codes are surrounded by backticks, e.g., #1f77b4. Although I only recently became aware of this feature, it dates back to at least 2017 and is similar to a feature that Slack has had since at least 2014. When I saw this inline color presentation, I immediately thought of its applicability to figure captions, particularly in academic papers; as a colorblind individual, matching colors referenced in figure captions to features in the figures themselves can be challenging at times due to difficulties with naming colors.
Thus, I added similar annotations to figure captions in my recently submitted paper, Two-year Cosmology Large Angular Scale Surveyor (CLASS) Observations: A First Detection of Atmospheric Circular Polarization at Q Band:

Fig. 2. Frequency dependence of polarized atmospheric signal at zenith for the CLASS observing site, both for circular polarization ($|V|$, shown in blue) and linear polarization ($\sqrt{Q^2+U^2}$, shown in orange). The light gray bands indicate CLASS observing frequencies, with the lowest frequency band corresponding to the Q-band telescope.

Fig. 5. Example binned azimuth profiles are shown…angle cut. The profile in blue is from a zenith angle of 43.9° and a boresight rotation angle of −45°, the profile in orange is from a zenith angle of 46.7° and a boresight rotation angle of 0°, and the profile in red is from a zenith angle of 52.8° and a boresight rotation angle of +45°.

The first caption refers to a line plot, while the second caption refers to a scatter plot with best fit lines. These examples, as well as underlining examples elsewhere in this post, display best in a browser that supports changing the underline thickness via the text-decoration-thickness CSS property. At the time of writing, this includes Firefox 70+ and Safari 12.2+ but does not include any version of Chrome; however, browser underlining support is still inferior to the underline rendered by $\LaTeX$, so the reader is encouraged to view the figures in the paper. While the primary purpose of these annotations is to improve accessibility for individuals with color vision deficiencies, they are also helpful when a paper is printed or displayed in grayscale. For example, it is much easier to distinguish blue and orange in grayscale with the annotations than without. As this was an experiment, I included two different methods for visualizing the color: a thick colored underline under the color name and a colored square following it. Since the colors are referring to solid lines in the plot, the underlines make sense because they match the plot features, e.g., a solid blue line. Likewise, a dotted underline might make sense for a dotted blue line, although it is more difficult to discern the color of the dotted line than the solid line. I am undecided as to whether or not including the colored square is a good idea. While it adds an additional visual cue, the main reason I included it was to increase the chances of at least one of the indicators making it past the editors and into the final published paper; as the paper is currently under review, it remains to be seen if either indicator survives the publication process. For scatter plots, however, colored shapes make perfect sense. A scatter plot with red squares, blue diamonds, and orange circles should include such shapes in the figure caption when the caption refers to the corresponding points. I am undecided as to whether or not the color names in such cases should be underlined, just as I am undecided as to whether or not line plots should include a colored square. Although I have not seen any color indicators, for either lines or scatter points, in the scientific literature, the use of shapes in figure captions is not a new practice. I have found examples dating from the mid-1950s through the early 2000s. The closest example I have found is in a 1997 paper1 that refers to a symbol with both its name and a graphical representation:

Fig. 5. Couette-Taylor experiments. Logarithmic…number.
The black triangles (▲) are the results obtained with smooth cylinders, and the open ones (△) correspond to those obtained with the ribbed ones. The crosses (×) show for comparison…and Swinney [8].

Other examples include a 1967 paper2 (and a 1968 paper3) that uses graphical representations inline instead of symbol names:

Fig. 13. Additional…symmetries. Points marked with ■ are the excess…nuclei, points marked with □ the excess…N = Z. The points ▼ show the differences…larger Z-values. The points △ are the differences…for even-Z–odd-N nuclei.

and a 1955 paper4 that puts the figure legend inline in the figure caption:

Fig. 1. (p,n) cross sections in millibarns. ○—measured total…isotope; □—partial…isotope; ×—observed…estimate. Curves…of r0. The dotted bands indicate…energy.

There are other examples, e.g., this 1960 paper,5 that put the legend on separate lines at the end of the caption, but doing so isn’t really the same idea. There are also papers that treated line styles in the same manner as scatter plot symbols, such as this 1962 paper:6

Fig. 1. Counting rate…in pulses per cm2 sec. Maximum…is indicated by broken lines (– – –). The zone…has been shaded.

These examples should not be considered by any means exhaustive, since searching for this sort of thing is extremely difficult.7 In particular, while I don’t know of any prior publications that include color indicators, this does not mean that they do not exist. If anyone reading this is aware of any such examples, or of other interesting figure caption indicators, please let me know. Adoption of visual color indicators such as the ones presented here would be a significant accessibility improvement, but it would require buy-in from both publishers and authors. The chances of success are unclear but would certainly be improved with advocacy.

### Implementation

The $\LaTeX$ color annotation command was defined as

% Black square
\usepackage{amsmath}
% Define color
\usepackage{xcolor}
\definecolor{tab:blue}{RGB}{31, 119, 180}
% Color underlines with breaks for descenders, based on:
% https://tex.stackexchange.com/a/75406
% https://tex.stackexchange.com/a/24771
% https://tex.stackexchange.com/a/321235
\usepackage{soul}
\usepackage[outline]{contour}
\newcommand \colorindicator[2]{%
  \begingroup%
  \setul{0.25ex}{0.4ex}%
  \contourlength{0.2ex}%
  \setulcolor{#1}%
  \ul{{\phantom{#2}}}\llap{\contour{white}{#2}} \textcolor{#1}{\tiny{$\blacksquare$}}%
  \endgroup%
}

and used with \colorindicator{tab:blue}{blue}. For HTML, this CSS

.color-underline {
  text-decoration-line: underline;
  text-decoration-style: solid;
  text-decoration-thickness: 0.2em;
  text-decoration-skip-ink: auto;
}
.blue { text-decoration-color: #1f77b4; }
.blue-square::after {
  content: "\202f\25a0";
  position: relative;
  display: inline-block;
  color: #1f77b4;
}

was used with <span class="color-underline blue blue-square">blue</span> to produce blue. A production implementation would probably involve a symbol web font to improve and normalize the symbol appearance and possibly a better way to draw underlines.

Update (2020-10-31): see update on search for existing examples

1. Cadot, O., Y. Couder, A. Daerr, S. Douady, and A. Tsinober. “Energy injection in closed turbulent flows: Stirring through boundary layers versus inertial stirring.” Physical Review E 56, no. 1 (1997): 427. doi:10.1103/PhysRevE.56.427
2. Haque, Khorshed Banu, and J. G. Valatin. “An investigation of the separation energies of lighter nuclei.” Nuclear Physics A 95, no. 1 (1967): 97–114. doi:10.1016/0375-9474(67)90154-6
3. Aydin, C. “The spectral variations of CU Virginis (HD 124224).” Memorie della Societa Astronomica Italiana 39 (1968): 721. bibcode:1968MmSAI..39..721A
4. Blosser, H. G., and T. H. Handley. “Survey of (p, n) reactions at 12 MeV.” Physical Review 100, no. 5 (1955): 1340. doi:10.1103/PhysRev.100.1340
5. Evans, D. S., G. V. Raynor, and R. T. Weiner. “The lattice spacings of thorium-lanthanum alloys.” Journal of Nuclear Materials 2, no. 2 (1960): 121–128. doi:10.1016/0022-3115(60)90039-8
6. Vernov, S. N., E. V. Gorchakov, Yu I. Logachev, V. E. Nesterov, N. F. Pisarenko, I. A. Savenko, A. E. Chudakov, and P. I. Shavrin. “Investigations of radiation during flights of satellites, space vehicles and rockets.” Journal of the Physical Society of Japan Supplement 17 (1962): 162. bibcode:1962JPSJS..17B.162V
7. I found most of the above examples by performing full-text searches in NASA ADS for terms such as “black diamond” or “filled square” and looking through hundreds of results to find the few instances that included both the search terms and the symbols.
# Recurrent Neural Networks for Drawing Classification Quick, Draw! is a game where a player is challenged to draw a number of objects and see if a computer can recognize the drawing. The recognition in Quick, Draw! is performed by a classifier that takes the user input, given as a sequence of strokes of points in x and y, and recognizes the object category that the user tried to draw. In this tutorial we'll show how to build an RNN-based recognizer for this problem. The model will use a combination of convolutional layers, LSTM layers, and a softmax output layer to classify the drawings: The figure above shows the structure of the model that we will build in this tutorial. The input is a drawing that is encoded as a sequence of strokes of points in x, y, and n, where n indicates whether a the point is the first point in a new stroke. Then, a series of 1-dimensional convolutions is applied. Then LSTM layers are applied and the sum of the outputs of all LSTM steps is fed into a softmax layer to make a classification decision among the classes of drawings that we know. This tutorial uses the data from actual Quick, Draw! games that is publicly available. This dataset contains of 50M drawings in 345 categories. ## Run the tutorial code To try the code for this tutorial: 1. Install TensorFlow if you haven't already. 3. Download the data in TFRecord format from here and unzip it. More details about how to obtain the original Quick, Draw! data and how to convert that to TFRecord files is available below. 4. Execute the tutorial code with the following command to train the RNN-based model described in this tutorial. Make sure to adjust the paths to point to the unzipped data from the download in step 3. python train_model.py \ --training_data=rnn_tutorial_data/training.tfrecord-?????-of-????? \ --eval_data=rnn_tutorial_data/eval.tfrecord-?????-of-????? \ --classes_file=rnn_tutorial_data/training.tfrecord.classes ## Tutorial details We make the data that we use in this tutorial available as TFRecord files containing TFExamples. You can download the data from here: Alternatively you can download the original data in ndjson format from the Google cloud and convert it to the TFRecord files containing TFExamples yourself as described in the next section. ### Optional: Download the full Quick Draw Data The full Quick, Draw! dataset is available on Google Cloud Storage as ndjson files separated by category. You can browse the list of files in Cloud Console. Then use the following command to check that your gsutil installation works and that you can access the data bucket: gsutil ls -r "gs://quickdraw_dataset/full/simplified/*" which will output a long list of files like the following: gs://quickdraw_dataset/full/simplified/The Eiffel Tower.ndjson gs://quickdraw_dataset/full/simplified/The Great Wall of China.ndjson gs://quickdraw_dataset/full/simplified/The Mona Lisa.ndjson gs://quickdraw_dataset/full/simplified/aircraft carrier.ndjson ... Then create a folder and download the dataset there. mkdir rnn_tutorial_data cd rnn_tutorial_data gsutil -m cp "gs://quickdraw_dataset/full/simplified/*" . This download will take a while and download a bit more than 23GB of data. ### Optional: Converting the data To convert the ndjson files to TFRecord files containing tf.train.Example protos run the following command. 
python create_dataset.py --ndjson_path rnn_tutorial_data \ --output_path rnn_tutorial_data This will store the data in 10 shards of TFRecord files with 10000 items per class for the training data and 1000 items per class as eval data. This conversion process is described in more detail in the following. The original QuickDraw data is formatted as ndjson files where each line contains a JSON object like the following: {"word":"cat", "countrycode":"VE", "timestamp":"2017-03-02 23:25:10.07453 UTC", "recognized":true, "key_id":"5201136883597312", "drawing":[ [ [130,113,99,109,76,64,55,48,48,51,59,86,133,154,170,203,214,217,215,208,186,176,162,157,132], [72,40,27,79,82,88,100,120,134,152,165,184,189,186,179,152,131,114,100,89,76,0,31,65,70] ],[ [76,28,7], [136,128,128] ],[ [76,23,0], [160,164,175] ],[ [87,52,37], [175,191,204] ],[ [174,220,246,251], [134,132,136,139] ],[ [175,255], [147,168] ],[ [171,208,215], [164,198,210] ],[ [130,110,108,111,130,139,139,119], [129,134,137,144,148,144,136,130] ],[ [107,106], [96,113] ] ] } For our purpose of building a classifier we only care about the fields "word" and "drawing". While parsing the ndjson files, we process them line by line using a function that converts the strokes from the drawing field into a tensor of size [number of points, 3] containing the differences of consecutive points. This function also returns the class name as a string. def parse_line(ndjson_line): """Parse an ndjson line and return ink (as np array) and classname.""" class_name = sample["word"] inkarray = sample["drawing"] stroke_lengths = [len(stroke[0]) for stroke in inkarray] total_points = sum(stroke_lengths) np_ink = np.zeros((total_points, 3), dtype=np.float32) current_t = 0 for stroke in inkarray: for i in [0, 1]: np_ink[current_t:(current_t + len(stroke[0])), i] = stroke[i] current_t += len(stroke[0]) np_ink[current_t - 1, 2] = 1 # stroke_end # Preprocessing. # 1. Size normalization. lower = np.min(np_ink[:, 0:2], axis=0) upper = np.max(np_ink[:, 0:2], axis=0) scale = upper - lower scale[scale == 0] = 1 np_ink[:, 0:2] = (np_ink[:, 0:2] - lower) / scale # 2. Compute deltas. np_ink = np_ink[1:, 0:2] - np_ink[0:-1, 0:2] return np_ink, class_name Since we want the data to be shuffled for writing we read from each of the category files in random order and write to a random shard. For the training data we read the first 10000 items for each class and for the eval data we read the next 1000 items for each class. This data is then reformatted into a tensor of shape [num_training_samples, max_length, 3]. Then we determine the bounding box of the original drawing in screen coordinates and normalize the size such that the drawing has unit height. Finally, we compute the differences between consecutive points and store these as a VarLenFeature in a tensorflow.Example under the key ink. In addition we store the class_index as a single entry FixedLengthFeature and the shape of the ink as a FixedLengthFeature of length 2. ### Defining the model To define the model we create a new Estimator. If you want to read more about estimators, we recommend this tutorial. To build the model, we: 1. reshape the input back into the original shape - where the mini batch is padded to the maximal length of its contents. In addition to the ink data we also have the lengths for each example and the target class. This happens in the function _get_input_tensors. 2. pass the input through to a series of convolution layers in _add_conv_layers. 3. 
pass the output of the convolutions into a series of bidirectional LSTM layers in _add_rnn_layers. At the end of that, the outputs for each time step are summed up to have a compact, fixed length embedding of the input. 4. classify this embedding using a softmax layer in _add_fc_layers. In code this looks like: inks, lengths, targets = _get_input_tensors(features, targets) final_state = _add_rnn_layers(convolved, lengths) ### _get_input_tensors To obtain the input features we first obtain the shape from the features dict and then create a 1D tensor of size [batch_size] containing the lengths of the input sequences. The ink is stored as a SparseTensor in the features dict which we convert into a dense tensor and then reshape to be [batch_size, ?, 3]. And finally, if targets were passed in we make sure they are stored as a 1D tensor of size [batch_size] In code this looks like this: shapes = features["shape"] lengths = tf.squeeze( tf.slice(shapes, begin=[0, 0], size=[params["batch_size"], 1])) inks = tf.reshape( tf.sparse_tensor_to_dense(features["ink"]), [params["batch_size"], -1, 3]) if targets is not None: targets = tf.squeeze(targets) The desired number of convolution layers and the lengths of the filters is configured through the parameters num_conv and conv_len in the params dict. The input is a sequence where each point has dimensionality 3. We are going to use 1D convolutions where we treat the 3 input features as channels. That means that the input is a [batch_size, length, 3] tensor and the output will be a [batch_size, length, number_of_filters] tensor. convolved = inks for i in range(len(params.num_conv)): convolved_input = convolved if params.batch_norm: convolved_input = tf.layers.batch_normalization( convolved_input, training=(mode == tf.estimator.ModeKeys.TRAIN)) # Add dropout layer if enabled and not first convolution layer. if i > 0 and params.dropout: convolved_input = tf.layers.dropout( convolved_input, rate=params.dropout, training=(mode == tf.estimator.ModeKeys.TRAIN)) convolved = tf.layers.conv1d( convolved_input, filters=params.num_conv[i], kernel_size=params.conv_len[i], activation=None, strides=1, name="conv1d_%d" % i) return convolved, lengths We pass the output from the convolutions into bidirectional LSTM layers for which we use a helper function from contrib. outputs, _, _ = contrib_rnn.stack_bidirectional_dynamic_rnn( cells_fw=[cell(params.num_nodes) for _ in range(params.num_layers)], cells_bw=[cell(params.num_nodes) for _ in range(params.num_layers)], inputs=convolved, sequence_length=lengths, dtype=tf.float32, scope="rnn_classification") see the code for more details and how to use CUDA accelerated implementations. To create a compact, fixed-length embedding, we sum up the output of the LSTMs. We first zero out the regions of the batch where the sequences have no data. mask = tf.tile( [1, 1, tf.shape(outputs)[2]]) zero_outside = tf.where(mask, outputs, tf.zeros_like(outputs)) outputs = tf.reduce_sum(zero_outside, axis=1) The embedding of the input is passed into a fully connected layer which we then use as a softmax layer. tf.layers.dense(final_state, params.num_classes) ### Loss, predictions, and optimizer Finally, we need to add a loss, a training op, and predictions to create the ModelFn: cross_entropy = tf.reduce_mean( tf.nn.sparse_softmax_cross_entropy_with_logits( labels=targets, logits=logits)) # Add the optimizer. 
train_op = tf.contrib.layers.optimize_loss( loss=cross_entropy, global_step=tf.train.get_global_step(), learning_rate=params.learning_rate, # some gradient clipping stabilizes training in the beginning. predictions = tf.argmax(logits, axis=1) return model_fn_lib.ModelFnOps( mode=mode, predictions={"logits": logits, "predictions": predictions}, loss=cross_entropy, train_op=train_op, eval_metric_ops={"accuracy": tf.metrics.accuracy(targets, predictions)}) ### Training and evaluating the model To train and evaluate the model we can rely on the functionalities of the Estimator APIs and easily run training and evaluation with the Experiment APIs: estimator = tf.estimator.Estimator( model_fn=model_fn, model_dir=output_dir, config=config, params=model_params) # Train the model. tf.contrib.learn.Experiment( estimator=estimator, train_input_fn=get_input_fn( mode=tf.contrib.learn.ModeKeys.TRAIN, tfrecord_pattern=FLAGS.training_data, batch_size=FLAGS.batch_size), train_steps=FLAGS.steps, eval_input_fn=get_input_fn( mode=tf.contrib.learn.ModeKeys.EVAL, tfrecord_pattern=FLAGS.eval_data, batch_size=FLAGS.batch_size), min_eval_frequency=1000) Note that this tutorial is just a quick example on a relatively small dataset to get you familiar with the APIs of recurrent neural networks and estimators. Such models can be even more powerful if you try them on a large dataset. When training the model for 1M steps you can expect to get an accuracy of approximately of approximately 70% on the top-1 candidate. Note that this accuracy is sufficient to build the quickdraw game because of the game dynamics the user will be able to adjust their drawing until it is ready. Also, the game does not use the top-1 candidate only but accepts a drawing as correct if the target category shows up with a score better than a fixed threshold.
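As a concrete reference for the storage format described above (ink deltas as a VarLenFeature under the key ink, the ink shape as a FixedLenFeature of length 2, and the class index as a single-entry FixedLenFeature), the following is a minimal sketch of how the written records could be read back with the TensorFlow 1.x tf.data API. It is not taken from the tutorial code; the function name and the glob pattern are illustrative assumptions.

import tensorflow as tf  # assumes TensorFlow 1.x, matching the tutorial's tf.contrib usage

def parse_tfexample_fn(example_proto):
    """Parse one serialized tf.train.Example written by create_dataset.py."""
    feature_spec = {
        "ink": tf.VarLenFeature(dtype=tf.float32),
        "shape": tf.FixedLenFeature([2], dtype=tf.int64),
        "class_index": tf.FixedLenFeature([1], dtype=tf.int64),
    }
    parsed = tf.parse_single_example(example_proto, feature_spec)
    labels = parsed.pop("class_index")
    return parsed, labels

# Hypothetical usage: stream the training shards into a dataset of (features, label) pairs.
dataset = tf.data.TFRecordDataset(
    tf.gfile.Glob("rnn_tutorial_data/training.tfrecord-*"))
dataset = dataset.map(parse_tfexample_fn)

In the tutorial's input function, the sparse ink feature would then be converted to a dense tensor and padded per batch, as described in the _get_input_tensors section above.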
# Is there always a stable position for a rectangular lawn table? It is a common calculus / analysis exercice to give a square lawn table, with legs at the corners, and ask to prove that it is always possible to rotate it so that it stands stably (not necessarily in perfect horizontal), as long as the wobblyness comes from the uneven lawn, not from the legs of the table itself. The proof, which assumes the lawn is suitably nice, uses the intermediate value theorem by rotating the table $90^\circ$ while keeping three legs firmly on the ground and noting that since all that's happened is that the diagonals of the table swapped places, the fourth leg must have passed through the ground at some point. What about a rectangular table? This has the complication that the table only has $180^\circ$ rotational symmetry, so there is no way to rotate it so as to have the diagonals swap position nicely. So, how can we prove that if we keep three legs along the ground as we turn the table around, the last leg must touch the ground at some point? Or is there some different approach that works? We are, of course, making the same assumptions as in the square case about the niceness of the lawn and that the table stands stably on an even floor. • @AlexR Why should that work? If the upper left leg doesn't touch the ground, and you turn the table $180^\circ$, then it has become the lower right leg. But it will still be above the ground, since you can just wobble the table and have the upper left leg up in the air again. Thus we don't have any leverage to use the intermediate value theorem. – Arthur Nov 7 '16 at 20:49 • Hmm, that's right. Sorry, my bad. – AlexR Nov 7 '16 at 20:59 • Hope you agree with the additional tags. If not, you're free to remove them again. – Han de Bruijn Nov 11 '16 at 12:30 • how nice is "suitably nice"? I have seen a paper by Matschke about the squarepeg problem ams.org/notices/201404/rnoti-p346.pdf and he references a paper by Fenn about the table theorem blms.oxfordjournals.org/content/2/1/73.extract, and google search on Fenn table theorem brings more results, e.g. Mark D. Meyerson , Remarks on Fenn's the table theorem'' and Zaks' the chair theorem''. projecteuclid.org/euclid.pjm/1102711107 ..there seem to be different versions with somewhat different assumptions, and I find "the same assumptions" in your question a bit vague. – Mirko Nov 15 '16 at 4:39 • @Mirko To me it seems like it should be "the restriction of the garden to any given circle is the graph of a continuous function", or perhaps, "Any assumption that makes the standard proof for square tables actually work" – Arthur Nov 15 '16 at 8:46 ## 2 Answers It seems the answer is known, so I will just provide a reference and some comments. A google search with the terms Fenn's table theorem rectangle brings up a paper by: Bill Baritompa, Rainer Löwen, Burkard Polster, and Marty Ross, titled Mathematical Table Turning Revisited, https://arxiv.org/pdf/math/0511490.pdf From their introduction: We prove that given any rectangle, any continuous ground and any point on the ground, the rectangle can be positioned such that all its vertices are on the ground and its center is on the vertical through the distinguished point. This is a mathematical existence result and does not provide a practical way of actually finding a balancing position. Towards a proof, they define "a mathematical table" using a rectangle, as follows. 
In the mathematical analysis of the problem, we will first assume that the ground is the graph of a function $g : \Bbb R^2 \to \Bbb R$, and that a mathematical table consists of the four vertices of a rectangle of diameter $2$ whose center is on the $z$-axis. They state further down: This result is a seemingly undocumented corollary of a theorem by Livesay [15], which can be phrased as follows: For any continuous function $f$ defined on the unit sphere, we can position a given mathematical table with all its vertices on the sphere such that $f$ takes on the same value at all four vertices. [15] Livesay, George R. On a theorem of F. J. Dyson. Ann. of Math. 59 (1954), 227–229. https://www.jstor.org/stable/1969689 The following is Theorem 3 from Livesay [15] (where $S_2=S^2$ is the sphere, and $E_1=\Bbb R$ is the real line). Let $f:S_2\to E_1$ be continuous, $0 < \theta \le 90^\circ$. Then there exist two diameters of $S_2$ subtending the angle $\theta$, such that the four end points $y_1, y_2, y_3, y_4$ of these diameters satisfy $f(y_i) = f(y_j), i, j = 1,...,4$. Going back to the paper by Baritompa, Löwen, Polster, and Ross, they use Livesay's Theorem as follows. Given any continuous ground function $g : \Bbb R^2 \to \Bbb R$ and any "mathematical table" (a rectangle with diagonals of length $2$) they construct $f:S^2\to\Bbb R$ by $f(x,y,z)=z-g(x,y)$. Then apply Livesay's Theorem to position the given mathematical table with all its vertices on the sphere and such that $f$ takes on the same value at all four vertices. The picture (as I interpret it) is that the table is the given rectangle (assuming without loss of generality that the diagonals have length $2$), and the above proof positions the vertices of the table on the unit sphere in such a way that if we assume that the legs are vertical, then the table is perfectly balanced on the given ground $g$. This was confusing a bit initially, since I am used to think that the legs ought to be perpendicular to the table (and the table need not be horizontal, so the legs would not need to be vertical). But this is not really a problem (the authors don't seem to comment on it, but it appears to be a triviality ... and actually they do comment later in their paper about mathematical vs real tables), once we balance the table (not necessarily horizontal) so that the legs a vertical, then we could "move" the table so that the bottom of each leg remains fixed on the ground, while the table and the top of each leg move until the legs become perpendicular to the table. (I was a bit lost for a while to see exactly what was proved in which paper and how the result follows, so now that I think I figured it, I will include my interpretation here, for convenience.) I indicated in a comment that this subject also related to the inscribed square problem, so I include a few more links (about both the squarepeg problem, and about versions of the table theorem), in case someone might find them useful. A Survey on the Square Peg Problem, by Benjamin Matschke, Notices of the AMS Volume 61, Number 4, p.346-352. http://www.ams.org/notices/201404/rnoti-p346.pdf In particular note Conjecture 13 (Table problem on $S^2$) there: Suppose $x_1,x_2,x_3,x_4\in S^2\subset\Bbb R^3$ are the vertices of a square that is inscribed in the standard $2$-sphere, and let $h : S^2\to\Bbb R$ be a smooth function. Then there exists a rotation $\rho\in SO(3)$ such that $h(\rho(x_1))=h(\rho(x_2))=h(\rho(x_3))=h(\rho(x_4))$. 
So far this result has been proven only when $x_1,x_2,x_3,x_4$ lie on a great circle (see Dyson). Balancing acts, by Mark Meyerson, Topology proceedings, Volume 6, 1981, pages 59-75 http://www.topo.auburn.edu/tp/reprints/v06/tp06107s.pdf Remarks on Fenn's "the table theorem" and Zaks' "the chair theorem", by Mark D. Meyerson, Pacific Journal of Mathematics, Volume 110, Number 1 (1984), 167-169, https://projecteuclid.org/euclid.pjm/1102711107 The Table Theorem, by Roger Fenn, Bull. London Math. Soc. (1970) 2 (1): 73-76, doi: 10.1112/blms/2.1.73 http://blms.oxfordjournals.org/content/2/1/73.extract Dyson, F. J. Continuous functions defined on spheres. Ann. of Math. 54 (1951), 534–536. https://www.jstor.org/stable/1969487 Let points A,B and C be constantly on the floor and D in the air, then center E of the rectangle has its lowest point for various positions of ABC. Lets call that position $A_1B_1C_1D_1E_1$ If we rotate rectangle so A get in the place of $B_1$ then E is bellow $E_1$ (cos A and C are on the floor and $B_1$, $D_1$ not) and we have contradiction. • naaah, wrong answer, but then also the proof for square doesn't stand – Djura Marinkov Nov 15 '16 at 19:50
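For reference, the intermediate value argument for the square table that the question alludes to can be sketched as follows, under the same niceness assumptions on the lawn. Keep three legs on the ground and let $h(\theta)$ be the signed height of the fourth leg above the lawn after rotating the table by $\theta$ about its centre, so that $h$ is continuous. Rotating by $90^\circ$ swaps the two diagonals, so the hovering leg trades places with a grounded one:

$h(0) \ge 0, \qquad h(90^\circ) \le 0 \quad\Longrightarrow\quad \exists\,\theta^*\in[0^\circ,90^\circ]:\ h(\theta^*)=0,$

i.e. there is a rotation at which all four legs touch the lawn. The rectangular case breaks exactly this step, since a $90^\circ$ rotation no longer maps the table to itself, which is why the results cited above are needed.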
Compute hierarchical or kmeans cluster analysis and return the group assignment for each observation as vector. ## Usage cluster_analysis( x, n = NULL, method = "kmeans", include_factors = FALSE, standardize = TRUE, verbose = TRUE, distance_method = "euclidean", hclust_method = "complete", kmeans_method = "Hartigan-Wong", dbscan_eps = 15, iterations = 100, ... ) ## Arguments x A data frame. n Number of clusters used for supervised cluster methods. If NULL, the number of clusters to extract is determined by calling n_clusters(). Note that this argument does not apply for unsupervised clustering methods like dbscan, hdbscan, mixture, pvclust, or pamk. method Method for computing the cluster analysis. Can be "kmeans" (default; k-means using kmeans()), "hkmeans" (hierarchical k-means using factoextra::hkmeans()), pam (K-Medoids using cluster::pam()), pamk (K-Medoids that finds out the number of clusters), "hclust" (hierarchical clustering using hclust() or pvclust::pvclust()), dbscan (DBSCAN using dbscan::dbscan()), hdbscan (Hierarchical DBSCAN using dbscan::hdbscan()), or mixture (Mixture modeling using mclust::Mclust(), which requires the user to run library(mclust) before). include_factors Logical, if TRUE, factors are converted to numerical values in order to be included in the data for determining the number of clusters. By default, factors are removed, because most methods that determine the number of clusters need numeric input only. standardize Standardize the dataframe before clustering (default). verbose Toggle warnings and messages. distance_method Distance measure to be used for methods based on distances (e.g., when method = "hclust" for hierarchical clustering. For other methods, such as "kmeans", this argument will be ignored). Must be one of "euclidean", "maximum", "manhattan", "canberra", "binary" or "minkowski". See dist() and pvclust::pvclust() for more information. hclust_method Agglomeration method to be used when method = "hclust" or method = "hkmeans" (for hierarchical clustering). This should be one of "ward", "ward.D2", "single", "complete", "average", "mcquitty", "median" or "centroid". Default is "complete" (see hclust()). kmeans_method Algorithm used for calculating kmeans cluster. Only applies, if method = "kmeans". May be one of "Hartigan-Wong" (default), "Lloyd" (used by SPSS), or "MacQueen". See kmeans() for details on this argument. dbscan_eps The 'eps' argument for DBSCAN method. See n_clusters_dbscan(). iterations The number of replications. ... Arguments passed to or from other methods. ## Value The group classification for each observation as vector. The returned vector includes missing values, so it has the same length as nrow(x). ## Details The print() and plot() methods show the (standardized) mean value for each variable within each cluster. Thus, a higher absolute value indicates that a certain variable characteristic is more pronounced within that specific cluster (as compared to other cluster groups with lower absolute mean values). Clusters classification can be obtained via print(x, newdata = NULL, ...). ## Note There is also a plot()-method implemented in the see-package. ## References • Maechler M, Rousseeuw P, Struyf A, Hubert M, Hornik K (2014) cluster: Cluster Analysis Basics and Extensions. R package. • n_clusters() to determine the number of clusters to extract. • cluster_discrimination() to determine the accuracy of cluster group classification via linear discriminant analysis (LDA). 
• check_clusterstructure() to check suitability of data for clustering. • https://www.datanovia.com/en/lessons/ ## Examples set.seed(33) # K-Means ==================================================== rez <- cluster_analysis(iris[1:4], n = 3, method = "kmeans") rez # Show results #> # Clustering Solution #> #> The 3 clusters accounted for 68.16% of the total variance of the original data. #> #> Cluster | n_Obs | Sum_Squares | Sepal.Length | Sepal.Width | Petal.Length | Petal.Width #> --------------------------------------------------------------------------------------- #> 1 | 21 | 23.16 | -1.32 | -0.37 | -1.13 | -1.11 #> 2 | 33 | 17.33 | -0.81 | 1.31 | -1.28 | -1.22 #> 3 | 96 | 149.26 | 0.57 | -0.37 | 0.69 | 0.66 #> #> # Indices of model performance #> #> Sum_Squares_Total | Sum_Squares_Between | Sum_Squares_Within | R2 #> -------------------------------------------------------------------- #> 596.000 | 406.249 | 189.751 | 0.682 #> #> # You can access the predicted clusters via 'predict()'. #> predict(rez) # Get clusters #> [1] 2 1 1 1 2 2 2 2 1 1 2 2 1 1 2 2 2 2 2 2 2 2 2 2 2 1 2 2 2 1 1 2 2 2 1 1 2 #> [38] 2 1 2 2 1 1 2 2 1 2 1 2 2 3 3 3 3 3 3 3 1 3 3 1 3 3 3 3 3 3 3 3 3 3 3 3 3 #> [75] 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 1 3 3 3 3 1 3 3 3 3 3 3 3 3 3 3 3 3 #> [112] 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 #> [149] 3 3 summary(rez) # Extract the centers values (can use 'plot()' on that) #> Cluster Sepal.Length Sepal.Width Petal.Length Petal.Width #> 1 1 -1.3232208 -0.3718921 -1.1334386 -1.1111395 #> 2 2 -0.8135055 1.3145538 -1.2825372 -1.2156393 #> 3 3 0.5690971 -0.3705265 0.6888118 0.6609378 if (requireNamespace("MASS", quietly = TRUE)) { cluster_discrimination(rez) # Perform LDA } #> # Accuracy of Cluster Group Classification via Linear Discriminant Analysis (LDA) #> #> Group Accuracy #> 1 100.00% #> 2 71.43% #> 3 100.00% #> #> Overall accuracy of classification: 96.00% #> # Hierarchical k-means (more robust k-means) if (require("factoextra", quietly = TRUE)) { rez <- cluster_analysis(iris[1:4], n = 3, method = "hkmeans") rez # Show results predict(rez) # Get clusters } #> Welcome! Want to learn more? See two factoextra-related books at https://goo.gl/ve3WBa #> [1] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 #> [38] 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 2 2 2 3 2 2 2 2 2 2 2 2 3 2 2 2 2 3 2 2 2 #> [75] 2 3 3 3 2 2 2 2 2 2 2 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 3 2 3 3 3 3 2 3 3 3 3 #> [112] 3 3 2 2 3 3 3 3 2 3 2 3 2 3 3 2 3 3 3 3 3 3 2 2 3 3 3 2 3 3 3 2 3 3 3 2 3 #> [149] 3 2 # Hierarchical Clustering (hclust) =========================== rez <- cluster_analysis(iris[1:4], n = 3, method = "hclust") rez # Show results #> # Clustering Solution #> #> The 3 clusters accounted for 74.35% of the total variance of the original data. #> #> Cluster | n_Obs | Sum_Squares | Sepal.Length | Sepal.Width | Petal.Length | Petal.Width #> --------------------------------------------------------------------------------------- #> 1 | 49 | 40.12 | -1.00 | 0.90 | -1.30 | -1.25 #> 2 | 24 | 18.65 | -0.40 | -1.36 | 0.06 | -0.04 #> 3 | 77 | 94.08 | 0.76 | -0.15 | 0.81 | 0.81 #> #> # Indices of model performance #> #> Sum_Squares_Total | Sum_Squares_Between | Sum_Squares_Within | R2 #> -------------------------------------------------------------------- #> 596.000 | 443.143 | 152.857 | 0.744 #> #> # You can access the predicted clusters via 'predict()'. 
#> predict(rez) # Get clusters #> [1] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 #> [38] 1 1 1 1 2 1 1 1 1 1 1 1 1 3 3 3 2 3 2 3 2 3 2 2 3 2 3 3 3 3 2 2 2 3 3 3 3 #> [75] 3 3 3 3 3 2 2 2 2 3 3 3 3 2 3 2 2 3 2 2 2 3 3 3 2 2 3 3 3 3 3 3 2 3 3 3 3 #> [112] 3 3 3 3 3 3 3 3 2 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 #> [149] 3 3 # K-Medoids (pam) ============================================ if (require("cluster", quietly = TRUE)) { rez <- cluster_analysis(iris[1:4], n = 3, method = "pam") rez # Show results predict(rez) # Get clusters } #> [1] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 #> [38] 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 3 3 3 2 3 3 3 3 3 3 3 3 2 3 3 3 3 3 3 3 3 #> [75] 3 2 2 2 3 3 3 3 3 3 3 3 2 3 3 3 3 3 3 3 3 3 3 3 3 3 2 3 2 2 2 2 3 2 2 2 2 #> [112] 2 2 3 2 2 2 2 2 3 2 3 2 3 2 2 3 3 2 2 2 2 2 3 3 2 2 2 3 2 2 2 3 2 2 2 3 2 #> [149] 2 3 # PAM with automated number of clusters if (require("fpc", quietly = TRUE)) { rez <- cluster_analysis(iris[1:4], method = "pamk") rez # Show results predict(rez) # Get clusters } #> [1] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 #> [38] 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 #> [75] 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 #> [112] 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 #> [149] 2 2 # DBSCAN ==================================================== if (require("dbscan", quietly = TRUE)) { # Note that you can assimilate more outliers (cluster 0) to neighbouring # clusters by setting borderPoints = TRUE. rez <- cluster_analysis(iris[1:4], method = "dbscan", dbscan_eps = 1.45) rez # Show results predict(rez) # Get clusters } #> #> Attaching package: ‘dbscan’ #> The following object is masked from ‘package:fpc’: #> #> dbscan #> [1] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 #> [38] 1 1 1 1 0 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 #> [75] 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 #> [112] 2 2 2 2 2 2 0 0 2 2 2 2 2 2 2 2 2 2 2 2 0 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 #> [149] 2 2 # Mixture ==================================================== if (require("mclust", quietly = TRUE)) { library(mclust) # Needs the package to be loaded rez <- cluster_analysis(iris[1:4], method = "mixture") rez # Show results predict(rez) # Get clusters } #> Package 'mclust' version 5.4.10 #> Type 'citation("mclust")' for citing this R package in publications. #> [1] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 #> [38] 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 #> [75] 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 #> [112] 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 #> [149] 2 2
Online and Random-order Load Balancing Simultaneously Marco Molinaro (PUC-Rio) We consider the problem of online load balancing under $\ell_p$-norms: sequential jobs need to be assigned to one of the machines and the goal is to minimize the $\ell_p$-norm of the machine loads. This generalizes the classical problem of scheduling for makespan minimization (case $\ell_{\infty}$) and has been thoroughly studied. We provide algorithms with simultaneously optimal guarantees for the worst-case model as well as for the random-order (i.e. secretary) model, where an arbitrary set of jobs comes in random order. A crucial component for this result that we will try to highlight in the talk is a connection between smoothings of $\ell_p$ norms, the so-called Online Linear Optimization problem, and the expected norm of sums of random vectors.
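To make the online model concrete, here is a small Python sketch of the natural greedy baseline: assign each arriving job to whichever machine keeps the $\ell_p$ norm of the load vector smallest. This is only an illustration of the problem setup, not the algorithm from the talk; the function name and example data are made up.

def greedy_lp_assign(jobs, num_machines, p):
    """Illustrative greedy heuristic for online l_p-norm load balancing.

    Each arriving job (a positive size) is assigned to the machine that minimizes
    the resulting l_p norm of the machine-load vector. Comparing sums of p-th
    powers is equivalent to comparing l_p norms, so no p-th root is needed.
    """
    loads = [0.0] * num_machines
    assignment = []
    for job in jobs:
        best = min(
            range(num_machines),
            key=lambda m: sum(
                (loads[i] + (job if i == m else 0.0)) ** p
                for i in range(num_machines)
            ),
        )
        loads[best] += job
        assignment.append(best)
    return assignment, loads

# Example: six jobs, three machines, l_2 norm.
print(greedy_lp_assign([4, 2, 7, 1, 5, 3], num_machines=3, p=2))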
## medical terminology ##### This topic has expert replies Legendary Member Posts: 510 Joined: 07 Aug 2014 Thanked: 3 times Followed by:5 members ### medical terminology by j_shreyans » Sat May 23, 2015 10:33 am Many of these gene flaws - there are plenty of them, with names like BCR-ABL - are relative newcomers to medical terminology, as are a majority of the new anti-tumor drugs, still in early testing, that are aimed at them. A)medical terminology, as are a majority of the new anti-tumor drugs, still in early testing, that are aimed at them. B)medical terminology like a majority of the new anti-tumor drugs that are still in early testing, and aimed at them. C)medical terminology, as are a majority of the new anti-tumor drugs that are still in early testing, aimed at them. D)medical terminology like a majority of the new anti-tumor drugs, still in early testing, that are aimed at them. E)medical terminology, as are a majority of the new anti-tumor drugs, still in early testing, and aimed at them. OAA Experts please explain and also i want to know as are is correct? GMAT Instructor Posts: 1035 Joined: 17 Dec 2010 Location: Los Angeles, CA Thanked: 474 times Followed by:364 members by VivianKerr » Sat May 23, 2015 12:26 pm Is this from a reputable source? It seems odd to me that "to medical" is underlined since it appears in every answer choice. This obviously isn't an official SC. That said, (B) and (E) are out because it isn't logical for "them" to refer to "gene flaws." The sentence would not make logical sense, and the usage of "and" implies that "them" is "gene flaws." So it's between (A), (C), and (D). There is nothing wrong with "as are" because it is a common construction used to make a comparison. Example: My eyes are blue, as are my sister's. Besides Meaning, this sentence is testing Modification. In (C), it's not clear what "aimed at them" is modifying, and in (D), the comma before like is missing . By process of elimination, it must be [spoiler](A)[/spoiler]. Vivian Kerr GMAT Rockstar, Tutor https://www.GMATrockstar.com https://www.yelp.com/biz/gmat-rockstar-los-angeles Former Kaplan and Grockit instructor, freelance GMAT content creator, now offering affordable, effective, Skype-tutoring for the GMAT at $150/hr. Contact: [email protected] Thank you for all the "thanks" and "follows"! Master | Next Rank: 500 Posts Posts: 111 Joined: 07 Mar 2015 Thanked: 8 times Followed by:1 members by binit » Sat May 23, 2015 8:55 pm Hi Vivian, I understood your POE, but does A express correctly what it intended to? E.g. what that are aimed at them is modifying? 'testing?' If yes, the comma before that is wrong, I think. And as u said, them referring to Gene flaws is wrong, I really don't understand how A corrects that error. I am bit confused with many issues with A. I believe the sentence could have been written much better and concise. Pls help. ~Binit.[/b] GMAT Instructor Posts: 1035 Joined: 17 Dec 2010 Location: Los Angeles, CA Thanked: 474 times Followed by:364 members by VivianKerr » Sun May 24, 2015 3:57 pm Hey Binit, You raise a good point. Could this sentence have been written in a more clear, concise manner? Absolutely. But sometimes with SC, we have to deal with a "best of the worst" strategy. These are the facts: 1) 1 answer choice is correct 2) 4 are wrong The 4 that are "wrong" are always wrong for one of two reasons: illogical meaning or GMAT grammar error. The only time we should consider "style" such as passive/active voice, concision, wordiness, redundancy, etc. 
is if we cannot find a MEANING or a GRAMMAR error first. Style is subjective, so if you had doubts about whether (A) had a logical meaning, I would write something like: (A) illog. M? Then move on to (B), leaving (A) uncrossed off your scratch pad. Keep ALL answer choices you cannot find a grammar or a DEFINITE meaning error in, then if you have 2 or more choices left, you can examine them for style issues. Make sense? Grammar/Meaning first. Then Style. Usually we'll be able to eliminate 4 based on grammar and won't ever have to worry about style! Vivian Kerr GMAT Rockstar, Tutor https://www.GMATrockstar.com https://www.yelp.com/biz/gmat-rockstar-los-angeles Former Kaplan and Grockit instructor, freelance GMAT content creator, now offering affordable, effective, Skype-tutoring for the GMAT at$150/hr. Contact: [email protected] Thank you for all the "thanks" and "follows"! Master | Next Rank: 500 Posts Posts: 111 Joined: 07 Mar 2015 Thanked: 8 times Followed by:1 members by binit » Sun May 24, 2015 9:16 pm Thanks a lot Vivian, I was initially worried about this particular question first, but, u have actually given me a strategy to attack tough SCs. I would like to apply that right form this question itself: 1. Grammar scan: there are 2 major issues: a. as vs like b. the modifier at the end 'Like' doesn't seem good since gene flaws can't be LIKE new anti-tumor drugs So, B and D gone.ruled out. A,C,E remaining. In A: drugs, still in early testing, that are aimed at them.- grammatically no error. still in early testing modifies DRUGS and without modifier we have: drugs that are aimed at them which is GOOD. In C: testing, aimed at them. doubtful because of the COMMA, I think it would have been GRAMMATICALLY correct w/o that COMMA. (pls correct me if needed) In E: Modifier is OKAY. But cut-off the fluff and we have: drugs and aimed at them - Nonsensical. Now we singled out A, so do not need to scan stylistically. Vivian, kindly give me your feedback about my approach to this problem. I ll highly appreciate any suggestions. ~Binit. GMAT Instructor Posts: 1035 Joined: 17 Dec 2010 Location: Los Angeles, CA Thanked: 474 times Followed by:364 members by VivianKerr » Mon May 25, 2015 12:35 pm Hey Binit, My approach is similar, except I don't try to find everything wrong with each choice. I don't do a "Grammar Scan" of all choices. I do it like this: Step 1 - Read Choice (A) and Identify One Grammar or Meaning Error Since we know that a sentence with a grammatical error or an illogical meaning can NEVER be correct on the GMAT, try to identify and name ONE specific error you see. It may seem like there are several things "wrong" with the sentence, so choose the error you feel the most confident about, and write it down on your scratch paper. For example, maybe you think the meaning might be illogical, the sentence overall is awkwardly constructed, and there is an incorrect comparison. You might choose to go with the comparison error first. What if there is no error in (A), as in this question? If (A) seems correct to you, or you cannot spot a grammar or meaning error, feel free to search for a style error. If you feel there is one, such as redundancy or passive voice, make a note of it next to letter "A" on your scratch pad, but DO NOT CROSS IT OFF YET. Remember, a style error doesn't make an option automatically incorrect. It only makes it less likely to be correct. Once you've done this, move on to (B) and look for an identifiable grammar or meaning error. 
If (A) is correct, then (B) must contain an error. Step 2 - Scan the Other Choices; Eliminate Error #1 Do any of the other 4 choices contain that same error? If so, quickly cross out Step 3 - Move to the Next Available Choice; Look for Error #2 If you have more than one choice left, repeat the process. Move to the next choice remaining and look for an identifiable grammar or meaning error. If none exists, feel free to look for a style error and make a note of it next to the letter on your scratch pad. Once you've identified a grammar or meaning error, cross off the letter of that answer choice, and the letters of any other answer choices that contain the same error. Repeat as needed. Step 4 - Stuck Between Two? Eliminate Based on Style On a difficult Sentence Correction, you may find yourself narrowed down to two answer choices that both seem grammatically correct and both have logical meanings. Which one does the GMAT prefer? The answer: the clearest, most concise option. If one choice appears to have awkwardness or wordiness or passive voice, select the other option. All grammar being equal, the GMAT rewards clarity. So my scratch work ends up looking something like: You're right that E has a non-sensical meaning but C would NOT be grammatically correct without the comma. The simple version of C without the comma would read: Many of these flaws are newcomers to MT, AS are a majority of X that are still in testing AIMED AT THEM. The "aimed at them" still comes out of nowhere since the "AS" sets up a comparison that logically ENDS after the modifier "that are still in testing." If there was a word like "and" before "aimed at them," then you could say that (C) was grammatically correct. Hope this helps! Vivian Kerr GMAT Rockstar, Tutor https://www.GMATrockstar.com https://www.yelp.com/biz/gmat-rockstar-los-angeles Former Kaplan and Grockit instructor, freelance GMAT content creator, now offering affordable, effective, Skype-tutoring for the GMAT at \$150/hr. Contact: [email protected] Thank you for all the "thanks" and "follows"! GMAT Instructor Posts: 15521 Joined: 25 May 2010 Location: New York, NY Thanked: 13060 times Followed by:1894 members GMAT Score:790 by GMATGuruNY » Tue May 26, 2015 2:41 am j_shreyans wrote: i want to know as are is correct? Many comparisons employ ELLIPSIS: the omission of words whose presence is implied. OA: Many of these gene flaws are relatively newcomers to medical terminology, as are a majority of the new anti-tumor drugs [relative newcomers to medical terminology]. Here, the words in brackets are omitted, but their presence is implied. Mitch Hunt Private Tutor for the GMAT and GRE [email protected] If you find one of my posts helpful, please take a moment to click on the "UPVOTE" icon. Available for tutoring in NYC and long-distance. For more information, please email me at [email protected]. Student Review #1 Student Review #2 Student Review #3 Legendary Member Posts: 979 Joined: 14 Apr 2009 Location: Hyderabad, India Thanked: 49 times Followed by:12 members GMAT Score:700 by bubbliiiiiiii » Tue May 26, 2015 5:07 am Is COMMA + that in A correct? Regards, Pranay GMAT Instructor Posts: 15521 Joined: 25 May 2010 Location: New York, NY Thanked: 13060 times Followed by:1894 members GMAT Score:790 by GMATGuruNY » Tue May 26, 2015 11:31 am bubbliiiiiiii wrote:Is COMMA + that in A correct? ...as are a majority of the new anti-tumor drugs, still in early testing, that are aimed at them. Here, the appearance of COMMA + that is misleading. 
The two commas have no relationship to the following that. Rather, their purpose is to set off the non-essential modifier in red. If we remove this non-essential modifier, we get: as are a majority of the new anti-tumor drugs that are aimed at them. There is precedent for this sort of construction on the GMAT. An official SC: Scientists have identified an asteroid, 2000 BF19, that is about half a mile wide. Here, the non-essential modifier in red separates the that-clause from its referent (an asteroid). If we remove this non-essential modifier, we get: Scientists have identified an asteroid that is about half a mile wide. Mitch Hunt Private Tutor for the GMAT and GRE [email protected] If you find one of my posts helpful, please take a moment to click on the "UPVOTE" icon. Available for tutoring in NYC and long-distance. For more information, please email me at [email protected]. Student Review #1 Student Review #2 Student Review #3 Senior | Next Rank: 100 Posts Posts: 51 Joined: 12 Jul 2015 Thanked: 4 times by Sun Light » Tue Jul 14, 2015 11:03 am is my reasoning fine? any other way to kill 'D' and 'E'? Other than the pronoun logic... Many of these gene flaws - there are plenty of them, with names like BCR-ABL - are relative newcomers to medical terminology, as are a majority of the new anti-tumor drugs, still in early testing, that are aimed at them. A medical terminology, as are a majority of the new anti-tumor drugs, still in early testing, that are aimed at them. Not a good feeling though, this is the best among all the options. B medical terminology like a majority of the new anti-tumor drugs that are still in early testing, and aimed at them. ", and " is the parallel marker. To the right of it we have a past participle "aimed at them" and to the left of ", and" we have a subordinate clause "that are still in early testing". Not parallel. C medical terminology, as are a majority of the new anti-tumor drugs that are still in early testing, aimed at them. "aimed at them" is a noun modifier, modifying "testing". Incorrect. D medical terminology like a majority of the new anti-tumor drugs, still in early testing, that are aimed at them. comma + like.. E medical terminology, as are a majority of the new anti-tumor drugs, still in early testing, and aimed at them. "and" is a parallel marker. "Aimed at them" is not parallel to anything. • Page 1 of 1
Analyzing data is the process of interpreting the meaning of the data collected, organized and displayed in the form of table or graphs. The process involves finding patterns, similarities, relationships etc. Analyzing data is not simple. It is a tedious work and little time consuming. Data analysis is important to make predictions and inferences based on the data and it is a critical skill to develop. It helps in suggesting conclusions and decision making, and is crucial to the development of theories and new ideas. ## How to Analyze Data? Analyzing data requires attention to detail and a relaxed frame of mind. Objective should be very specific and a clear idea of what evaluation questions you want the data to answer and, the choice of appropriate statistical method to be used should be known. When the data is assumed to follow a normal distribution in each group, parametric method is to be used. Non parametric test or distribution free methods are used when the data doesn't follow normal distribution. Analysis of data is based on three decision criteria - number of groups, data type and assumption of normal distribution (whether the data is normal or not). ## Analyzing Qualitative Data Qualitative data consists of words and observation and not numbers and it involves identification, interpretation and examining patterns and themes in textual data and determines how these patterns help answer the research questions. Qualitative or narrative data is conducted to organize the data into categories. It is a collection of random, unconnected statements and is considered to be objective. Qualitative data depends on people's opinions, assumptions, knowledge (therefore biases) than that of quantitative data. Researcher chooses to measure the accuracy of the observation where the analyst relates these responses and analyses using statistical techniques. ## Analyzing Quantitative Data Quantitative data are directly collected as numbers and are usually subjected to statistical procedures such as calculating the mean, frequency distribution, standard deviation etc. On higher levels of statistical analysis t-test, factor analysis, Analysis of variance, regression can also be conducted on the data. Quantitative data provides quantifiable and easy to understand results and can be analyzed in different ways. Quantitative data has four levels of measurement. Nominal -Nominal refers to categorically discrete data. For example, name of a book, type of car you drive. Nominal sounds like name so it should be easy to remember. Ordinal - A set of data is said to be ordinal if the observations belonging to it can be ranked. It is possible to count and order but not measure ordinal data.. Example: T-shirt size (large, medium, small). Interval - Measurements where the difference between values is measured by a fixed scale and is meaningful. Data is continuous and has a logical order and has a standard difference between values. Example: Temperature, Money, Education (In years) Ratio - Ratio variables are numbers with some base value. Ratio responses will have order and spacing where multiplication makes sense too. Example: Height, weight. Once levels of measurement have been identified based on the data, appropriate statistical methods can then be used. ## Analyzing Categorical Data When the data is collected in categories, we record counts. The categorical variables are of two types, nominal and ordinal. 
Analysis of categorical data involves the use of data tables and is a two way table where the number of observations that fall into each group for two variables will be recorded. One is divided into rows and the other is divided into columns. Another important tool for analyzing categorical data is segmented bar graph. ## Analyzing Likert Scale Data Likert scale is a psychometric scale commonly used in questionnaires and is the most widely used scale in survey research. Data analysis decision for Likert items is usually made at the questionnaire development stage. When the Likert questions are unique and stand alone, they are considered as Likert type items. Frequencies, modes, medians are the appropriate statistical tools to be used for analysis. When a series of questions are combined measuring a particular trait, then it is a Likert scale. Mean and standard deviation are used to describe the scale. Once the decision between Likert - type and Likert scale has been made, the decision on the appropriate statistics will fall into place. Given below is an example for a Likert scale asked in a survey. Respondents specify their level of agreement to a statement. Statement: Ice-cream is good for breakfast 1. Strongly disagree 2. Disagree 3. Neither agree nor disagree 4. Agree 5. Strongly agree Likert scaling is a bipolar scaling method measuring either positive or negative response to a statement. ## Box-And-Whisker Plot Box and whisker plot is a histogram like method of displaying data and depicts groups of numerical data through their quartiles. The lines extending vertically from the boxes indicate variability outside the upper and lower quartiles. Outliers can be plotted as individual points. Box plots are non parametric and it can easily display differences between populations without making any assumptions of the statistical distribution. Box plots can be drawn either vertically or horizontally. Box and whisker plot displays the five point summary - median, first quartile, third quartile, maximum and minimum. ## Analyzing Survey Data Analyzing survey data consists of a number of interrelated processes that are intended to summarize, arrange and transform data into information. Analyzing survey data involves editing, analysis, reporting. It mainly depends on the sample size survey's research design and the quality of data. Commonly used methods in analysis in surveys are like logistic regression, descriptive statistics, regression modelling, correlation, regression etc., Descriptive statistics can be used for variance estimation. ## Quartile Deviation Quartile deviation is half the difference between the upper and lower quartiles in a distribution and is a measure of the spread through the middle half of a distribution. Quartile deviation ignores the observation on the tails and is not influenced by extremely high or extremely low scores. It is an ordinal statistic and is often used in conjunction with the median Quartile deviation is given by Quartile deviation = $\frac{Q_{3} - Q_{1}}{2}$ where, Q$_{1}$ : First quartile Q$_{3}$ : Third quartile Q$_{3}$ - Q$_{1}$ : Interquartile range Quartile deviation is a slightly better measure of absolute dispersion than the range. When different samples from a population are taken and their quartile deviations are calculated, their values are likely to be sufficiently different. This is known as sampling fluctuation. Quartile deviation calculated from sample does not help to draw any conclusion about the quartile deviation in the population. 
It can be used for comparing the dispersion in two or more sets of data. Given below is an example of finding the quartile deviation.

### Solved Example

Question: Rice production (in kg) per 20 acres for a set of 9 observations is: 1230, 1150, 1040, 2310, 1453, 1755, 1752, 1900, 1885. Find the quartile deviation for the given data.

Solution: Given n = 9. Quartile deviation (Q.D.) is given by the formula

Quartile deviation = $\frac{Q_{3} - Q_{1}}{2}$

First arrange the observations in ascending order: 1040, 1150, 1230, 1453, 1752, 1755, 1885, 1900, 2310.

To find the first quartile ($Q_{1}$):

$Q_{1}$ = value of the $\frac{n+1}{4}$ th item = value of the $\frac{9+1}{4}$ = 2.5th item

= 2nd item + 0.5 (3rd item − 2nd item) = 1150 + 0.5 (1230 − 1150) = 1190

$Q_{3}$ = value of the $\frac{3(n+1)}{4}$ th item = value of the 7.5th item

= 7th item + 0.5 (8th item − 7th item) = 1885 + 0.5 (1900 − 1885) = 1892.5

Now, quartile deviation = $\frac{1892.5 - 1190}{2}$

Therefore, quartile deviation = 351.25
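A short Python sketch of the same computation follows, using the (n + 1)/4 position rule from the worked example above. Note that library functions such as numpy.percentile default to a slightly different interpolation convention, so the rule is implemented directly here.

def quartile_deviation(values):
    """Quartile deviation (Q3 - Q1) / 2 using the (n + 1)/4 position convention,
    with linear interpolation between ranks."""
    data = sorted(values)
    n = len(data)

    def quantile_at(position):
        # position is 1-based; interpolate between the surrounding ranks.
        lower = int(position)
        frac = position - lower
        if lower >= n:
            return data[-1]
        return data[lower - 1] + frac * (data[lower] - data[lower - 1])

    q1 = quantile_at((n + 1) / 4)
    q3 = quantile_at(3 * (n + 1) / 4)
    return (q3 - q1) / 2

rice = [1230, 1150, 1040, 2310, 1453, 1755, 1752, 1900, 1885]
print(quartile_deviation(rice))  # 351.25, matching the worked example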
# Attack types

For the various bonuses offered according to which attack method is selected, see Attack style.

The attack types are the different ways in which a player or NPC may attack another player, another NPC, or a piece of interactive scenery that can be attacked. The five types are stab, slash, crush, magic, and ranged. Equipment gives players attack bonuses towards and defence bonuses against each of the attack styles.

## The bonuses

The following are detailed descriptions of the attack bonuses:

• Crush: Crush is the primary attack type when wielding maces, warhammers, and maul-class weapons, and is used secondarily by two-handed swords, spears and battleaxes. It is also the only attack type that can be used when fighting unarmed. Crush works best against platebodies. Most small monsters, such as spiders, or hard-shelled monsters, are typically also more vulnerable to crush attacks. Crush attacks are less effective if the enemy has flexible skin/armour or is large, e.g., giants.

• Magic: The Magic attack type can only be used when casting offensive spells, and it is the only type that can be selected while doing so. The maximum hit of any magic spell is determined by the spell itself (only the Staff of the dead can now increase the maximum hit of magic attacks), and the accuracy of the spell is determined by the caster's magic attack bonus, 70% of the opponent's Magic level, and 30% of the opponent's Defence level. Magic is very effective against metal armour, but is rarely used against opponents that deflect magic fairly easily, like dragonhide armour or dragons themselves.

• Ranged: The Ranged attack type hurls projectiles (typically Arrows or Bolts) at an opponent from a distance. It requires a ranged weapon, which may be different (typically a Bow or Crossbow) from the projectile, although some projectiles (such as Knives or Darts) do not need a separate weapon and are wielded directly. If the weapon and the projectile are separate, the projectile is called "ammunition." Ranged attacks are best used against foes that use magical attacks or are large, but are considerably less effective against enemies wearing metal armour.

## Formula

The formula for probability of landing an attack is currently unknown. The following formula gives only an estimate of the actual probability of hitting, with results above 1 rounded down. The constant A is unknown, but is most likely below 1.

$A \cdot \frac {\text{Attack Bonus} \cdot \text{Current Attack Level}}{\text{Defence Bonus} \cdot \text{Current Defence Level}}$
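As a quick illustration of the estimate above, here is a hedged Python sketch. The constant A is unknown (the article only says it is most likely below 1), so it is left as a parameter, and the example numbers are made up.

def estimated_hit_chance(attack_bonus, attack_level, defence_bonus, defence_level, a=1.0):
    """Rough hit-probability estimate from the formula above; results above 1 are capped at 1."""
    chance = a * (attack_bonus * attack_level) / (defence_bonus * defence_level)
    return min(chance, 1.0)

# Hypothetical numbers, purely for illustration.
print(estimated_hit_chance(attack_bonus=80, attack_level=70,
                           defence_bonus=120, defence_level=60, a=0.5))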
# Why is it harder to build quantum computers than classical computers?

31

Is it because we do not know exactly how to build quantum computers (and how they would have to work), or do we know how to build them in theory but lack the tools to actually do so in practice? Is it a mixture of the two? Are there other reasons?

Why is it harder to build a GPU than to build a CPU? Same difference. A Quantum computer is not a stand-alone computer. It's a co-processor to a host computer, just like what your GPU is inside your current PC. The two videos starting at youtu.be/PN7mPYcWFKg are very insightful for beginners like us. Mark Jeronimus

2 @MarkJeronimus it's not the same difference. A GPU is basically a whole lot of very simple CPUs running in parallel. It does have tight restrictions on how memory access can be performed etc., but that just makes it more difficult to program, not to build.

3 Classical computers don't break if you look at them. Mark

@leftaroundabout It's not the same difference now, but I'd argue it was with the very first 3D accelerators (and to some extent, even 3D software rendering). A huge part of the problem is simply exploring new technology, having to build up all new tools and approaches. Once someone found a good way of making 3D accelerators, it became a lot more "mundane" (though do keep in mind that most makers of 3D accelerators are now out of business). Granted, the "quantum computer" is an even bigger challenge (requiring a lot more entirely new tools and approaches), but it's not fundamentally different Luaan

1 The two are so different they can't be compared. It's harder to build because it's a heck of a lot newer and a heck of a lot more complicated. Both of them being called 'computer' doesn't mean they're comparable in nature. Mast

Answers:

34

We know exactly, in theory, how to construct a quantum computer. But that is intrinsically more difficult than to construct a classical computer.

In a classical computer, you do not have to use a single particle to encode bits. Instead, you might say that anything less than a billion electrons is a 0 and anything more than that is a 1, and aim for, say, two billion electrons to encode a 1 normally. That makes you inherently fault-tolerant: Even if there are hundreds of millions of electrons more or less than expected, you will still get the correct classification as a digital 0 or a 1.

In a quantum computer, this trick is not possible due to the no-cloning theorem: You cannot trivially employ more than one particle to encode a qubit (quantum bit). Instead, you must make all your gates operate so well that they are not just accurate to the single particle level but even to a tiny fraction of how much they act on a single particle (to the so-called quantum error correction threshold). This is much more challenging than to get gates accurate merely to within hundreds of millions of electrons.

Meanwhile we do have the tools to, just barely, make quantum computers with the required level of accuracy. But nobody has, as of yet, managed to make a big one, meaning one that can accurately operate on the perhaps hundreds of thousands of physical qubits needed to implement a hundred or so logical qubits, to then be undeniably in the realm where the quantum computer beats classical computers at select problems (quantum supremacy).

Well... there is D-Wave.
The 2000Q system has 2000 qubits and is definitely outperforming classical systems on algorithms with efficient quantum implementations. They've been growing capability pretty rapidly - I'd expect a next-gen 4000 qubit system from them within 12 months. J...

1

Are replicated circuits still cloning? What stops you from having parallel circuits with copied inputs? Can't you use voting to increase the robustness of such systems? whn

2

@snb It doesn't scale. The problem is that as you go "deeper" with the gates, you need more and more replicated circuits to get the same accuracy. But do keep in mind that calculations on quantum computers nowadays are usually run many times over anyway. Overall, there's a reason why we're so interested in problems that are hard to solve, but easy to verify - you can use a quantum computer to give the problem a try, and verify the result with a classical computer. Keep repeating until they agree :) Luaan

11

There are many reasons, both in theory and in implementation, that make quantum computers much harder to build.

The simplest might be this: while it is easy to build machines that exhibit classical behaviour, demonstrations of quantum behaviour require really cold and really precisely controlled machines. The thermodynamic conditions of the quantum regime are just hard to access. When we finally do achieve a quantum system, it's hard to keep it isolated from the environment, which seeks to decohere it and make it classical again.

Scalability is a big issue. The bigger our computer, the harder it is to keep quantum. The phenomena that promise to make quantum computers really powerful, like entanglement, require that the qubits can interact with each other in a controlled way. Architectures that allow this control are hard to engineer, and hard to scale. Nobody's agreed on a design!

As @pyramids points out, the strategies we use to correct errors in classical machines usually involve cloning information, which is forbidden by quantum information theory. While we have some strategies to mitigate errors in clever quantum ways, they require that our qubits are already pretty noise-free and that we have lots of them. If we can't improve our engineering past some threshold, we can't employ these strategies - they make things worse!

Also notable: the reason we use digital systems is that small variations in inputs and outputs of individual elements usually don't propagate, so you can keep adding more "layers" of computation without significantly decreasing the reliability. This kind of isolation seems to be impossible for quantum computers, at least for now - and no-cloning simply adds more salt to the wound :) Luaan

3

Simpler answer: All quantum computers are classical computers too, if you limit their gate set to only classical gates such as $X$, which is the NOT gate. Every time you build a quantum computer, you're also building a classical computer, so you can prove mathematically that building a quantum computer must be at least as hard as building a classical computer.

2

One important point is that quantum computers contain classical computers. So it must be at least as hard to build a quantum computer as it is a classical computer.

For a concrete illustration, it's worth thinking about universal gate sets. In classical computation, you can create any circuit you want via the combination of just a single type of gate.
Often people talk about the NAND gate, but for the sake of this argument, it's easier to talk about the Toffoli gate (also known as the controlled-controlled-not gate). Every classical (reversible) circuit can be written in terms of a whole bunch of Toffolis. An arbitrary quantum computation can be written as a combination of two different types of gate: the Toffoli and the Hadamard.

This has immediate consequences. Obviously, if you're asking for two different things, one of which does not exist in classical physics, that must be harder than just making the one thing that does exist in classical physics. Moreover, making use of the Hadamard means that the sets of possible states you have to consider are no longer orthogonal, so you cannot simply look at the state and determine how to proceed. This is particularly relevant to the Toffoli, because it becomes harder to implement as a result: before, you could safely measure the different inputs and, dependent upon their values, do something to the output. But if the inputs are not orthogonal (or even if they are, but in an unknown basis!) you cannot risk measuring them because you will destroy the states; specifically, you destroy the superpositions that are the whole thing that's making quantum computation different from classical computation.

"Because quantum computers contain classical computers" is a questionable argument. It's a bit like saying that due to Turing completeness it's at least as difficult to build a Zuse-style mechanical calculator as it is to build a modern high-performance cluster. That's clearly not true.

@leftaroundabout that's not what I'm saying at all. There you're comparing two different implementations of computers that implement P-complete problems. I'm comparing the generic thing that implements BQP-complete computations to the generic thing that implements P-complete computations. Even if you find the absolute best architecture for implementing quantum computation, that provides a way of implementing classical, which must be the same or worse than the best way. What I'm really saying is that P is contained within BQP, but we believe that there's much more in BQP. DaftWullie

2

In 1996, David DiVincenzo listed five key criteria to build a quantum computer:

1. A quantum computer must be scalable,
2. It must be possible to initialise the qubits,
3. Good qubits are needed, the quantum state cannot be lost,
4. We need to have a universal set of quantum gates,
5. We need to be able to measure all qubits.

Two further criteria apply if quantum information also has to be communicated:

1. The ability to interconvert stationary and flying qubits,
2. The ability to transmit flying qubits between distant locations.

Long Explanation

0

I have to disagree with the idea that the no-cloning theorem makes error correction with repetition codes difficult. Given that your inputs are provided in the computational basis (i.e. your inputs are not arbitrary superpositions, which is almost always the case, especially when you're solving a classical problem, e.g. Shor's algorithm), you can clone them with controlled-NOT gates, run your computation in parallel on all the copies, and then correct errors. The only trick is to make sure you don't do a measurement during error correction (except possibly of the syndrome), and to do this all you have to do is continue to use quantum gates. Error correction for quantum computers is not much more difficult than for classical computers. Linearity takes care of most of the perceived difficulties.
I'd also like to mention that there are much more efficient schemes for quantum error correction than repetition codes. And you need two Pauli matrices to generate the rest, so you need two types of repetition code if you're going to go the inefficient, but conceptually simple, repetition-code route (one for bit flips and one for phase flips).

Quantum error correction shows that a linear increase in the number of physical qubits per logical qubit improves the error rate exponentially, just as it does in the classical case. Still, we're nowhere near 100 physical qubits. This is the real problem. We need to be able to glue a lot more semi-accurate qubits together before any of this starts to matter.

5

I think you are forgetting that, for any sizable computation, it is insufficient to just do error correction by repeating the calculation as you suggest: the fidelity after $N$ gates scales as $F^N$ if $F$ is the single-gate fidelity. This becomes exponentially small if you only use this scheme. But during the computation, in general, you cannot use the repetition code you suggest. pyramids

Can't you replace every gate $G$ with the gate $\text{decode} - G - \text{encode}$ for at worst a constant increase in circuit depth, even if you can't compile this expression down in your gate set? Reid Hayes

0

# Ultimate Black Box

A quantum computer is by definition the ultimate black box. You feed in an input and you get a process, which produces an output. Any attempt to open up the black box will result in the process not happening. Any engineer would tell you that would hinder any design process. Even the smallest design flaw would take months of trial and error to trace down.
# Two cross-platform implementations of getline in C

I created my own cross-platform implementations of the getline function in C. It takes different arguments and has different return values than the 'original' getline function, but the aim is the same. The only argument, input_file, is the file from which the line has to be read. The return value is the line read from the file, or NULL if nothing was read.

Here is one implementation, using fgets():

```c
static inline char* getline(FILE* input_file){
    const unsigned int chunk_size=256;
    char* line=malloc(chunk_size*sizeof*line+1);
    if(line==NULL){
        fprintf(stderr,"Fatal: failed to allocate %zu bytes.\n",chunk_size*sizeof*line+1);
        exit(1);
    }
    unsigned int i;
    for(i=0;;++i){
        memset(line+chunk_size*i,0,chunk_size);
        if(fgets(line+chunk_size*i,chunk_size+1,input_file)==NULL)
            break;
        if(line[strlen(line)-1]=='\n')
            break;
        char* tmp=realloc(line,chunk_size*(i+2)*sizeof*line+1);
        if(tmp==NULL){
            fprintf(stderr,"Fatal: failed to allocate %zu bytes.\n",chunk_size*(i+2)*sizeof*line+1);
            exit(1);
        }else
            line=tmp;
    }
    if(strlen(line)==0){
        free(line);
        return NULL;
    }else{
        line[strlen(line)-1]=0;
        return line;
    }
}
```

Here is my second implementation, using fgetc():

```c
static inline char* getline(FILE* input_file){
    const unsigned int chunk_size=3;
    char* line=calloc(chunk_size,sizeof*line+1);
    if(line==NULL){
        fprintf(stderr,"Fatal: failed to allocate %zu bytes.\n",chunk_size*sizeof*line+1);
        exit(1);
    }
    char c;
    unsigned int i,j;
    for(i=0,j=1;;++i){
        c=(char)fgetc(input_file);
        if(c==EOF||c=='\n')
            break;
        line[i]=c;
        if(i==chunk_size*j){
            ++j;
            char* tmp=realloc(line,chunk_size*(j+1)*sizeof*line+1);
            if(tmp==NULL){
                fprintf(stderr,"Fatal: failed to allocate %zu bytes.\n",chunk_size*(j+1)*sizeof*line+1);
                exit(1);
            }else{
                line=tmp;
                memset(line+chunk_size*j,0,chunk_size);
            }
        }
    }
    if(strlen(line)==0){
        free(line);
        return NULL;
    }else{
        line[strlen(line)]=0;
        return line;
    }
}
```

• exit(1) is probably a bad idea, and so is fprintf to stderr. We don't know how big the string can be, so running out of memory could be a valid outcome, and stderr may be redirected to somewhere a client can see. realloc should already set errno to ENOMEM, so you could probably just return NULL and note in a comment that upon returning NULL the caller should check errno. Or maybe you should return an empty string if there's no data, and use NULL to indicate an error.
• Change calloc to malloc, as you NULL-terminate the string in your fgetc implementation anyway.
• Instead of having the expression chunk_size*(j+1)*sizeof*line+1 duplicated in the error message (which I vote against) and in the actual calculation, you could create a variable and use it in both places, so you know you print exactly what you did, and there isn't a mistake if you have to change the calculation slightly.
• Try this on Windows (as it claims to be a cross-platform implementation): from memory, fgetc will return '\r', which you'll put right into the line, whereas I'm pretty sure fgets won't return '\r' when the line terminator is "\r\n"; I believe fgets returns "\n", not "\r\n", on Windows, even if "\r\n" is in the input stream.
• You don't need to assign the result to tmp like this: char* tmp=realloc(line,chunk_size*(i+2)*sizeof*line+1); as upon success you always do line=tmp, and upon failure line will no longer point to valid memory. So you could just assign to line.
• Notice how you're calling malloc/calloc/realloc; I would add a freeline function to your code in order to free any memory allocated by this code. The caller may not be using the same malloc/calloc/realloc you're using, and their free may not be compatible.
• Finally, I'm not really sure about this: chunk_size*(j+1)*sizeof*line+1. I think you want chunk_size*(j+1)*sizeof(char)+1, as I think line is of size 4, or maybe even 8? Print it out in the debugger and see; I think you're allocating a lot more memory than you end up putting into the array.

• Welcome to Code Review! Your answer has a lot of claims in it ("I'm pretty sure ...", "I think ...") and it would be nice to back these up with some references to documentation resources. – AlexV Apr 24 at 13:43
• I'm pretty sure that although many realloc() implementations set errno as you claim (and POSIX mandates it), that's not in the C Standard, so a conforming implementation may exist which doesn't do that. – Toby Speight Apr 24 at 14:46
• Ok, if the errno is not set for some reason, I would recommend setting it explicitly from the method. This might be a good idea in either case, as it makes the intention explicit vs. relying on underlying code. Also, if there is still going to be an fprintf to report an error, it may change the errno to something else, so setting it right before returning would be good. – Alex Apr 24 at 15:05
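To make the discussion concrete, here is a minimal sketch (mine, not from the thread) of an fgets-based variant that folds in several of the points above: it reports failure by returning NULL instead of calling exit, grows the buffer geometrically, and strips a trailing '\r' so Windows-style line endings are tolerated. The name read_line and the initial capacity are arbitrary choices, not a definitive fix.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Read one line from fp, growing the buffer as needed.
 * Returns a malloc'd string without the trailing newline,
 * or NULL on EOF-before-any-data or on allocation failure
 * (callers can distinguish the two via feof/ferror if needed). */
static char* read_line(FILE* fp) {
    size_t capacity = 256;   /* initial buffer size, grows geometrically */
    size_t length = 0;       /* number of characters stored so far       */
    char* line = malloc(capacity);
    if (line == NULL)
        return NULL;

    while (fgets(line + length, (int)(capacity - length), fp) != NULL) {
        length += strlen(line + length);
        if (length > 0 && line[length - 1] == '\n') {
            line[--length] = '\0';                 /* drop '\n'            */
            if (length > 0 && line[length - 1] == '\r')
                line[--length] = '\0';             /* drop '\r' of "\r\n"  */
            return line;
        }
        /* No newline yet: the line is longer than the buffer, so grow it. */
        char* tmp = realloc(line, capacity * 2);
        if (tmp == NULL) {
            free(line);
            return NULL;
        }
        line = tmp;
        capacity *= 2;
    }

    if (length == 0) {        /* nothing read at all (EOF or read error)  */
        free(line);
        return NULL;
    }
    return line;              /* last line of a file with no final newline */
}
```

A matching free_line() wrapper around free(), as suggested in the review, would keep allocation and deallocation inside the same module.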
# Your firm uses a periodic review system for all SKUs classified, using ABC analysis, as B or C it...

###### Question:

please this is my last chance

Your firm uses a periodic review system for all SKUs classified, using ABC analysis, as B or C items. Further, it uses a continuous review system for all SKUs classified as A items. The demand for a specific SKU, currently classified as an A item, has been dropping. You have been asked to evaluate the impact of moving the item from continuous review to periodic review. Assume your firm operates 52 weeks per year; the item's current characteristics are:

Demand (D): 14,040 units/year
Ordering cost (S): $135.00/order
Holding cost (H): $2.50/unit/year
Lead time (L): 6 weeks
Cycle-service level: 98%

Demand is normally distributed, with a standard deviation of weekly demand of 78 units.

a. Calculate the item's EOQ. EOQ = ___ units. (Enter your response rounded to the nearest whole number.)

b. Use the EOQ to define the parameters of an appropriate continuous review and periodic review system for this item. Refer to the standard normal table when necessary. Under a continuous review system, order ___ units whenever the inventory level drops to ___ units. Under a periodic review system, order up to ___ units every ___ weeks. (Enter your responses rounded to the nearest whole number.)

c. Which system requires more safety stock and by how much? Select the correct choice below and fill in the answer box to complete your choice. The ___ review system requires ___ more units of safety stock than the ___ review system. (Enter your response rounded to the nearest whole number.)

(A table of standard normal cumulative probabilities for z = 0.00 to about 3.2 accompanied the question; its values are omitted here.)
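For orientation, the sketch below evaluates the standard textbook formulas this exercise is built around: EOQ = sqrt(2DS/H); for continuous review, reorder point R = d̄L + zσ√L; for periodic review, interval P ≈ (EOQ/D)·52 and order-up-to level T = d̄(P + L) + zσ√(P + L). The z-value of 2.05 for a 98% cycle-service level and the lack of rounding of P are my assumptions, so treat the printed numbers as estimates rather than the graded answers.

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    /* Given data from the exercise */
    const double D = 14040.0;   /* annual demand, units/year            */
    const double S = 135.0;     /* ordering cost, $/order               */
    const double H = 2.50;      /* holding cost, $/unit/year            */
    const double L = 6.0;       /* lead time, weeks                     */
    const double sigma = 78.0;  /* std. dev. of weekly demand           */
    const double weeks = 52.0;
    const double z = 2.05;      /* approx. z for a 98% cycle-service level
                                   (read from the normal table; assumption) */

    double d_bar = D / weeks;                 /* average weekly demand   */
    double eoq   = sqrt(2.0 * D * S / H);     /* a. economic order qty   */

    /* b. continuous review: order EOQ when inventory hits R */
    double ss_Q = z * sigma * sqrt(L);
    double R    = d_bar * L + ss_Q;

    /* b. periodic review: review every P weeks, order up to T */
    double P    = eoq / D * weeks;
    double ss_P = z * sigma * sqrt(P + L);
    double T    = d_bar * (P + L) + ss_P;

    printf("EOQ  = %.0f units\n", eoq);
    printf("Continuous review: order %.0f when inventory drops to %.0f\n", eoq, R);
    printf("Periodic review:   every %.1f weeks, order up to %.0f\n", P, T);
    printf("c. extra safety stock under periodic review: %.0f units\n", ss_P - ss_Q);
    return 0;
}
```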
# Multiple approaches/ways to prove that $1000^N - 1$ cannot be a divisor of $1978^N - 1$

I am interested in learning to do multiple proofs for the same problem, and hence I chose this problem:

Prove that for any natural number $N$, $1000^N - 1$ cannot be a divisor of $1978^N - 1$.

I'd like to learn how to prove such a statement in more than one way (approach).

Welcome, user48390! I take it you're interested in learning how to prove a conjecture using different proof approaches/methods? Do you know of any proofs of your statement, so we don't duplicate what you already might know? – amWhy Nov 6 '12 at 15:19

What does it mean to be a divisor "in more than one way"? – EuYu Nov 6 '12 at 15:34

@EuYu, I think the OP means using more than one approach to prove it. I'll edit, to clarify. user48390, correct me if I am wrong. – amWhy Nov 6 '12 at 15:40

Hint $\ $ Examining their factorizations for small $\rm N$ shows that the power of $3$ dividing the former exceeds that of the latter (by $2$), so the former cannot divide the latter. It suffices to prove by induction that this pattern persists (which requires only simple number theory).
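To make the hint's 3-adic count explicit (my own elaboration, using the lifting-the-exponent lemma, which applies because 3 divides both 999 and 1977 while dividing neither 1000 nor 1978):

```latex
v_3\!\left(1000^N - 1\right) = v_3(1000 - 1) + v_3(N) = 3 + v_3(N),
\qquad
v_3\!\left(1978^N - 1\right) = v_3(1978 - 1) + v_3(N) = 1 + v_3(N).
```

Since $999 = 27 \cdot 37$ and $1977 = 3 \cdot 659$, the exponent of $3$ in $1000^N - 1$ always exceeds the exponent of $3$ in $1978^N - 1$ by exactly $2$, so the first number can never divide the second.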
# zbMATH — the first resource for mathematics

The drop theorem, the petal theorem and Ekeland's variational principle. (English) Zbl 0612.49011

The following three statements are considered:

(A) (altered Ekeland's variational principle). Let $f: M\to {\mathbb{R}}\cup \{+\infty\}$ be an l.s.c. function on a complete metric space $(M,d)$. Suppose $f$ is bounded below and not everywhere $+\infty$. Then for any $\gamma >0$ and any $x_0\in M$ there exists $a\in M$ such that $f(a)<f(x)+\gamma d(a,x)$ for all $x\in M$, $x\neq a$, and $f(a)\leq f(x_0)-\gamma d(a,x_0)$.

(F) (the flower petal theorem). Let $X$ be a complete subset of a metric space $(E,d)$. Let $x_0\in X$ and let $b\in E\setminus X$, $r\leq d(b,X)$, $s=d(b,x_0)$. Take any $\gamma >0$ and denote $P_{\gamma}(a,b)=\{x\in E:\gamma d(x,a)+d(x,b)\leq d(a,b)\}$. Then there exists $a\in X\cap P_{\gamma}(x_0,b)$ such that $P_{\gamma}(a,b)\cap X=\{a\}$.

(D) (the drop theorem). Let $C$ be a complete subset of some normed vector space $E$, let $x_0\in C$ and let $B$ be a closed ball with centre $b$ and radius $r<d(b,C)$. Denote $D(a,B)=\{a+t(b-a):b\in B,\ t\in [0,1]\}$. Then there exists $a\in C\cap D(x_0,B)$ with $D(a,B)\cap C=\{a\}$.

The author gives a short proof of (A) and then proves the implications (A)$\Rightarrow$(F)$\Rightarrow$(D)$\Rightarrow$(A). Some other geometrical properties of Banach spaces are proved as corollaries of the above-mentioned results.

Reviewer: M. Studniarski

##### MSC:
49J52 Nonsmooth analysis
46B20 Geometry and structure of normed linear spaces
47H10 Fixed-point theorems
49J27 Existence theories for problems in abstract spaces
# Elementary proof that $*$-homomorphisms between C*-algebras are norm-decreasing

A lecturer once gave a very elementary proof that $*$-homomorphisms between C*-algebras are always norm-decreasing. It is well known that this holds for a $*$-homomorphism between a Banach algebra and a C*-algebra, but all the proofs I find involve the spectral radius and so on. If I remember it well, the proof he gave used the C*-algebra structure in the domain, and (as always) had something to do with a geometric series. Does anyone know how to do so?

Let $f\colon A\to B$ be a *-homomorphism. Let us note that $f$ cannot enlarge spectra of self-adjoint elements in $A$, that is, for every self-adjoint $y\in A$ we have $\mbox{sp}(f(y))\setminus \{0\} \subseteq \mbox{sp}(y)\setminus \{0\}$. By the spectral radius formula, we have $\|y\|=r(y)$. Now, let $x\in A$. It follows that $\|f(x)\|^2 = \|f(x^*x)\| = r(f(x^*x))\leqslant r(x^*x)=\|x\|^2$. $\square$

Another strategy (involving geometric series) is to notice that $f$ is positive and $\|f\|=\|f(1)\|=1$ (in the unital case). In this case, you can tweak the proof given by julien in Why is every positive linear map between $C^*$-algebras bounded?
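To spell out the spectrum-inclusion step used in the first argument (my own expansion, written for the case $f(1_A) = 1_B$; this is also where a geometric, i.e. Neumann, series enters):

```latex
\text{For self-adjoint } y \in A \text{ and } |\lambda| > \|y\| :\qquad
(\lambda - y)^{-1} = \sum_{k \ge 0} \lambda^{-k-1} y^{k} \in A
\;\Longrightarrow\;
\big(\lambda - f(y)\big)^{-1} = f\big((\lambda - y)^{-1}\big) \in B .
```

Hence $\mathrm{sp}(f(y)) \subseteq [-\|y\|, \|y\|]$, and since $f(y)$ is self-adjoint in the C*-algebra $B$ we get $\|f(y)\| = r(f(y)) \le \|y\|$; applying this with $y = x^*x$ gives the inequality above. The non-unital case can be reduced to this one by adjoining units, a standard if slightly fussy step.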
# [texhax] Suppressing new page before \begin{thebibliography}

Simmie, John john.simmie at nuigalway.ie
Thu Feb 2 11:30:28 CET 2017

I am using \documentclass[12pt]{book} and \usepackage{chapterbib} to have a bibliography at the end of each chapter. But I would like not to trigger a new page before each bibliography is output.
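The post ends without a reply in this excerpt. One direction worth testing (my suggestion, not from the list archive) is chapterbib's sectionbib option, which typesets each per-chapter bibliography as an unnumbered section rather than a chapter, so the page break that \chapter normally issues never happens:

```latex
\documentclass[12pt]{book}
% sectionbib asks chapterbib to typeset each chapter bibliography as a
% \section* instead of a \chapter*, avoiding the forced page break.
\usepackage[sectionbib]{chapterbib}
```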
Publication

Title: Differential graded categories and Deligne conjecture

Abstract: We prove a version of the Deligne conjecture for n-fold monoidal abelian categories A over a field k of characteristic 0, assuming some compatibility and non-degeneracy conditions for A. The output of our construction is a weak Leinster (n,1)-algebra over k, a relaxed version of the concept of a Leinster n-algebra in Alg(k). The difference between Leinster's original definition and our relaxed one is apparent when n>1; for n=1 both concepts coincide. We believe that there exists a functor from weak Leinster (n,1)-algebras over k to C(E_{n+1},k)-algebras, well-defined when k=Q, and preserving weak equivalences. For the case n=1 such a functor is constructed in [31] by elementary simplicial methods, providing (together with this paper) a complete solution for 1-monoidal abelian categories. Our approach to the Deligne conjecture is divided into two parts. The first part, completed in the present paper, provides a construction of a weak Leinster (n,1)-algebra over k out of an n-fold monoidal k-linear abelian category (provided the compatibility and non-degeneracy conditions are fulfilled). The second part (still open for n>1) is a passage from weak Leinster (n,1)-algebras to C(E_{n+1},k)-algebras. As an application, we prove in Theorem 8.1 that the Gerstenhaber-Schack complex of a Hopf algebra over a field k of characteristic 0 admits a structure of a weak Leinster (2,1)-algebra over k extending the Yoneda structure. It relies on our earlier construction [30] of a 2-fold monoidal structure on the abelian category of tetramodules over a bialgebra.

Language: English
Source (journal): Advances in mathematics. - New York, N.Y.
Publication: New York, N.Y., 2016
ISSN: 0001-8708
Volume/pages: 289 (2016), p. 797-843
ISI: 000369123100021
# Math Help - parity of numbers

1. ## parity of numbers

Hey!
Odd * Odd numbers are always odd, correct??
Can you check the following numbers? 7^19, 9^17, 9^18, 9^19 ?
I have a program that says they are all divisible by 2. Isn't that peculiar?
andrec

2. ## Re: parity of numbers

What? I don't know if you are trying to troll.

3. ## Re: parity of numbers

Can you post the program and the name of the programming language?

4. ## Re: parity of numbers

```java
public class logic {
    /**
     * @param args
     */
    public static void main(String[] args) {
        // TODO Auto-generated method stub
        double eSquare = 0;
        int evenPlus1 = 0;
        for (int even = 0; even < 10; even += 2) {
            for (int n = 0; n < 20; n += 1) {
                evenPlus1 = even + 1;
                eSquare = Math.pow(evenPlus1, n); // (even+1)^n
                if ((eSquare % 2) == 0)
                    System.out.println(evenPlus1 + "^" + n + " " + eSquare + " is divisible by 2");
            }
        }
    }
}
```

its java.
later, andrec

5. ## Re: parity of numbers

since 2 is a prime and $7^{19} = 7(7^{18})$, either 2 divides 7 (clearly false) or 2 divides $7^{18}$. rinse and repeat.

6. ## Re: parity of numbers

sorry didnt understodd.. 9=7^18??!?!?
later, andrec

7. ## Re: parity of numbers

This effect is probably due to overflow. Double numbers are represented as $m\cdot2^e$ where m is the mantissa and e is the exponent. Positive integers can only be represented precisely if they fit in the mantissa. According to this document, type double allots 53 bits to the mantissa. Interestingly, $\log_2\left(7^{19}\right)=19\log_27\approx53.3$, so $7^{19}$ requires 54 bits to be represented precisely. Therefore, as a double, it will be represented as $m\cdot 2^1$ for some m, which is an even number.

8. ## Re: parity of numbers

yep, i just tested it on windows calculator and it works there, thanks.. surprisingly the C code similar to the java code has the same error
andrec

9. ## Re: parity of numbers

Never ever trust a calculator over your own mind!
-Dan

10. ## Re: parity of numbers

Originally Posted by andrec
surprisingly the C code similar to the java code has the same error

This must be because both C and Java implement the IEEE 754 standard for binary floating point numbers. This tutorial says, "This data type [float] should never be used for precise values, such as currency. For that, you will need to use the java.math.BigDecimal class instead." Also, the Java long type has 64 bits, so it can precisely represent positive integers up to 2^63 - 1, which is more than double can.

11. ## Re: parity of numbers

Originally Posted by topsquark
Never ever trust a calculator over your own mind!

That is the whole of this opinion piece. It may be fifteen years old, but it is still true today.

12. ## Re: parity of numbers

cheers, thanks to all (bigdecimal works fine)
later, andrec

13. ## Re: parity of numbers

Hi Andrec,
If you are going to use Java for big integers, use BigInteger, not BigDecimal. Here's code for computing 7^19 mod 2:

```java
BigInteger seven = BigInteger.valueOf(7);
BigInteger nineteen = BigInteger.valueOf(19);
BigInteger two = BigInteger.valueOf(2);
BigInteger odd = seven.modPow(nineteen, two);
System.out.println(odd.toString());
```

As expected, the output is 1. A more interesting example is computing the last three digits of 151^192:

```java
BigInteger onefiftyone = BigInteger.valueOf(151);
BigInteger oneninetytwo = BigInteger.valueOf(192);
BigInteger thousand = BigInteger.valueOf(1000);
BigInteger digits = onefiftyone.modPow(oneninetytwo, thousand);
System.out.println(digits.toString());
```

Output is 801. You can do this example by hand, but I wouldn't want to.

14. ## Re: parity of numbers

thks!
i started using BigInteger....
later, andrec

15. ## Re: parity of numbers

$(2n+1)(2n+1) = 4n^2 + 4n + 1 = 2(2n^2 + 2n) + 1$ with $n = 1, 2, \dots$, which is odd.
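The same pitfall exists in C, as post 8 found. A small sketch (mine, not from the thread) that avoids floating point entirely: 7^19 is only about 1.14·10^16, so it fits comfortably in an unsigned 64-bit integer, and for powers that would not fit one can reduce modulo 2 at every step.

```c
#include <stdio.h>

int main(void) {
    /* 7^19 is about 1.1e16, well inside the unsigned 64-bit range,
     * so exact integer arithmetic is enough here. */
    unsigned long long x = 1;
    for (int i = 0; i < 19; ++i)
        x *= 7ULL;
    printf("7^19 = %llu, which is %s\n", x, (x % 2ULL) ? "odd" : "even");

    /* For exponents too large to fit, reduce mod 2 at every step. */
    unsigned long long r = 1;
    for (int i = 0; i < 1000; ++i)
        r = (r * 7ULL) % 2ULL;
    printf("7^1000 mod 2 = %llu\n", r);
    return 0;
}
```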
# Genotoxicity and glucose tolerance induction by acetyltriethylcitrate, substitute plasticizer compared to di(2-ethylhexyl)phthalate

## Abstract

Because di(2-ethylhexyl) phthalate (DEHP), one of the phthalates, is classified as a probable human carcinogen by the EPA, acetyltriethyl citrate (ATEC), an aliphatic ester, could be applied as a DEHP substitute. ATEC is used as a plasticizer in cosmetics and nail products. Here, we studied whether ATEC might have genotoxic potential and induce glucose tolerance as compared to DEHP. Genotoxicity was determined by the Ames test with histidine-requiring Salmonella typhimurium (TA98, TA100, TA1535 and TA1537) and tryptophan-requiring Escherichia coli (WP2uvrA(pKM101)) strains, a chromosomal aberration assay with Chinese hamster lung (CHL/IU) cells, and a micronucleus test with bone marrow cells of CD-1 mice. The number of revertants was not significantly changed in the Ames test. The frequency of cells with chromosome aberrations was less than 5% in ATEC- or DEHP-treated cells after 6 or 24 h. In addition, no statistically significant increase was observed in the incidence of micronucleated polychromatic erythrocytes (MNPCE) among polychromatic erythrocytes (PCE) or in the ratio of PCE among total erythrocytes at 24 or 48 h after the treatment of mice with ATEC or DEHP. Meanwhile, the blood glucose level (BGL) was increased by treatment of mice with DEHP or ATEC for 5 consecutive days. An additional 7 days later, the BGL in DEHP-treated mice had recovered to the normal level, but not that in ATEC-treated mice. Taken together, our results suggest that ATEC could disrupt glucose metabolism under our experimental conditions. Therefore, although DEHP and ATEC may not be genotoxic, our data should help people with impaired glucose metabolism in choosing products containing DEHP or ATEC.

## Introduction

Phthalates are well-known endocrine-disrupting chemicals (EDCs) that are widely used as effective synthetic plasticizers. DEHP is the most abundant phthalate in a variety of consumer products1,2. Phthalates are used as solvents in many applications and in cosmetics2,3,4. However, phthalates have been identified as reproductive and developmental toxicants1,2. Di-(2-ethylhexyl) phthalate (DEHP) is the most commonly used phthalate in the production of many different products. DEHP is included in medical devices, food wrap, building materials, children's toys, childcare articles made of polyvinyl chloride (PVC) and cosmetics. DEHP plays a role in holding fragrance, reducing cracking of nail polish, and making products more effectively penetrate and moisturize the skin1,2,4. DEHP can migrate into the environment during its production and use and after disposal5. DEHP is a xenoestrogen with toxic effects, including reproductive, developmental, and carcinogenic toxicity, on both animal and human health4,6,7,8,9. Such xenoestrogens are referred to as endocrine disruptors (EDs). DEHP may induce hepatotoxicity10,11 and also enhance tumorigenesis in the liver or in 1,2-dimethylhydrazine (DMH)-treated colon12. Accordingly, the US EPA classifies DEHP as a probable human carcinogen13. Meanwhile, DEHP exposure induces glucose metabolic disorders14,15 and impairs insulin receptor and glucose transporter 4 gene expression16. In light of these reports, many efforts have been made to overcome the weaknesses of DEHP by developing safer substitute plasticizers. Triesters of citric acid are considered to be very safe and biocompatible substitutes.
Among them, acetyltriethyl citrate (ATEC) functions as a plasticizer in cosmetics, mostly in nail products at concentrations up to 7%17,18. The results of toxicity studies with ATEC are as follows17,18. The LD50 for ATEC is about 7 ml/kg by oral gavage or 1,150 ± 185 mg/kg by intraperitoneal (IP) administration. No acute oral toxic effects were observed on neuromuscular transmission, body weight, hematological counts, or electrocardiograms. The acute oral toxicity of ATEC consisted of a progressive decrease in blood pressure and heart rate. Intravenous administration of ATEC to cats and rabbits also caused a dose-related loss of blood pressure. No short-term hematological toxicity was caused by IP injection of ATEC (230 mg/kg/day) for 14 days. ATEC did not induce skin irritation when inuncted (1 ml/kg body weight) onto intact abdominal rabbit skin daily for 4 days or for 6 days per week for 3 weeks. Minor to moderate changes were caused by ATEC in the eyes for 24 h after its instillation, and these had cleared by 48 h post-instillation. ATEC in 3% acacia resulted in complete, reversible inhibition of the sciatic nerve and temporarily abolished the corneal reflex in the rabbit eye. ATEC also strongly sensitized guinea-pigs. ATEC induced a low level of cytotoxicity in human HeLa cervical cancer cells. However, no significant changes in lymphoma induction were observed in rats fed ATEC over 2 years19. ATEC gave negative results in the Ames test using Salmonella (S.) typhimurium strains incubated without metabolic activation. In addition, it was non-mutagenic in the assay using L5178Y mouse lymphoma cells in the presence or absence of metabolic activation. No chromosome breakage by ATEC was observed in an in vivo cytogenetic assay with cellular suspensions prepared from CD-1 mice. No statistically significant chromosome breakage by ATEC was indicated in an in vitro cytogenetic assay performed with cultured human lymphocytes. Based on all these available data, ATEC was considered safe for use in cosmetics, without genotoxicity in bacterial or mammalian test systems19,20. However, little information has been reported about the genotoxicity of ATEC, including in vivo micronucleus formation, as compared to DEHP. Furthermore, genotoxicity that is induced by exposure to toxicants such as cigarette smoke, or by polyphenols from Quercus sideroxyla bark, is associated with metabolic disorders21,22. However, no data have been reported on metabolic alterations such as glucose intolerance induced by ATEC. Therefore, although ATEC did not induce tumor formation or genotoxicity in the bacterial and cellular systems described above, it remains to be clarified whether ATEC could serve as a substitute plasticizer for DEHP with respect to metabolic changes. In this study, we thus investigated whether ATEC could induce micronuclei in polychromatic erythrocytes (PCE), as well as bacterial revertants and chromosomal aberrations (CA). We used the mutant strains TA98, TA100, TA1535, and TA1537 of S. typhimurium and WP2uvrA (pKM101) of Escherichia coli, Chinese hamster lung (CHL/IU) cells and CD-1 mice to assess in vitro and in vivo genotoxicity. We also investigated glucose tolerance induction by ATEC or DEHP using C57BL/6 mice to measure the blood glucose level (BGL).

## Materials and Methods

### Mice and reagents

Specific pathogen-free (SPF) seven-week-old male CD-1 or C57BL/6 mice were obtained from ORIENTBIO INC. (Sungnam, Republic of Korea) or DAEHAN Biolink (Cheongjoo, Republic of Korea), respectively.
All animals were acclimated for 7 days and observed daily for general health. Five mice were housed per transparent acrylic cage and maintained in a pathogen-free authorized facility at WOOJUNG BIO CROWISE or Sejong University, where the temperature was 20–22 °C, the humidity 50–60%, and the dark/light cycle 12 h. All experiments using mice were carried out in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the Animal and Plant Quarantine Agency, Republic of Korea. The protocol was approved by the Institutional Animal Care and Use Committee of WOOJUNG BIO CROWISE (Permit Number: G31701 for ATEC and G31702 for DEHP) or Sejong University (Permit Number: SJ20160702). All efforts were made to minimize animal suffering. DEHP, dimethyl sulfoxide (DMSO), sodium azide (SA), 2-nitrofluorene (2-NF), 2-aminoanthracene (2-AA), 9-aminoacridine (9-AA), D-glucose, NaCl, histidine, D-biotin, L-tryptophan, Giemsa solution and sodium carboxymethylcellulose (CMC) were purchased from Sigma-Aldrich (St. Louis, MO, USA). 2-(2-Furyl)-3-(5-nitro-2-furyl)acrylamide (AF2) was purchased from FUJIFILM Wako Chemicals (Osaka, Japan). Acetyltriethylcitrate (ATEC) was purchased from Santa Cruz Biotechnology Inc. (Dallas, TX, USA). Aroclor 1254, mitomycin C (MMC), cyclophosphamide (CP) and colcemid were obtained from Invitrogen (Carlsbad, CA, USA). Nutrient broth No. 2 was purchased from Oxoid Ltd (Hampshire, UK) and Bacto agar was obtained from BD Bioscience (San Jose, CA, USA). Except where indicated, all other materials were obtained from Sigma-Aldrich (St. Louis, MO, USA).

### Cell cultures

The Chinese hamster lung (CHL/IU) cell line (ATCC CRL-1935) was obtained from the American Type Culture Collection (ATCC, U.S.A.). Cells were maintained in Eagle's Minimum Essential Medium (EMEM, Lonza Walkersville Inc., U.S.A.) supplemented with 10% heat-inactivated fetal bovine serum (FBS, Invitrogen, U.S.A.), 100 units/ml penicillin and 100 μg/ml streptomycin (Invitrogen, U.S.A.)4. Cells were grown to 70–80% confluence at 37 °C with 5% CO2 prior to subculture. Mycoplasma contamination was regularly evaluated with a Hoechst Stain Kit (MP Biomedicals, Japan). Cells within 13 passages were routinely used to detect chromosomal aberrations.

### Bacterial culture

Histidine-requiring Salmonella typhimurium (TA98, TA100, TA1535 and TA1537) and tryptophan-requiring Escherichia coli (WP2uvrA(pKM101)) strains were purchased from Molecular Toxicology, Inc. (MOLTOXTM, Boone, NC, USA). Each strain was inoculated into 2.5% nutrient broth No. 2 medium and incubated at 90 rpm and 37 °C for 9 h in a shaking incubator. Cultures with a density greater than 1 × 10^9 cells/ml were used in the Ames test.

### Preparation of the test substance

ATEC and DEHP were dissolved in DMSO immediately prior to use. Lower concentrations of ATEC or DEHP were prepared by serial dilution from the highest concentration. The positive controls SA and MMC were dissolved in water or saline. 2-NF, 2-AA, 9-AA, AF2 and CP were dissolved in DMSO. All stock solutions were stored in a deep freezer, below −60 °C, and thawed just prior to use.

### Dose range finding study

For the Ames test, a dose range finding study was conducted to establish the highest dose. Starting from the highest dose of 5 µl/plate for ATEC and DEHP, the following lower dose levels were prepared by sequential 2-fold dilution (2.5, 1.25, 0.625, and 0.3125 µl/plate).
The highest dose level for the main study was justified by the determination of no growth inhibition by ATEC or DEHP in any bacterial strain in the absence and presence of S9 metabolic activation.

### Measurement of cell growth inhibition

For the CA assay, cytotoxicity in terms of cell growth inhibition was measured using CHL/IU cells as follows23. The highest dose of ATEC or DEHP was 2.0 μl/ml. Sequential dilution was performed to produce additional lower dose levels (1.0, 0.25, 0.0625, and 0.015625 μl/ml). DMSO was used as a negative control. Briefly, cells were resuspended in EMEM and seeded at a concentration of 10,000 cells/200 μl/well in 96-well plates (Nunc, Denmark) in a 5% CO2 incubator at 37 °C overnight. Cells were then incubated with 2, 1, 0.25, 0.0625, or 0.015625 μl/ml ATEC or DEHP for 6 or 24 h in the absence or presence of the S9 metabolic activation mixture (mix). One group was treated with ATEC or DEHP, with or without S9 mix, for 6 h, after which each well was washed with Dulbecco's phosphate-buffered saline (D-PBS); fresh EMEM medium was then added and the cells were cultured for an additional 18 h. The other group was treated with ATEC or DEHP without S9 mix for 24 h. The assay was performed in quadruplicate for each concentration of ATEC or DEHP. Cells were then detached with 0.25% Trypsin-EDTA and collected by centrifugation at 150 × g for 5 min. Cell pellets were resuspended and mixed with trypan blue. The total cell number remaining in each group was determined with a hemocytometer. Cell viability and the relative increase in cell counts (RICC) were calculated as follows.

$$\text{Cell viability}\ (\%)=\frac{\text{Total cell number in experimental group}}{\text{Total cell number in control group}}\times 100$$

$$\text{RICC}\ (\%)=\frac{\text{Increased cell number in experimental group}}{\text{Increased cell number in control group}}\times 100$$

In addition, the relative population doubling (RPD) in the experimental group was calculated from the population doubling (PD) in the control group as below. PD is the log of the ratio of the final cell count to the starting (initial baseline) cell count, divided by the log of 2; that is, PD = [log(Cell count_final/Cell count_initial)]/log 2, as described previously24,25.

$$\text{RPD}\ (\%)=\frac{\text{PD in experimental group}}{\text{PD in control group}}\times 100$$

### Preparation of minimal glucose agar plates and top agar

Minimal glucose agar plates were prepared from a mixture of autoclaved Bacto agar (15 g in 930 ml ultrapure water), 50 ml of 40% D-(+)-glucose and 20 ml of sterile 50X Vogel-Bonner salts (MgSO4·7H2O 1 g, citric acid 10 g, K2HPO4 50 g, NaNH5PO4·4H2O 17.5 g in 100 ml ultrapure water). Top agar containing 0.6% Bacto agar and 0.5% NaCl was autoclaved and mixed with 0.5 mM L-histidine/D-biotin (Sigma-Aldrich, U.S.A.) at a ratio of 10 to 1 for Salmonella typhimurium, or with 0.5 mM L-tryptophan (Sigma-Aldrich, U.S.A.) solution at a ratio of 10 to 1 for Escherichia coli, respectively.

### Preparation of S9 mix

Mutazyme S9 mix including NADPH cofactors was purchased from Molecular Toxicology, Inc. (MOLTOXTM, Boone, NC, USA) and stored below −20 °C until use. The S9 mix was prepared from the liver of Sprague-Dawley rats induced with Aroclor 1254.
For the Ames test, 500 μl of 5% S9 mix was mixed with 100 μl of each test substance solution, 100 μl of each bacterial suspension and 2 ml of top agar. For the chromosomal aberration (CA) assay, the final concentration of S9 mix was 1% for the 6 h treatment.

### Ames test

The Ames test was performed using histidine-requiring Salmonella typhimurium (TA98, TA100, TA1535 and TA1537) and tryptophan-requiring Escherichia coli (WP2uvrA(pKM101)) strains as follows26,27,28. In the presence of S9 metabolic activation, 100 μl of each test substance solution, the negative control or the strain-specific positive control were placed in glass tubes sterilized in a dry oven. Then 500 μl of S9 mix and 100 μl of each bacterial suspension were added, mixed and incubated in a shaking water bath at 37 °C for 20 min. Next, 2 ml of top agar containing each bacterial strain was added and mixed thoroughly with a vortex mixer. Lastly, this mixture was poured onto a minimal glucose agar plate and allowed to solidify at room temperature. In the absence of metabolic activation, the experimental method was identical to the above except for the use of 500 μl of 0.1 M phosphate buffer (pH 7.4) instead of S9 mix. After solidification of the top agar, the minimal glucose agar plates were incubated upside down at 37 °C for 48 h. The number of revertant colonies was counted manually; each experiment was considered valid only if the number of revertant colonies in the positive control group, without (Table 1) or with (Table 2) S9, was at least twice that in the negative control group, if at least 4 dose levels exhibited no growth inhibition, and if no plates showed any evidence of contamination. Results in experimental groups were considered positive when the number of revertant colonies in any strain at one or more doses was at least twice that in the negative control; the increase also had to show dose dependency or reproducibility.

### Chromosomal aberration assay

Chromosomal aberrations were assessed as follows29. The highest dose level for the main study was determined from the RICC value calculated in the section 'Measurement of cell growth inhibition'. Additional lower dose levels were prepared by 2-fold serial dilution (Table 3). Briefly, CHL/IU cells were seeded at 2.5 × 10^5 cells/5 ml in 60 mm plates (BD, USA) and incubated in a 5% CO2 incubator at 37 °C overnight. One group was treated with ATEC or DEHP, with or without S9 mix, for 6 h, after which each well was washed with D-PBS; fresh EMEM medium was then added and the cells were cultured for an additional 18 h. The other group was treated with ATEC or DEHP without S9 mix for 24 h. The assay was performed in quadruplicate for each concentration of ATEC or DEHP. Cells were arrested in metaphase by the addition of 0.2 μg/ml colcemid (Invitrogen, U.S.A.) 2 h before cell harvest. Cells were collected by detachment with 0.25% trypsin-EDTA and centrifugation at 150 × g for 5 min. Cells were then incubated in 0.075 M KCl hypotonic solution at 37 °C for 20 min and fixed with ice-cold fixative (methanol : acetic acid, 3 : 1). One or two drops of the suspension were placed on a glass slide. Cells on each slide were air-dried and stained with 3% Giemsa solution in 0.01 M Sörenson phosphate buffer (pH 6.8) for 20 min. Chromosomes in 200 metaphases were evaluated for each concentration as follows: each cell in metaphase was observed under an inverted microscope (BX53, Olympus, Japan) at a magnification of 400× or 1,000×.
Any cell with one or more structural or numerical aberrations was counted as one aberrant cell. Structural chromosomal aberrations were classified as chromatid break (ctb), chromatid exchange (cte), chromosome break (csb), chromosome exchange (cse), and chromatid or chromosome gap (gap). When several gaps or breaks were evident in a metaphase, these were recorded as a fragment (frg). Gaps (g) were not recorded as structural aberrations and were not included in the calculation of the aberration rates. An achromatic lesion narrower than the width of one chromatid was recorded as a gap. The frequencies of numerical aberrations (polyploidy, pol; endoreduplication, end) were also recorded. The frequency of cells with chromosome aberrations, excluding gaps, was evaluated in accordance with the criteria of Toshio Sofuni30,31: a frequency of aberrant cells below 5% was considered negative; above 10%, positive; and 5–10%, equivocal (±). In addition, the number of dose levels with more than 200 scored metaphases had to be above three, and the cultures could not show any evidence of contamination.

### Micronucleus test

Mice were orally administered 2,000, 1,000, or 500 mg/kg of ATEC or DEHP suspended in 0.5% carboxymethylcellulose (CMC) solution. Mice in the positive control group were intraperitoneally injected with 2 mg/kg MMC dissolved in saline. Body weight, clinical signs and mortality were recorded immediately, at 2 h, and on days 1, 2 and 3 after the administration of each material. All animals were sacrificed by cervical dislocation, and bone marrow cells were collected by rinsing the femur canal with 200 µl FBS at 24 h after the administration of the test materials. Cells were centrifuged at 150 × g for 5 min, and the cell pellets were dispersed well. One drop of the suspension was placed on a clean dry slide and spread. The slides were air-dried, fixed with methanol for 5 min and stained with 3% Giemsa staining solution in 0.01 M Sörenson phosphate buffer (pH 6.8) for 30 min. The stained slides were washed with 0.01 M Sörenson phosphate buffer (pH 6.8) and 0.004% citric acid solution. The slides were then air-dried. Polychromatic erythrocytes (PCE) were observed under a fluorescence microscope (BX51, Olympus, Japan) at a magnification of 1,000×. The number of micronucleated polychromatic erythrocytes (MNPCE) in 2,000 PCE was recorded. The index of bone marrow cytotoxicity was calculated as the ratio of PCE to the total number of erythrocytes. Data were considered valid if the number of MNPCE in 2,000 PCE in the positive control group was statistically increased compared to that in the negative control group32,33.

### Glucose tolerance test

Changes in blood glucose level (BGL) were measured by a glucose tolerance test (GTT) as follows. At least 4 mice were housed for each experimental group. Mice in each group were administered 4, 40, 400, or 2,000 mg/kg ATEC or DEHP by oral gavage for 5 consecutive days. One day before the GTT, after the last administration, mice were moved into fresh cages and fasted for 18 h with only water supplied. BGL was measured using an Accu-Chek Active glucometer (Roche, Basel, Switzerland) after blood was collected from each mouse's tail vein. A 20% D-glucose solution was prepared and sterilized using a 0.2 µm filter. Mice were then injected intraperitoneally with 10 µl of 20% D-glucose solution per 1 g of body weight after the 18 h fast. BGL was measured at 15, 30, 60, 90 and 120 min after glucose injection.
The GTT included measurement of the basal BGL, without injection of 20% D-glucose solution, before the main experiment. Changes in BGL were presented as line graphs.

### Statistical analyses

For the Ames test, no statistical analysis was performed; individual plate counts, averages and standard deviations of revertant colonies are presented. For the CA assay, Fisher’s exact test was used to compare the negative control group with the experimental or positive control groups30,31,34. For the micronucleus test, the criteria of Kastenbaum and Bowman were used to judge a significant increase in the number of MNPCE33. Homogeneity of variance in the frequency of PCE and in body weight was analyzed using Bartlett’s test. One-way analysis of variance (ANOVA) was employed for homogeneous data and, if significant, Dunnett’s t-test was applied for multiple comparisons. A P value of <0.05 or <0.01 was considered significant.

## Results

### ATEC and DEHP did not induce the formation of revertant colonies

To determine the mutagenic potential of ATEC and DEHP, bacterial revertant formation was measured in triplicate by the Ames test. First, a dose range finding study was conducted to define the dose levels of the main study using Salmonella typhimurium (TA98, TA100, TA1535 and TA1537) or Escherichia coli (WP2uvrA (pKM101)) at various doses (5, 2.5, 1.25, 0.625, and 0.3125 μl/plate) of ATEC and DEHP in the absence and presence of metabolic activation with S9 mix. No changes in bacterial growth were detected with ATEC or DEHP at any dose level (data not shown), and no contamination was observed in any bacterial culture (data not shown). The main Ames study using Salmonella typhimurium was then performed with all doses of ATEC or DEHP in the absence (Fig. 1) or presence (Fig. 2) of S9 mix. No increase in bacterial revertants of Salmonella typhimurium was observed after treatment with ATEC in the absence of metabolic activation, as with DEHP. As shown in Fig. 1, the mean number of revertant colonies in the ATEC- or DEHP-treated groups was not more than twice that in the negative control group. Likewise, no increase in bacterial revertants of Salmonella typhimurium was observed after treatment with ATEC in the presence of metabolic activation, as with DEHP (Fig. 2); again, the mean number of revertant colonies in the ATEC- or DEHP-treated groups was not more than twice that in the negative control group. When the Ames test using Escherichia coli (WP2uvrA (pKM101)) was performed with all doses of ATEC or DEHP in the absence or presence of S9 mix, bacterial revertants of Escherichia coli were not increased by treatment with ATEC in either condition; the mean number of revertant colonies in the ATEC- or DEHP-treated groups was not more than twice that in the negative control group (Fig. 3). Concurrently, when positive and negative control groups were tested in the absence (Table 1) or presence (Table 2) of S9 mix, the mean number of revertant colonies in the positive control group was markedly increased compared with that in the negative control group. These results suggest that neither ATEC nor DEHP showed any indication of mutagenic potential under our experimental conditions, and that the genotoxic profile of ATEC is comparable to that of DEHP in this assay.
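For illustration, the twofold-increase criterion described above can be expressed as a short script. The following Python sketch uses invented colony counts (not data from this study) to show how plate counts might be screened against that rule:

```python
# Minimal sketch of the Ames twofold-increase rule described above.
# All counts are invented for illustration; they are not data from this study.

def is_positive(treated_counts, negative_control_counts, factor=2.0):
    """Return True if the mean revertant count of a treated group is at
    least `factor` times the mean of the negative control group."""
    mean_treated = sum(treated_counts) / len(treated_counts)
    mean_negative = sum(negative_control_counts) / len(negative_control_counts)
    return mean_treated >= factor * mean_negative

negative_control = [18, 22, 20]              # revertant colonies per plate (triplicate)
positive_control = [250, 230, 260]           # strain-specific positive control
test_substance = {5: [21, 19, 23], 2.5: [20, 18, 22]}  # dose (ul/plate) -> counts

# The experiment is only valid if the positive control passes the twofold rule.
assert is_positive(positive_control, negative_control)

for dose, counts in test_substance.items():
    verdict = "positive" if is_positive(counts, negative_control) else "negative"
    print(f"{dose} ul/plate: {verdict}")
```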
### ATEC and DEHP did not trigger chromosomal aberration

We examined the cytotoxicity of ATEC and DEHP in CHL/IU cells by incubation with various concentrations for 6 or 24 h in the absence or presence of S9 mix. The percentage of cell viability, RPD and RICC were then calculated, and the dose levels for the main study were determined from the RICC values showing no growth inhibition. DEHP was cytotoxic above 1.0 μl/ml in cells treated for 6 h with S9 mix (Fig. 4A, middle), and above 0.0625 or 0.015625 μl/ml in the 6 h- or 24 h-treated groups without S9 mix, respectively. ATEC was cytotoxic above 0.25 μl/ml in cells treated for 6 h with S9 mix, and above 0.25 or 0.0625 μl/ml in the 6 h- or 24 h-treated groups without S9 mix, respectively (Fig. 4A, left and right). No RICC changes in the DEHP- or ATEC-treated groups were observed below 2.0 or 0.0625 μl/ml, respectively, in cells treated for 6 h with S9 (Fig. 4B, middle). No RICC changes in either the DEHP- or ATEC-treated group were observed below 0.0625 or 0.015625 μl/ml in cells treated for 6 h or 24 h without S9 mix, respectively (Fig. 4B, left and right). No RPD changes in either the DEHP- or ATEC-treated group were observed below 2.0 μl/ml in cells treated for 6 h with S9 (Fig. 4C, middle). No RPD changes in the DEHP-treated group were observed below 0.0625 or 0.015625 μl/ml in cells treated for 6 h or 24 h without S9 mix, respectively, and no RPD changes in the ATEC-treated group were observed below 0.25 μl/ml in cells treated for either 6 h or 24 h without S9 mix (Fig. 4C, left and right). These data suggest that ATEC may be less cytotoxic to CHL/IU cells than DEHP. Based on the growth inhibition results, the highest dose levels of ATEC were calculated from its RICC: 0.7 μg/ml for the 6 h treatment with S9 mix, and 1.2 or 0.2 μg/ml for the 6 h or 24 h treatment without S9 mix, respectively. The highest dose levels of DEHP were 0.13, 2.0 and 0.0156 μg/ml for the same respective conditions as in the ATEC-treated group (Table 3). Chromosomal aberration was then measured at the highest dose and at three additional lower doses prepared by 2-fold serial dilution. The frequency of cells with chromosome aberrations was less than 5% for each experimental group. No significant differences in the frequency of cells with chromosome aberrations were observed at any dose level of ATEC (Table 4) or DEHP (Table 5) compared with the negative control group. In contrast, the positive control groups treated with 10 μg/ml CP or 0.05 μg/ml MMC showed a significant increase in the frequency of cells with structural chromosomal aberrations compared with the negative control group (Tables 4 and 5). These data suggest that ATEC may not alter chromosome structure, comparably to DEHP.

### ATEC and DEHP did not induce micronuclei in polychromatic erythrocytes

We tested the mutagenic potential of ATEC and DEHP by micronucleus formation in mouse bone marrow cells. Mice were treated with 2,000 mg/kg by oral gavage, the intended route of administration in humans. No mortality or abnormal clinical signs were observed in any animal at any dose level (data not shown). Because no sex differences in micronucleus formation were found in the preliminary study, female mice were not used in the main experiment (data not shown). No significant differences in body weight were detected at 24, 48 and 72 h after the last administration by oral gavage compared with the control group (data not shown).
No statistically significant increases in the incidence of micronucleated polychromatic erythrocytes (MNPCE) among polychromatic erythrocytes (PCE) were noted compared with the control group at any time point (Table 6). We then examined dose-dependent micronucleus formation by ATEC and DEHP in mouse bone marrow cells. Mice were treated with 500, 1,000 or 2,000 mg/kg by oral gavage. While no significant incidence of MNPCE in PCE was observed in the groups treated with ATEC or DEHP, it was significantly increased in the MMC-treated group compared with the control group (Table 7). No statistically significant differences in the ratio of PCE to total erythrocytes were noted at 24, 48 or 72 h after administration of ATEC, DEHP or MMC compared with the control group (Tables 6 and 7). These results suggest that ATEC may not have any potential to induce micronucleus formation in PCE from mouse bone marrow under our experimental conditions, comparably to DEHP.

### Glucose tolerance induction by ATEC was higher than that by DEHP

Given the absence of genotoxicity, we examined the physiological effects of ATEC and DEHP by measuring blood glucose level (BGL). When mice were administered 2,000 mg/kg ATEC or DEHP by oral gavage for 5 consecutive days, BGL was increased significantly by ATEC, to a greater extent than by DEHP (Fig. 5A,C,D). When mice were then acclimated for 7 days without administration of 2,000 mg/kg ATEC or DEHP and subsequently injected with the same dose once, no change in BGL was observed compared with the control group (Fig. 5B). In contrast, although no changes in BGL were detected after administration of 4, 40, or 400 mg/kg of ATEC or DEHP for 5 consecutive days (Fig. 5C,D), a significant change in BGL was observed after an additional single administration of 400 mg/kg ATEC on the 7th day after the last administration, but not after 4 or 40 mg/kg (Fig. 5E). Little change in BGL, except at 30 min, was detected on the 7th day after the last administration of 4, 40, or 400 mg/kg of DEHP (Fig. 5F). These data demonstrate that the effect of ATEC on BGL was somewhat greater than that of DEHP, and that glucose tolerance induction by ATEC may be associated with the duration and amount of exposure. The results also imply that ATEC may affect BGL differently depending on the exposure level, and that an individual pre-exposed to ATEC could become tolerant with respect to BGL changes.

## Discussion

The human body can be influenced by EDCs contained in many products. Phthalates are well-known EDCs, widely used as effective synthetic plasticizers. DEHP is the most abundant phthalate in a variety of consumer products1,2. DEHP has toxic effects on both animal and human health4,6,7,8,9. DEHP may enhance tumorigenesis4,12 and is classified as a probable human carcinogen by the US EPA13. DEHP exposure induces glucose metabolic disorders14,15 and impairs insulin receptor and glucose transporter 4 gene expression16. Therefore, many substitute plasticizers have been developed to overcome the weaknesses of DEHP. ATEC, a triester of citric acid, is comparatively safe and functions as a substitute plasticizer in cosmetics17,18. Some reports have shown that metabolic disorders are associated with genotoxicity of toxicants21,22. Therefore, in this study, we investigated whether ATEC could be a better substitute plasticizer than DEHP as judged by in vivo genotoxicity.
We also determined the metabolic changes induced by ATEC and their relationship with genotoxicity in comparison with DEHP. Our data did not show genotoxicity of ATEC or DEHP in vitro or in vivo, indicating a non-mutagenic potential of ATEC and DEHP under our experimental conditions. The increase in the number of revertant bacterial colonies by ATEC was insignificant compared with the control group, and no significant difference in revertant bacterial colonies was seen with DEHP. In addition, lower cytotoxicity in CHL/IU cells was observed with ATEC than with DEHP. Chromosome structure may not be altered by ATEC or DEHP, and ATEC may not induce micronucleus formation in PCE, comparably to DEHP. This suggests that the non-genotoxic profile of ATEC is comparable to that of DEHP. However, while a low dose of ATEC did not affect BGL, a high dose of ATEC increased BGL. This implies that glucose tolerance induction by ATEC may be associated with the duration and amount of exposure, and it suggests that a low amount of ATEC is comparatively safe with respect to metabolic changes caused by glucose tolerance induction. It also suggests that people pre-exposed to ATEC or DEHP should be careful not to be exposed repeatedly to the same compound. We therefore recommend that ATEC be used with caution as a substitute plasticizer for DEHP. Based on our previous report4, an in vitro concentration of 10−5 M DEHP corresponds to 3,900.6 μg/L, equivalent to approximately 4.0 mg/kg in vivo. Although the European Food Safety Authority (EFSA) and the Republic of Korea legally permit 50 μg/kg of DEHP per day (EFSA, 2005a, 2005b), this does not rule out a possible risk from long-term, continuous and repeated exposure to DEHP. The experimental conditions should therefore reflect a daily exposure situation in which the substance is taken up by various routes, such as the mouth, skin and respiration, in our living environment. We also used these high concentrations because ATEC and DEHP gradually decrease through absorption, distribution, metabolism and excretion after oral administration. In our in vivo experiments, mice were administered ATEC or DEHP at doses 10-, 100- and 500-fold higher than 4.0 mg/kg DEHP for 5 days. A correlation can thus be drawn between the lowest in vivo dose used in this study, 4.0 mg/kg, and the human daily exposure dose of 50 μg/kg per day, which is the tolerable daily intake (TDI) of EFSA and the Republic of Korea35,36. Doses of 4, 40, 400 and 2,000 mg/kg DEHP correspond to about 80, 800, 8,000 and 40,000 fold the human TDI on a per-kg basis. In addition, 4, 40, 400 and 2,000 mg/kg DEHP can be converted to absolute amounts of about 100 μg, 1 mg, 10 mg and 50 mg administered to an individual mouse of 25 g average body weight. These absolute amounts are about 2-, 20-, 200- and 1,000-fold higher in the mouse than the human TDI of 50 μg/kg body weight/day. We therefore consider this acceptable for reflecting long-term and repeated exposure, because ATEC and DEHP in our experiments were administered for only 5 days. We questioned why glucose tolerance induction by ATEC and DEHP differed depending on the duration and amount of exposure: no change in BGL at 4, 40, or 400 mg/kg and a significant change in BGL at 2,000 mg/kg after 5 consecutive daily administrations; significant changes in BGL at 4, 40, or 400 mg/kg and no change in BGL at 2,000 mg/kg after an additional single administration 7 days after the last of the 5 consecutive administrations.
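The dose conversions quoted in this paragraph can be checked with a short calculation. The Python sketch below assumes only the values stated above (a 25 g average mouse body weight and the 50 μg/kg/day TDI) and reproduces the approximate fold-to-TDI ratios and absolute amounts per mouse:

```python
# Back-of-the-envelope check of the dose conversions discussed above.
# Assumes a 25 g mouse and the 50 ug/kg/day human TDI quoted in the text.

doses_mg_per_kg = [4, 40, 400, 2000]
tdi_mg_per_kg = 0.050          # 50 ug/kg body weight per day
mouse_weight_kg = 0.025        # 25 g average body weight

for dose in doses_mg_per_kg:
    fold_vs_tdi = dose / tdi_mg_per_kg      # per-kg comparison with the human TDI
    absolute_mg = dose * mouse_weight_kg    # amount given to one mouse
    print(f"{dose} mg/kg -> {fold_vs_tdi:,.0f}x TDI, {absolute_mg:g} mg per mouse")

# Expected output: 80x / 0.1 mg, 800x / 1 mg, 8,000x / 10 mg, 40,000x / 50 mg,
# matching the approximate values stated in the discussion.
```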
These data demonstrate that although 2,000 mg/kg is a very high dose, no effect on BGL was observed after an additional single administration, whereas repeated exposure strongly altered BGL. However, an additional single administration of even lower doses to pre-exposed individuals did affect BGL. Further study is therefore required to clarify the different in vivo effects on BGL between the first and repeated exposures at different doses. Many tumor types consume glucose at an extraordinarily high rate, a phenomenon known as the ‘Warburg effect’. Glucose provides the source for a diverse array of cellular functions, and tumor cells acquire a unique pattern of metabolic enzymes and regulation that non-transformed cells use as sparingly as possible37. DEHP also promotes EMT and cancer cell metastasis38, which may enhance colon or hepatic tumorigenesis10,11,12, and the US EPA classifies DEHP as a probable human carcinogen13. DEHP-induced oxidative stress may induce inflammation, the expression of protooncogenes and tumorigenesis in PPARα-null mice11. The tumor microenvironment (TME) supports inappropriate metabolic reprogramming that impacts the antitumor immune response and tumor progression39. DEHP also reduces tumor-preventing ability by suppressing in vivo immune responses of macrophages4. It is therefore possible that the metabolic changes caused by glucose tolerance induction by ATEC are associated with the initiation of tumor formation and the rate of tumor growth, as in DEHP-exposed mice. In conclusion, although we could not explain the different effects on BGL between different doses, and although ATEC did not induce genotoxicity in bacterial and cellular systems, the metabolic changes caused by glucose tolerance induction were greater in the ATEC-treated group than in the DEHP-treated group. Caution is therefore warranted in stating that ATEC is a safer substitute plasticizer than DEHP with respect to metabolic changes. In addition, these results suggest that companies should reduce or stop the use of DEHP, and even of substitutes such as ATEC, and thereby contribute to decreasing their contamination of the environment.

## References

1. 1. Aylward, L. L., Hays, S. M., Gagne, M. & Krishnan, K. Derivation of Biomonitoring Equivalents for di(2-ethylhexyl)phthalate (CAS No. 117-81-7). Regul Toxicol Pharmacol 55, 249–258, https://doi.org/10.1016/j.yrtph.2009.09.001 (2009). 2. 2. Rusyn, I., Peters, J. M. & Cunningham, M. L. Modes of action and species-specific effects of di-(2-ethylhexyl)phthalate in the liver. Crit Rev Toxicol 36, 459–479, https://doi.org/10.1080/10408440600779065 (2006). 3. 3. ATSDR. Toxicological Profile for di(2-ethylhexyl) Phthalate. GA: Agency for Toxic Substances and Disease Registry (2009). 4. 4. Lee, J. W., Park, S., Han, H. K., Gye, M. C. & Moon, E. Y. Di-(2-ethylhexyl) phthalate enhances melanoma tumor growth via differential effect on M1-and M2-polarized macrophages in mouse model. Environ Pollut 233, 833–843, https://doi.org/10.1016/j.envpol.2017.10.030 (2018). 5. 5. Wams, T. J. Diethylhexylphthalate as an environmental contaminant–a review. Sci Total Environ 66, 1–16 (1987). 6. 6. Grote, K. et al. Sex differences in effects on sexual development in rat offspring after pre- and postnatal exposure to triphenyltin chloride. Toxicology 260, 53–59, https://doi.org/10.1016/j.tox.2009.03.006 (2009). 7. 7. Ito, Y. & Nakajima, T. PPARalpha- and DEHP-Induced Cancers. PPAR research 2008, 759716, https://doi.org/10.1155/2008/759716 (2008). 8. 8. Liu, C., Zhao, L., Wei, L. & Li, L.
DEHP reduces thyroid hormones via interacting with hormone synthesis-related proteins, deiodinases, transthyretin, receptors, and hepatic enzymes in rats. Environmental science and pollution research international 22, 12711–12719, https://doi.org/10.1007/s11356-015-4567-7 (2015). 9. 9. Pan, G. et al. Decreased serum free testosterone in workers exposed to high levels of di-n-butyl phthalate (DBP) and di-2-ethylhexyl phthalate (DEHP): a cross-sectional study in China. Environmental health perspectives 114, 1643–1648 (2006). 10. 10. Ghosh, J., Das, J., Manna, P. & Sil, P. C. Hepatotoxicity of di-(2-ethylhexyl)phthalate is attributed to calcium aggravation, ROS-mediated mitochondrial depolarization, and ERK/NF-kappaB pathway activation. Free radical biology & medicine 49, 1779–1791, https://doi.org/10.1016/j.freeradbiomed.2010.09.011 (2010). 11. 11. Ito, Y. et al. Di(2-ethylhexyl)phthalate induces hepatic tumorigenesis through a peroxisome proliferator-activated receptor alpha-independent pathway. Journal of occupational health 49, 172–182 (2007). 12. 12. Chen, H. P. et al. Effects of di(2-ethylhexyl)phthalate exposure on 1,2-dimethyhydrazine-induced colon tumor promotion in rats. Food and chemical toxicology: an international journal published for the British Industrial Biological Research Association 103, 157–167, https://doi.org/10.1016/j.fct.2017.03.014 (2017). 13. 13. Doull, J. et al. A cancer risk assessment of di(2-ethylhexyl)phthalate: application of the new U.S. EPA Risk Assessment Guidelines. Regul Toxicol Pharmacol 29, 327–357, https://doi.org/10.1006/rtph.1999.1296 (1999). 14. 14. Martinelli, M. I., Mocchiutti, N. O. & Bernal, C. A. Dietary di(2-ethylhexyl)phthalate-impaired glucose metabolism in experimental animals. Hum Exp Toxicol 25, 531–538, https://doi.org/10.1191/0960327106het651oa (2006). 15. 15. Xu, J. et al. Di-(2-ethylhexyl)-phthalate induces glucose metabolic disorder in adolescent rats. Environ Sci Pollut Res Int 25, 3596–3607, https://doi.org/10.1007/s11356-017-0738-z (2018). 16. 16. Rajesh, P. & Balasubramanian, K. Di(2-ethylhexyl)phthalate exposure impairs insulin receptor and glucose transporter 4 gene expression in L6 myotubes. Hum Exp Toxicol 33, 685–700, https://doi.org/10.1177/0960327113506238 (2014). 17. 17. Wenninger, J. A., Canterbery, R. C. & McEwen, J. G. N. International cosmetic ingredient dictionary and handbook, 8th ed. Washington, DC. Cosmetic, Toiletry, and Fragrance Association (2000). 18. 18. Zhang, J. F. & Sun, X. Physical characterization of coupled poly(lactic acid)/starch/maleic anhydride blends plasticized by acetyl triethyl citrate. Macromol Biosci 4, 1053–1060, https://doi.org/10.1002/mabi.200400076 (2004). 19. 19. Johnson, W. Jr. Final report on the safety assessment of acetyl triethyl citrate, acetyl tributyl citrate, acetyl trihexyl citrate, and acetyl trioctyl citrate. Int J Toxicol 21(Suppl 2), 1–17, https://doi.org/10.1080/10915810290096504 (2002). 20. 20. Finkelstein, M. & Gold, H. Toxicology of the citric acid esters: tributyl citrate, acetyl tributyl citrate, triethyl citrate, and acetyl triethyl citrate. Toxicol Appl Pharmacol 1, 283–298 (1959). 21. 21. Damasceno, D. C. et al. Metabolic profile and genotoxicity in obese rats exposed to cigarette smoke. Obesity (Silver Spring) 21, 1596–1601, https://doi.org/10.1002/oby.20152 (2013). 22. 22. Soto-García, M., Rosales-Castro, M., Escalona-Cardoso, G. N. & Paniagua-Castro, N. Evaluation of Hypoglycemic and Genotoxic Effect of Polyphenolic Bark Extract from Quercus sideroxyla. 
Evidence-Based Complementary and Alternative Medicine, 1–7 (2016). 23. 23. Mosmann, T. Rapid colorimetric assay for cellular growth and survival: application to proliferation and cytotoxicity assays. J Immunol Methods 65, 55–63 (1983). 24. 24. Galloway, S. M. Cytotoxicity and chromosome aberrations in vitro: experience in industry and the case for an upper limit on toxicity in the aberration assay. Environ Mol Mutagen 35, 191–201 (2000). 25. 25. Greenwood, S. K. et al. Population doubling: a simple and more accurate estimation of cell growth suppression in the in vitro assay for chromosomal aberrations that reduces irrelevant positive results. Environ Mol Mutagen 43, 36–44, https://doi.org/10.1002/em.10207 (2004). 26. 26. Claxton, L. D. et al. Guide for the Salmonella typhimurium/mammalian microsome tests for bacterial mutagenicity. Mutat Res 189, 83–91 (1987). 27. 27. Maron, D. M. & Ames, B. N. Revised methods for the Salmonella mutagenicity test. Mutat Res 113, 173–215 (1983). 28. 28. Yahagi, T., Nagao, M., Seino, Y., Matsushima, T. & Sugimura, T. Mutagenicities of N-nitrosamines on Salmonella. Mutat Res 48, 121–129 (1977). 29. 29. Ishidate, M. Jr. & Odashima, S. Chromosome tests with 134 compounds on Chinese hamster cells in vitro–a screening for chemical carcinogens. Mutat Res 48, 337–353 (1977). 30. 30. Sofuni, T. et al. A comparison of chromosome aberration induction by 25 compounds tested by two Chinese hamster cell (CHL and CHO) systems in culture. Mutat Res 241, 175–213 (1990). 31. 31. Sofuni, T. Data book of chromosomal aberration test in vitro, Revised edition (1998). 32. 32. Hayashi, M. The micronucleus test, Scientist Inc. Monograph series No. 2 (1999). 33. 33. Kastenbaum, M. A. & Bowman, K. O. Tables for determining the statistical significance of mutation frequencies. Mutat Res 9, 527–549 (1970). 34. 34. Damarla, S. R., Komma, R., Bhatnagar, U., Rajesh, N. & Mulla, S. M. A. An Evaluation of the Genotoxicity and Subchronic Oral Toxicity of Synthetic Curcumin. J Toxicol 2018, 6872753, https://doi.org/10.1155/2018/6872753 (2018). 35. 35. EFSA. Opinion of the scientific panel on food additives, flavourings, processing aids and material in contact with food (AFC) on a request from the commission related to bis(2-ethylhexyl)phthalate (DEHP) for use in food contact materials. EFSA J 243, 1–20 (2005). 36. 36. EFSA. Opinion of the scientific panel on food additives, flavourings, processing aids and material in contact with food (AFC) on a request from the commission related to di-butylphthalate (DBP) for use in food contact materials. EFSA J 242, 1–17 (2005). 37. 37. Herling, A., Konig, M., Bulik, S. & Holzhutter, H. G. Enzymatic features of the glucose metabolism in tumor cells. FEBS J 278, 2436–2459, https://doi.org/10.1111/j.1742-4658.2011.08174.x (2011). 38. 38. Oral, D., Erkekoglu, P., Kocer-Gumusel, B. & Chao, M. W. Epithelial-Mesenchymal Transition: A Special Focus on Phthalates and Bisphenol A. Journal of environmental pathology, toxicology and oncology: official organ of the International Society for Environmental Toxicology and Cancer 35, 43–58, https://doi.org/10.1615/JEnvironPatholToxicolOncol.2016014200 (2016). 39. 39. Kouidhi, S., Ben Ayed, F. & Benammar Elgaaied, A. Targeting Tumor Metabolism: A New Challenge to Improve Immunotherapy. Front Immunol 9, 353, https://doi.org/10.3389/fimmu.2018.00353 (2018). 
## Acknowledgements

This work was supported by grants from the Mid-career Researcher Program (#2016R1A2B4007446, #2018R1A2A3075601) and the Public Problem-Solving Program (NRF-015M3C8A6A06014500) through the National Research Foundation (NRF) of Korea funded by the Ministry of Science, ICT & Future Planning.

## Author information

J.W.L. set up and conducted the experiments, prepared Figures 1–5, and wrote the primary manuscript. S.J.L. designed and conducted the experiments and prepared Tables 1–7. M.C.G. contributed to the main idea of the study and proofread the manuscript. E.Y.M. planned the main idea of the study, analyzed the results, revised the manuscript, and supported J.W.L. and S.J.L. by providing reagents, materials and analysis tools. All authors reviewed the manuscript. Correspondence to Myung Chan Gye or Eun-Yi Moon.

## Ethics declarations

### Competing Interests

The authors declare no competing interests. Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Rights and permissions

Reprints and Permissions
# Help in “Intersection of a plane passing by the origin with an oblate spheroid”

I have an oblate spheroid (E): $(x/a)^2+(y/a)^2+(z/b)^2 = 1$ and a plane (P): $ux+vy+wz+d=0$ with $d=0$ (passing through the origin). I plugged $z = -(ux+vy)/w$ from (P) into (E) and obtained the equation $$[(wb)^2+(au)^2]x^2 + [(wb)^2+(av)^2]y^2 + 2uva^2\,xy - (abw)^2 = 0.$$ This is the equation of a conic (here an ellipse, because the intersection of a plane through the origin with an oblate spheroid is always an ellipse, except that it is a circle when the plane is parallel to the $xy$ plane). The general conic equation is $Ax^2 + Bxy + Cy^2 + F = 0$; the $Dx$ and $Ey$ terms vanish because the ellipse is always centered at the origin $O(0,0,0)$. After rotating to remove the cross term, I got the form $A'x^2 + C'y^2 - F' = 0$ with $A' = a^2(u^2 + v^2) + (bw)^2$, $C' = (bw)^2$ and $F' = (abw)^2$. Continuing to find the semi-major axis $a'$ and semi-minor axis $b'$ of this ellipse, I get $a' = a$ and $b' = abw/\sqrt{a^2(u^2 + v^2) + (bw)^2}$. My problem is: the semi-major axis $a'$ shouldn't equal the semi-major axis $a$ of the oblate spheroid in every case! What is wrong? Thanks for any help, regards!

The equation you got is the projection of the ellipse (the intersection between spheroid and plane) onto the $(x,y)$ plane. As your plane cuts the "equator" of the spheroid along a diameter, then yes: the semi-major axis $a'$ of the projection is equal to the semi-major axis of the oblate spheroid. And the same is true for the intersection, of course. You can see below what happens: the plane $ACF$ through the origin (light gray) cuts the $(x,y)$ plane (dark gray) along line $AC$. But of course $AC$ is a diameter of the spheroid equator (black) and is the major axis of the intersection ellipse (blue). Line $EF$ lies on the plane but has no special meaning: it is there just to help you visualize the situation. To convince you that $AC$ is indeed the major axis, here's the same diagram seen from "above".

• If the plane passes through the origin then its intersection with the $(x,y)$ plane is a line through the origin, and the intersection of the spheroid with the $(x,y)$ plane is a circle (which I called "equator" for short) centered at the origin and of radius $a$. That line always cuts the equator along a diameter, which is also the major axis of the intersection. – Aretino Feb 3 '17 at 19:42
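To see concretely why the projected ellipse and the true intersection share the same major semi-axis $a$, here is a small numerical sketch (Python/NumPy, with arbitrarily chosen values of $a$, $b$, $u$, $v$, $w$ that are not part of the original question) computing both sets of semi-axes:

```python
# Numerical check: intersection of the spheroid (x/a)^2 + (y/a)^2 + (z/b)^2 = 1
# with the plane u*x + v*y + w*z = 0, versus its projection onto the (x,y) plane.
# Parameters below are arbitrary test values, not from the question.
import numpy as np

a, b = 3.0, 1.5
u, v, w = 0.4, -0.7, 1.1            # plane through the origin, w != 0

Q = np.diag([1 / a**2, 1 / a**2, 1 / b**2])   # spheroid quadratic form x^T Q x = 1
n = np.array([u, v, w]) / np.linalg.norm([u, v, w])

# Orthonormal basis (e1, e2) of the plane (any vector perpendicular to n lies in it).
e1 = np.cross(n, [0.0, 0.0, 1.0])
e1 /= np.linalg.norm(e1)
e2 = np.cross(n, e1)
E = np.column_stack([e1, e2])

# Restrict the quadratic form to the plane; semi-axes are 1/sqrt(eigenvalues).
M = E.T @ Q @ E
semi_axes_3d = np.sort(1.0 / np.sqrt(np.linalg.eigvalsh(M)))[::-1]

# Semi-axes of the projected ellipse, from the formulas derived in the question.
a_proj = a
b_proj = a * b * w / np.sqrt(a**2 * (u**2 + v**2) + (b * w)**2)

print("intersection semi-axes:", semi_axes_3d)     # major semi-axis equals a
print("projection  semi-axes :", a_proj, b_proj)   # major semi-axis also equals a
```

The minor semi-axes differ (the projection shortens the tilted direction), but the major semi-axis is $a$ in both cases, exactly as the answer explains.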
Properties of orthogonal matrices (collected statements and proof sketches). An orthogonal matrix is a square matrix $Y$ with $Y^TY = I$; equivalently, a square matrix whose columns form an orthonormal set. Some basic properties: the determinant of an orthogonal matrix is $\pm 1$ (either $\det(A)=1$ or $\det(A)=-1$); the eigenvalues of an orthogonal matrix all have absolute value 1, and if they are all real then they are $\pm 1$; every entry of an orthogonal matrix has absolute value at most 1; orthogonal transformations preserve the dot product, and hence lengths and volumes; if $A$ and $B$ are orthogonal matrices, then the product $AB$ is again an orthogonal matrix; and every $3\times 3$ orthogonal matrix with determinant 1 has 1 as an eigenvalue. Related facts about symmetric and projection matrices: every $n\times n$ symmetric matrix has an orthonormal set of $n$ eigenvectors, and if the eigenvalues of a symmetric matrix $A$ are distinct, the matrix $X$ whose columns are the corresponding eigenvectors satisfies $X^TX = I$, i.e. $X$ is orthogonal. A matrix $E$ is called orthogonally diagonalizable if $E = YHY^T$ for an orthogonal matrix $Y$ and a diagonal matrix $H$. If $W$ is a subspace of $\mathbb{R}^n$ and $T:\mathbb{R}^n\to\mathbb{R}^n$ is the orthogonal projection $T(x)=x_W$ with standard matrix $B$, then $\operatorname{Col}(B)=W$, $\operatorname{Nul}(B)=W^\perp$ and $B^2=B$. If $C$ is an $n\times k$ matrix whose columns form a basis for $W$, the projection matrix onto $W$ is $C(C^TC)^{-1}C^T$; here $C^TC$ is invertible, since $C^TCb=0$ implies $b^TC^TCb=(Cb)\cdot(Cb)=\lVert Cb\rVert^2=0$, so $Cb=0$ and hence $b=0$ because $C$ has linearly independent columns. The hat matrix of ordinary least squares (mapping observed to fitted values) is exactly such an orthogonal projection, and the only matrix that is both an orthogonal projection and an orthogonal matrix is the identity matrix.
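As a quick sanity check of the statements above, the following Python/NumPy sketch (illustrative only) builds a random orthogonal matrix via QR decomposition and verifies the determinant, eigenvalue-modulus and entry-size properties, as well as the invertibility of $C^TC$ for a matrix with independent columns:

```python
# Numerical illustration of the orthogonal-matrix properties listed above.
import numpy as np

rng = np.random.default_rng(0)

# Random orthogonal matrix from the QR decomposition of a random square matrix.
A, _ = np.linalg.qr(rng.normal(size=(4, 4)))

assert np.allclose(A.T @ A, np.eye(4))                 # orthonormal columns
assert np.isclose(abs(np.linalg.det(A)), 1.0)          # det(A) = +-1
assert np.allclose(np.abs(np.linalg.eigvals(A)), 1.0)  # eigenvalues on the unit circle
assert np.all(np.abs(A) <= 1 + 1e-12)                  # entries bounded by 1 in magnitude

B, _ = np.linalg.qr(rng.normal(size=(4, 4)))
assert np.allclose((A @ B).T @ (A @ B), np.eye(4))     # product of orthogonal matrices

# C^T C is invertible when C has linearly independent columns.
C = rng.normal(size=(6, 3))
print("rank of C   :", np.linalg.matrix_rank(C))
print("det(C^T C)  :", np.linalg.det(C.T @ C))         # nonzero (with probability 1)
```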
September 24, 2011

Faster than light? The scientist speaks

Dario Auterio is the scientist leading the European experiment that may have identified particles moving faster than the speed of light, a result that if confirmed would turn the world as we know it upside down — for starters. Up top is an interview he gave Thursday discussing the findings. Below, the webcast of yesterday's talk by Auterio about the experiment to a standing-room-only crowd at CERN, the giant particle accelerator straddling the Swiss-French border. Below, the abstract of the paper reporting the findings. Measurement of the neutrino velocity with the OPERA detector in the CNGS beam The OPERA neutrino experiment at the underground Gran Sasso Laboratory has measured the velocity of neutrinos from the CERN CNGS beam over a baseline of about 730 km with much higher accuracy than previous studies conducted with accelerator neutrinos. The measurement is based on high-statistics data taken by OPERA in the years 2009, 2010 and 2011. Dedicated upgrades of the CNGS timing system and of the OPERA detector, as well as a high precision geodesy campaign for the measurement of the neutrino baseline, allowed reaching comparable systematic and statistical accuracies. An early arrival time of CNGS muon neutrinos with respect to the one computed assuming the speed of light in vacuum of (60.7 ± 6.9 (stat.) ± 7.4 (sys.)) ns was measured. This anomaly corresponds to a relative difference of the muon neutrino velocity with respect to the speed of light (v − c)/c = (2.48 ± 0.28 (stat.) ± 0.30 (sys.)) × 10⁻⁵.

Wireless computer-to-TV HD movie transfer

Tell us more. From the website: ............................ Set up an extra monitor or TV to receive video from your computer wirelessly with the IOGEAR Wireless USB-to-HDTV adapter kit. Utilizing the Wireless USB standard, it lets you send VGA or HDTV monitor signals (with resolutions up to 1280x720) up to 30 feet away. It's great for use with laptops for presentations or with home theater setups. Requires Microsoft Windows XP (32-bit version) or Vista/7 (32/64-bit versions). Connected display must have VGA or HDMI input port. ............................

Here be dragons: A history of map monsters

By Ken Jennings — yes, that Ken Jennings. Slide show here. [via Richard Kashdan]

World's first knit hat with removable beard

Think outside the box that implies it's just for men. Acrylic. $38.

bookofjoeTV update: Alas, still "real soon now"

Constant readers may recall that for many years I've contended that bookofjoe is merely a placeholder/warmup for the main event, bookofjoeTV. Recent developments both encourage and dismay me as regards the always just-around-the-corner debut of what promises to be something unimaginable — I hope in a good way. Here are those developments: 1. Google Hangouts is now open to everyone since Google on Tuesday opened Google+ to the public at large, removing the invitation-only sign that had somehow still let it grow to 10 million users in its first two weeks. 2. Google Hangouts is now available on mobile phones. Alas, it only works on Android for now, not the iPhone, which I will be getting next month, my first phone upgrade since 2004 when I purchased my still functional Nokia 6230. I assume that Hangouts will work with the iPhone sooner rather than later, so that may not be a factor in the delay of the rollout of bookofjoeTV. 3.
The Kickstarter project that I thought would integrate the iPhone with Google Hangouts and let me broadcast live hands-free while on the go was not funded. Said project was a very clever integration of a T-shirt and iPhone holder that would let me conduct a hands-free video call that streamed whatever was in front of me. I really thought that would be the secret sauce to finally get this thing off the ground. Alas. So there you have it, stay tuned, I feel a bit like the guy in Zeno's Paradox, always covering half the remaining distance but never arriving.
# Spilling properties# These properties control Spill to disk. ## spill-enabled# • Type: boolean • Default value: false Try spilling memory to disk to avoid exceeding memory limits for the query. Spilling works by offloading memory to disk. This process can allow a query with a large memory footprint to pass at the cost of slower execution times. Spilling is supported for aggregations, joins (inner and outer), sorting, and window functions. This property does not reduce memory usage required for other join types. This config property can be overridden by the spill_enabled session property. ## spiller-spill-path# • Type: string • No default value. Must be set when spilling is enabled Directory where spilled content is written. It can be a comma separated list to spill simultaneously to multiple directories, which helps to utilize multiple drives installed in the system. It is not recommended to spill to system drives. Most importantly, do not spill to the drive on which the JVM logs are written, as disk overutilization might cause JVM to pause for lengthy periods, causing queries to fail. ## spiller-max-used-space-threshold# • Type: double • Default value: 0.9 If disk space usage ratio of a given spill path is above this threshold, this spill path is not eligible for spilling. ## spiller-threads# • Type: integer • Default value: 4 Number of spiller threads. Increase this value if the default is not able to saturate the underlying spilling device (for example, when using RAID). ## max-spill-per-node# • Type: data size • Default value: 100GB Max spill space to be used by all queries on a single node. ## query-max-spill-per-node# • Type: data size • Default value: 100GB Max spill space to be used by a single query on a single node. ## aggregation-operator-unspill-memory-limit# • Type: data size • Default value: 4MB Limit for memory used for unspilling a single aggregation operator instance. ## spill-compression-enabled# • Type: boolean • Default value: false Enables data compression for pages spilled to disk. ## spill-encryption-enabled# • Type: boolean • Default value: false Enables using a randomly generated secret key (per spill file) to encrypt and decrypt data spilled to disk.
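Putting several of these properties together, the following is an illustrative sketch of how spilling might be enabled in a node's configuration file. All property names come from the sections above; the paths and values are example choices, not recommendations:

```properties
# Illustrative spill configuration; adjust paths and sizes for your hardware.
spill-enabled=true
# Dedicated drives for spill; avoid the drive holding the JVM logs.
spiller-spill-path=/mnt/spill1,/mnt/spill2
spiller-max-used-space-threshold=0.9
spiller-threads=4
max-spill-per-node=100GB
query-max-spill-per-node=100GB
spill-compression-enabled=true
spill-encryption-enabled=false
```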
The Poisson's ratio of a thin cylindrical shell is given as $$\frac{1}{m}$$; the shell has diameter ‘d’, length ‘l’ and thickness ‘t’ and is subjected to an internal pressure ‘p’. Then the ratio of longitudinal strain to hoop strain is

This question was previously asked in ISRO Scientist ME 2015 Paper

1. $$\frac{{m - 2}}{{2m + 1}}$$
2. $$\frac{{2m - 1}}{{m - 2}}$$
3. $$\frac{{m - 2}}{{2m - 1}}$$
4. $$\frac{{2m + 1}}{{m - 2}}$$

Answer: Option 3 : $$\frac{{m - 2}}{{2m - 1}}$$

Detailed Solution

Concept: For a thin cylinder: Longitudinal stress: $${\sigma _L} = \frac{{pd}}{{4t}}$$ Hoop stress: $${\sigma _H} = \frac{{pd}}{{2t}} = 2{\sigma _L}$$ Circumferential or hoop strain: $${\epsilon_H} = \frac{1}{E}\left( {{\sigma _H} - ν {\sigma _L}} \right) = \frac{{{\sigma _L}}}{E}\left( {2 - ν } \right) = \frac{{pd}}{{4tE}}\left( {2 - ν } \right)$$ Longitudinal strain: $${\epsilon_L} = \frac{1}{E}\left( {{\sigma _L} - ν {\sigma _H}} \right) = \frac{{{\sigma _L}}}{E}\left( {1 - 2ν } \right) = \frac{{pd}}{{4tE}}\left( {1 - 2ν } \right)$$

Calculation: Given: ν = 1/m. Longitudinal strain: $${{\epsilon }_{L}}=\frac{pd}{4tE}\left( 1-\frac{2}{m} \right)$$ Hoop strain: $${{\epsilon }_{H}}=\frac{pd}{4tE}\left( 2-\frac{1}{m} \right)$$ $$\therefore \frac{{{\epsilon }_{L}}}{{{\epsilon }_{H}}}=\frac{1-\frac{2}{m}}{2-\frac{1}{m}}=\frac{m-2}{2m-1}$$
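The final simplification can also be verified symbolically; here is a short SymPy sketch (illustrative, not part of the original solution):

```python
# Symbolic check that (1 - 2/m) / (2 - 1/m) simplifies to (m - 2)/(2m - 1).
import sympy as sp

m = sp.symbols('m', positive=True)
nu = 1 / m                                   # Poisson's ratio of the shell
ratio = (1 - 2 * nu) / (2 - nu)              # longitudinal strain / hoop strain
print(sp.simplify(ratio))                    # -> (m - 2)/(2*m - 1)
```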
# Glossary Data is a general term for information (observations and/or measurements) collected during any type of systematic investigation.
Enhancing quantum annealing performance by a degenerate two-level system

Abstract

Quantum annealing is an innovative idea and method for avoiding the increase of the calculation cost of combinatorial optimization problems. Since combinatorial optimization problems are ubiquitous, a quantum annealing machine with high efficiency and scalability will have an immeasurable impact on many fields. However, the conventional quantum annealing machine may not have a high success probability for finding the solution because the energy gap closes exponentially as a function of the system size. Proposing ideas for achieving a high success probability is therefore one of the most important issues. Here we show that a degenerate two-level system provides a higher success probability than the conventional spin-1/2 model in the weak longitudinal magnetic field region. The physics behind this is that quantum annealing in this model can be reduced to that in a spin-1/2 model, where the effective longitudinal magnetic field may open the energy gap, which suppresses the Landau–Zener tunneling responsible for leakage out of the ground state. We also present the success probability of the Λ-type system, which may show a higher success probability than the conventional spin-1/2 model.

Introduction

Quantum annealing is an interesting approach for finding the optimal solution of combinatorial optimization problems by using quantum effects1,2,3,4. Combinatorial optimization problems are ubiquitous in the real social world; therefore, the spread of quantum annealing machines with high efficiency and high scalability will bring benefits to many fields and industries, including drug design5, financial portfolio problems6, and traffic flow optimization7. After the commercialization of a superconducting quantum annealing machine by D-Wave Systems inc.8, several hardware platforms have been investigated and developed9,10,11,12,13. However, there are bottlenecks for implementing a scalable quantum annealing machine: the conventional and scalable quantum annealing machine may not have a high success probability for finding the solution of a combinatorial optimization problem because of the emergence of a first-order phase transition, where the energy gap between the ground state and the first excited state closes exponentially as a function of the system size2. In this case, an exponentially long annealing time is needed for finding the solution of the problem14,15,16. In the case of a second-order phase transition, on the other hand, the annealing time for finding the solution may scale polynomially as a function of the system size17. Proposing ideas for achieving a high success probability is one of the most important and challenging issues in the field of quantum annealing.
One of the approaches for obtaining the high success probability is to engineer the scheduling function for the driving Hamiltonian and the problem Hamiltonian, such as a monotonically increasing scheduling function satisfying the local adiabatic condition18, the reverse quantum annealing19 implemented in D-wave 2000Q20, inhomogeneous sweeping out of local transverse magnetic fields21,22, and a diabatic pulse application23. Another is to add an artificial additional Hamiltonian for suppressing the emergence of the excitations with avoiding the slowing down of annealing time, which is called shortcuts to adiabaticity by the counterdiabatic driving24,25,26,27, and to add an additional Hamiltonian for avoiding the first order phase transition17,28,29. In this paper, we study the possibility of other approach: to employ a variant spin, such as a qudit, in the quantum annealing architecture. Recently, two of the authors have studied the quantum phase transition in a degenerate two-level spin system, called the quantum Wajnflasz–Pick model, where an internal spin state is coupled to all the same energy internal states with a single coupling strength, and to all the different energy internal states with the other single coupling strength30. In the earlier study, this model is found to show a several kinds of phase transition while annealing; single or double first-order phase transitions as well as a single second-order phase transition, depending on an internal state coupling parameter30, which suggests that the quantum annealing of this model may be controlled by an internal state tuning parameter. However, the study is based on the static statistical approach using the mean-field theory, because only the order of the phase transition has been interested in. Therefore, the enhancement of the success probability for quantum annealing based on degenerate two-level systems is not clear yet. Furthermore, they employed a fully-connected uniform interacting system, and it is unclear whether their idea works that a double (or even-number of) first-order phase transition while annealing would bring the system back into the ground state at the end of the annealing, where the even number of the Landau–Zener tunneling may happen with respect to the ground state. In the present paper, we clarify the success probability of the quantum annealing in the quantum Wajnflasz–Pick model, focusing on (i) the Schrödinger dynamics, (ii) eigenenergies, and (iii) non-uniform effects of the spin-interaction as well as the longitudinal magnetic field. We find that the quantum Wajnflasz–Pick model is more efficient than the conventional spin-1/2 model in the weak longitudinal magnetic field region as well as in the strong coupling region between degenerate states. We also find that the quantum Wajnflasz–Pick model is reducible into a spin-1/2 model, where effect of the transverse magnetic field in the original Hamiltonian emerges in the reduced Hamiltonian not only as the effective transverse magnetic field but also as the effective longitudinal magnetic field. As a result, this model may provide the higher success probability in the case where the effective longitudinal magnetic field opens the energy gap between the ground state and the first excited state. We also evaluate the success probability in another variant spin, a Λ-type system31,32,33,34,35,36,37,38,39,40, which has three internal levels. 
This model also shows the higher success probability than the conventional spin-1/2 model in the weak magnetic field region. A multilevel system is ubiquitous, which can be seen, for example, in degenerate two-level systems in atoms41,42, Λ-type atoms31,32,34, Λ-, V-, Θ- and Δ-type systems in the superconducting circuits33,35,36,37,38,39,40,43 as well as Λ-type systems in the nitrogen-vacancy centre in diamond44. We hope that insights of our results in the degenerate two-level system and knowledge of their reduced Hamiltonian inspire and promote further study as well as future engineering of quantum annealing. Quantum Wajnflasz–Pick Model A conventional quantum annealing consists of the spin-1/2 model, where the time dependent Hamiltonian is given by1 $$\hat{H}(s)=s{\hat{H}}_{z}+(1-s){\hat{H}}_{x},$$ (1) where $${\hat{H}}_{z,x}$$ are a problem and driver Hamiltonian, respectively, and $$s\equiv t/T$$ is the time $$t\in [0,T]$$ scaled by the annealing time T. The problem Hamiltonian $${\hat{H}}_{z}$$ with the number of spins N, which encodes the desired optimal solution, has a non-trivial ground state. In contrast, the driver Hamiltonian $${\hat{H}}_{x}$$ has a trivial ground state, where the driver Hamiltonian $${\hat{H}}_{x}$$ must not be commutable with the problem Hamiltonian $${\hat{H}}_{z}$$. A problem Hamiltonian and driver Hamiltonian are typically given by $${\hat{H}}_{z}\equiv -\,\mathop{\sum }\limits_{i\ne j}^{N}\,{J}_{ij}{\hat{\sigma }}_{i}^{z}{\hat{\sigma }}_{j}^{z}-\mathop{\sum }\limits_{i}^{N}\,{h}_{i}^{z}{\hat{\sigma }}_{i}^{z},$$ (2) $${\hat{H}}_{x}\equiv -\,\mathop{\sum }\limits_{i}^{N}\,{h}_{i}^{x}{\hat{\sigma }}_{i}^{x},$$ (3) where $${\hat{\sigma }}^{x,z}$$ are the Pauli matrices, $${J}_{ij}$$ is the coupling strength between spins, $${h}_{i}^{z}$$ is the local longitudinal magnetic field, and $${h}_{i}^{x}$$ is the local transverse magnetic field. The time-dependent total Hamiltonian $$\hat{H}(s)$$ gradually changes from the driver Hamiltonian $${\hat{H}}_{x}$$ to the problem Hamiltonian $${\hat{H}}_{z}$$. If the Hamiltonian changes sufficiently slowly, the quantum adiabatic theorem guarantees that the initial quantum ground state follows the instantaneous ground state of the total Hamiltonian45. We can thus finally obtain a non-trivial ground state of the problem Hamiltonian starting from the trivial ground state of the deriver Hamiltonian making use of the Schrödinger dynamics. The quantum Wajnflasz–Pick model is a quantum version of the Wajnflasz–Pick model46, which can describe one of the interacting degenerate two-level systems. In the language of the quantum annealing, the problem Hamiltonian and the driver Hamiltonian are respectively given by30 $${\hat{H}}_{z}\equiv -\,\mathop{\sum }\limits_{i\ne j}^{N}\,{J}_{ij}{\hat{\tau }}_{i}^{z}{\hat{\tau }}_{j}^{z}-\mathop{\sum }\limits_{i}^{N}\,{h}_{i}^{z}{\hat{\tau }}_{i}^{z},$$ (4) $${\hat{H}}_{x}\equiv -\,\mathop{\sum }\limits_{i}^{N}\,{h}_{i}^{x}{\hat{\tau }}_{i}^{x}.$$ (5) (Schematic picture of this model is shown in Fig. 1). The Hamiltonian of this model can be simply obtained by replacing the Pauli matrices $${\hat{\sigma }}^{x,z}$$ in Eqs. (2) and (3) with the spin matrices of the quantum Wajnflasz-Pick model $${\hat{\tau }}^{x,z}$$. 
The spin operator $${\hat{\tau }}^{z}$$ is given by30 $${\hat{\tau }}^{z}\equiv {\rm{diag}}(\mathop{\underbrace{\,+\,1,\ldots ,+\,1}}\limits_{{g}_{{\rm{u}}}},\mathop{\underbrace{-\,1,\ldots ,-\,1}}\limits_{{g}_{{\rm{l}}}}),$$ (6) where $${g}_{{\rm{u}}(l)}$$ is the degeneracy of the upper (lower) states. The spin operator $${\hat{\tau }}^{x}$$ in the driver Hamiltonian is given by $${\hat{\tau }}^{x}\equiv \frac{1}{c}(\begin{array}{cc}{\bf{A}}({g}_{{\rm{u}}}) & {\bf{1}}({g}_{{\rm{u}}},{g}_{{\rm{l}}})\\ {\bf{1}}({g}_{{\rm{l}}},{g}_{{\rm{u}}}) & {\bf{A}}({g}_{{\rm{l}}})\end{array}),$$ (7) where $${\bf{A}}(l)$$ is an $$(l\times l)$$ matrix with off-diagonal elements $$\omega$$, given by $${\bf{A}}(l)\equiv (\begin{array}{cccc}0 & \omega & \cdots & \omega \\ {\omega }^{\ast } & 0 & \ddots & \vdots \\ \vdots & \ddots & \ddots & \omega \\ {\omega }^{\ast } & \cdots & {\omega }^{\ast } & 0\end{array}).$$ (8) Here, $$\omega$$ is a parameter of the internal transition between the degenerate upper/lower states. The matrix $${\bf{1}}(m,n)$$ is the $$(m\times n)$$ matrix whose elements are all unity, which gives the transition between the upper and lower states. The constant $$c$$ is a normalization factor chosen so that the maximum eigenvalue of $${\hat{\tau }}^{x}$$ is +1, equal to the maximum eigenvalue of $${\hat{\tau }}^{z}$$. In the following, for consistency with the earlier work30, we consider a uniform transverse field $${h}_{i}^{x}\equiv 1$$, and also take the parameter of the internal transition to be real, $$\omega ={\omega }^{\ast }$$ with $$\omega > -\,1$$. In the case where $$({g}_{{\rm{u}}},{g}_{{\rm{l}}})=(2,1)$$, we have $${\hat{\tau }}^{z}\equiv (\begin{array}{ccc}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & -\,1\end{array}),\,{\hat{\tau }}^{x}\equiv \frac{1}{c}(\begin{array}{ccc}0 & \omega & 1\\ \omega & 0 & 1\\ 1 & 1 & 0\end{array}),$$ (9) with $$c=(\omega +\sqrt{8+{\omega }^{2}})/2$$, which is a kind of Δ-type system38. In this paper, we employ common parameter sets in both the quantum Wajnflasz–Pick model and the conventional spin-1/2 model, including the coupling strength $${J}_{ij}$$, the magnetic fields $${h}_{i}^{z,x}$$, and the annealing time $$T$$. With these parameters, the two models share the same spin configuration (+1 or −1) in the ground state of the problem Hamiltonian. We thus compare the efficiency of the two models through the success probability.
Schrödinger Dynamics
In order to numerically calculate the success probability of the quantum annealing, we employ the Crank–Nicolson method47 for solving the Schrödinger equation $$i\frac{d}{dt}|\Psi (t)\rangle =\hat{H}(t)|\Psi (t)\rangle .$$ (10) In this method, the time evolution of the wave function is calculated by using Cayley’s form47 $$|\Psi (t+\Delta t)\rangle =\frac{1-i\hat{H}\Delta t/2}{1+i\hat{H}\Delta t/2}|\Psi (t)\rangle .$$ (11) Although an inverse matrix is needed, this method conserves the norm of the wave function and is second-order accurate in time47. We first consider the fully connected model, where the spin-spin coupling is ferromagnetic and the longitudinal magnetic field is uniform, $${h}_{i}^{z}\equiv h$$, which is consistent with the earlier work30.
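As a concrete illustration of the numerics, the following single-spin R sketch (not the code used for the figures; ω, h, the annealing time, and the step size are illustrative assumptions) propagates the Cayley update of Eq. (11) for the (g_u, g_l) = (2, 1) Wajnflasz–Pick spin and for a conventional spin-1/2, and returns the success probability as the weight of the final state in the (possibly degenerate) ground space of the problem Hamiltonian.

omega <- 0.8; hz <- 0.02; hx <- 1                 # illustrative parameters
cc    <- (omega + sqrt(8 + omega^2)) / 2          # normalization c of Eq. (9)
tau.z <- diag(c(1, 1, -1))
tau.x <- matrix(c(0, omega, 1,
                  omega, 0, 1,
                  1, 1, 0), 3, 3, byrow = TRUE) / cc
sig.z <- diag(c(1, -1))
sig.x <- matrix(c(0, 1, 1, 0), 2, 2)
anneal <- function(Hz, Hx, Tann = 100, dt = 0.01) {
  ev  <- eigen(Hx, symmetric = TRUE)              # start in the driver ground state
  psi <- ev$vectors[, which.min(ev$values)] + 0i
  Id  <- diag(nrow(Hz))
  for (t in seq(0, Tann - dt, by = dt)) {
    s   <- (t + dt / 2) / Tann                    # midpoint value of the schedule
    Hs  <- s * Hz + (1 - s) * Hx
    psi <- solve(Id + 1i * dt / 2 * Hs, (Id - 1i * dt / 2 * Hs) %*% psi)   # Eq. (11)
  }
  gs  <- eigen(Hz, symmetric = TRUE)              # ground space of the problem Hamiltonian
  sub <- gs$vectors[, gs$values < min(gs$values) + 1e-9, drop = FALSE]
  sum(Mod(t(Conj(sub)) %*% psi)^2)                # success probability P
}
c(WP = anneal(-hz * tau.z, -hx * tau.x), spin.half = anneal(-hz * sig.z, -hx * sig.x))

One can then compare the two success probabilities as ω and h are varied; the N-spin version of this calculation underlies the results described next.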
For example, in the case where $$(\omega ,h)=(0.8,0.02)$$ and $$(\,-\,0.8,-\,0.02)$$ for $$({g}_{{\rm{u}}},{g}_{{\rm{l}}})=(2,1)$$, the time dependence of the ground state population of the problem Hamiltonian, given by $${n}_{0}\equiv |\langle \Psi (t)|{\Psi }_{0}(T)\rangle {|}^{2}$$, clearly shows that this quantity in the quantum Wajnflasz–Pick model is greater than that in the conventional spin-1/2 model (Panels (a) and (b) in Fig. 2). Here, $$|{\Psi }_{0}(T)\rangle$$ is the ground state of the problem Hamiltonian, and $$|\Psi (t)\rangle$$ is the wave function obtained from the time-dependent Schrödinger equation. In the case where $$(\omega ,h)=(0.8,-\,0.1)$$ and $$(\,-\,0.8,0.1)$$ for $$({g}_{{\rm{u}}},{g}_{{\rm{l}}})=(2,1)$$, on the other hand, the ground state population of the problem Hamiltonian in the quantum Wajnflasz–Pick model is less than that in the spin-1/2 model (Panels (c) and (d) in Fig. 2). We compare the success probability of the quantum Wajnflasz–Pick model, $$P\equiv |\langle \Psi (T)|{\Psi }_{0}(T)\rangle {|}^{2}$$, with that of the conventional spin-1/2 model, denoted $${P}_{\mathrm{1/2}}$$, where $$|\Psi (T)\rangle$$ is the final state obtained from the time-dependent Schrödinger equation. In almost all regions of the $$\omega$$-h plane, the efficiencies of the two models are almost the same: the ratio of the success probability of the quantum Wajnflasz–Pick model to that of the conventional spin-1/2 model is close to unity (Fig. 3). In the regime of weak longitudinal magnetic field h, on the other hand, we find regions where the quantum Wajnflasz–Pick model has higher or lower efficiency than the spin-1/2 model. In the spin glass model, a non-trivial state may emerge in the weak longitudinal magnetic field limit48. In a p-spin model with $$p=3,5,7,\ldots$$, the energy gap is known to close exponentially and the first-order phase transition emerges in the absence of the longitudinal magnetic field15. In this sense, it is of interest that the quantum Wajnflasz–Pick model may provide high efficiency in the weak longitudinal magnetic field region. In the case where $$({g}_{{\rm{u}}},{g}_{{\rm{l}}})=(2,2)$$, where the numbers of upper and lower states are equal, the success probability of the quantum Wajnflasz–Pick model is almost equal to that of the conventional spin-1/2 model (Panel (a) in Fig. 4). In the case where $$({g}_{{\rm{u}}},{g}_{{\rm{l}}})=(3,2)$$, the success probability of the quantum Wajnflasz–Pick model is almost equal to that of the case where $$({g}_{{\rm{u}}},{g}_{{\rm{l}}})=(2,1)$$, where the difference between the number of upper states and the number of lower states is the same in both cases (Fig. 3 and Panel (b) in Fig. 4).
Eigenvalues
The eigenvalue spectrum of the instantaneous Hamiltonian may help us understand why the success probability of the quantum Wajnflasz–Pick model is higher or lower than that of the spin-1/2 model, although the eigenvalues of the instantaneous Hamiltonian show a tangled spaghetti structure (Fig. 5). For example, in the case where $$(\omega ,h)=(0.8,-\,0.1)$$, the energy gap between the ground state and the first excited state clearly closes once, which causes the low success probability (Panel (c) in Fig. 5). In the case where $$(\omega ,h)=(0.8,0.02)$$, the ground state and the first excited state finally merge at the end of the annealing, and this degeneracy would seem to explain the high success probability (Panel (a) in Fig. 5).
However, as the following discussion shows, the latter explanation turns out not to be correct in the case where $$(\omega ,h)=(0.8,0.02)$$. Panels (b) and (d) in Fig. 5 show that many eigenvalue crossings emerge. This suggests that some pairs of states are not connected by any matrix element, and that there is a symmetry behind the present quantum Wajnflasz–Pick model, such that the Hamiltonian can be block-diagonalized by a unitary operator $$\hat{U}$$. Since the energy spectrum of the original quantum Wajnflasz–Pick model shows very complicated behavior, it is better to look for the reason for the high/low success probability in the reduced Hamiltonian, which is what is truly relevant for the efficiency of the quantum annealing. For example, in the case where $$({g}_{{\rm{u}}},{g}_{{\rm{l}}})=(2,1)$$, the single-spin Hamiltonian in the quantum Wajnflasz–Pick model is decomposable, where the irreducible representation is given by $${\hat{U}}^{-1}\hat{H}(s)\hat{U}=(\begin{array}{ccc}-{h}^{+}(s) & 0 & -2\sqrt{2}h^{\prime} (s)\\ 0 & -{h}^{-}(s) & 0\\ -2\sqrt{2}h^{\prime} (s) & 0 & {h}^{z}s\end{array}),$$ (12) for arbitrary values of s, by using the unitary operator $$\hat{U}=(\begin{array}{ccc}\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0\\ \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} & 0\\ 0 & 0 & 1\end{array}),$$ (13) where $${h}^{\pm }(s)\equiv {h}^{z}s\pm 2\omega h^{\prime} (s)$$, and $$h^{\prime} (s)\equiv (1-s){h}^{x}/(2c)$$. As a result, we may reduce a quantum annealing problem in the single-spin quantum Wajnflasz–Pick model to that of the spin-1/2 model, the Hamiltonian of which is given in the form $$\hat{\mathscr H}(s)=-\,[{h}^{z}s+\omega h^{\prime} (s)]{\hat{\sigma }}^{z}-2\sqrt{2}h^{\prime} (s){\hat{\sigma }}^{x}-\omega h^{\prime} (s).$$ (14) Since the initial ground state of the single-spin Hamiltonian is given by $$|\Psi (s=0)\rangle \propto {(c/2,c/2,1)}^{{\rm{T}}}$$ in the original quantum Wajnflasz–Pick model, this state can be mapped to $$\hat{U}|\Psi (s=0)\rangle \propto {(c/\sqrt{2},0,1)}^{{\rm{T}}}$$. This indicates that the initial ground state $$\hat{U}|\Psi (s=0)\rangle$$ can also be projected onto the Hilbert space of the reduced Hamiltonian $$\hat{\mathscr H}(s)$$. This reduction of the single-spin problem in the case where $$({g}_{{\rm{u}}},{g}_{{\rm{l}}})=(2,1)$$ can be generalized to an interacting N-spin problem (Fig. 6). A quantum annealing problem of the original quantum Wajnflasz–Pick model is reduced to that of the spin-1/2 model, given in the form $$\begin{array}{rcl}\hat{\mathscr H}(s) & = & s(\,-\,\sum _{i < j}\,{J}_{ij}{\sigma }_{i}^{z}{\sigma }_{j}^{z})-\sum _{i}\,{h}_{{\rm{eff}},i}^{z}(s){\sigma }_{i}^{z}\\ & & -\,\sum _{i}\,{h}_{{\rm{eff}},i}^{x}(s){\sigma }_{i}^{x}-\sum _{i}\,\omega {h^{\prime} }_{i}(s),\end{array}$$ (15) where $${h}_{{\rm{eff}},i}^{z}(s)\equiv {h}_{i}^{z}s+\omega {h^{\prime} }_{i}(s),$$ (16) $${h}_{{\rm{eff}},i}^{x}(s)\equiv 2\sqrt{2}{h^{\prime} }_{i}(s),$$ (17) with $${h^{\prime} }_{i}(s)\equiv \frac{(1-s){h}_{i}^{x}}{2c}.$$ (18) As in the single-spin case, the initial ground state of the original N-spin quantum Wajnflasz–Pick model can also be projected onto the Hilbert space of the reduced Hamiltonian (15). The coupling $${J}_{ij}$$ in the reduced Hamiltonian is the same as that of the original Wajnflasz–Pick model.
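This block structure is straightforward to verify numerically. The sketch below (illustrative values of ω, h^z, h^x and s; not the authors' code) conjugates the single-spin H(s) with the Û of Eq. (13) and checks that the coupled 2 × 2 block has the same spectrum as the reduced Hamiltonian of Eq. (14).

omega <- 0.8; hz <- 0.02; hx <- 1; s <- 0.4        # illustrative values
cc  <- (omega + sqrt(8 + omega^2)) / 2
tau.z <- diag(c(1, 1, -1))
tau.x <- matrix(c(0, omega, 1, omega, 0, 1, 1, 1, 0), 3, 3, byrow = TRUE) / cc
H   <- -s * hz * tau.z - (1 - s) * hx * tau.x      # single-spin H(s)
U   <- matrix(c(1/sqrt(2),  1/sqrt(2), 0,
                1/sqrt(2), -1/sqrt(2), 0,
                0,          0,         1), 3, 3, byrow = TRUE)
round(solve(U) %*% H %*% U, 10)                    # state 2 decouples; compare with Eq. (12)
hp   <- (1 - s) * hx / (2 * cc)                    # h'(s)
Hred <- -(hz * s + omega * hp) * diag(c(1, -1)) -
  2 * sqrt(2) * hp * matrix(c(0, 1, 1, 0), 2) -
  omega * hp * diag(2)                             # Eq. (14)
sort(eigen(Hred, symmetric = TRUE)$values)         # equals the two coupled eigenvalues above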
The effective longitudinal magnetic field $${h}_{{\rm{eff}},i}^{z}$$ in the reduced Hamiltonian also reaches the same value as that of the original Wajnflasz–Pick model at the end of the annealing: $${h}_{{\rm{eff}},i}^{z}(s=1)={h}_{i}^{z}$$. The eigenvalues of the reduced spin-1/2 model exactly trace those of the original Wajnflasz–Pick model (Fig. 5). The time dependence of the ground state population of the problem Hamiltonian is confirmed to be exactly the same in the reduced model and in the original model. This effective model clearly explains the behavior of the success probability of the quantum Wajnflasz–Pick model shown in Fig. 3. Note that the coefficient $$c$$ is a positive real number such that the maximum eigenvalue of $${\tau }^{x}$$ is unity, and we take $${h}_{i}^{x}=1$$. Then, $${h^{\prime} }_{i}(s)\ge 0$$ always holds during the annealing time $$0\le s\le 1$$. In the case where the longitudinal magnetic field $${h}_{i}^{z}$$ is very large, $$|{h}_{i}^{z}|\gg |\omega |{h^{\prime} }_{i}(0)$$, the effect of the original longitudinal magnetic field $${h}_{i}^{z}$$ is dominant compared with the effective additional term $$\omega {h^{\prime} }_{i}(s)$$, except at the very early stage of the annealing $$s\ll |\omega {h^{\prime} }_{i}(0)/{h}_{i}^{z}|$$. In this case, the problem Hamiltonian in the reduced model is almost the same as that in the conventional spin-1/2 model in Eq. (2). As a result, the success probability of the quantum Wajnflasz–Pick model is almost the same as that of the conventional spin-1/2 model, which gives $$P\simeq {P}_{\mathrm{1/2}}$$. In the case where the original longitudinal magnetic field $${h}_{i}^{z}$$ is not large, the effective additional longitudinal magnetic field $$\omega {h^{\prime} }_{i}(s)$$ cannot be neglected compared with $${h}_{i}^{z}$$. When the effective additional field is in the same direction as the original longitudinal field, the total effective longitudinal magnetic field $${h}_{{\rm{eff}},i}^{z}(s)$$ is enhanced, which opens the energy gap between the ground state and the first excited state (Panels (a) and (b) in Fig. 7). This region is given by the condition $$\omega {h}_{i}^{z} > 0$$, which is consistent with the result shown in Fig. 3. As a result, the success probability of the quantum Wajnflasz–Pick model becomes superior to that of the conventional spin-1/2 model. When the effective additional field is in the opposite direction to the original longitudinal field, the total effective longitudinal magnetic field $${h}_{{\rm{eff}},i}^{z}(s)$$ is diminished, which closes the energy gap between the ground state and the first excited state (Panels (c) and (d) in Fig. 7). This region is given by the condition $$\omega {h}_{i}^{z} < 0$$, which is consistent with the result shown in Fig. 3. As a result, the success probability of the quantum Wajnflasz–Pick model becomes inferior to that of the conventional spin-1/2 model. The behavior of the success probability is also explained by the reference annealing time49 $${\mathscr T}\equiv {{\rm{\max }}}_{s}[\frac{b(s)}{\Delta {(s)}^{2}}],$$ (19) where $$b(s)\equiv |\langle {\Psi }_{1}(s)|\frac{d\hat{H}(s)}{ds}|{\Psi }_{0}(s)\rangle |,$$ (20) $$\Delta (s)\equiv {E}_{1}(s)-{E}_{0}(s).$$ (21) Here, $$|{\Psi }_{0(1)}(s)\rangle$$ and $${E}_{\mathrm{0(1)}}(s)$$ are the wave function and eigenenergy of the ground (first-excited) state of the instantaneous Hamiltonian, respectively. An annealing machine needs an annealing time $$T$$ much larger than $${\mathscr T}$$.
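The reference time of Eq. (19) is easy to evaluate by exact diagonalization on a grid of s. The sketch below (not the authors' code; the parameters, the grid, and the single-spin setting are illustrative assumptions) does this for a conventional spin-1/2 and for the reduced model of Eq. (14), rewritten in the form s H_z + (1 - s) H_x.

ref.time <- function(Hz, Hx, ns = 1000) {
  dH <- Hz - Hx                                   # dH/ds for H(s) = s*Hz + (1-s)*Hx
  max(sapply(seq(0.001, 0.999, length.out = ns), function(s) {
    es  <- eigen(s * Hz + (1 - s) * Hx, symmetric = TRUE)
    ord <- order(es$values)
    v0  <- es$vectors[, ord[1]]; v1 <- es$vectors[, ord[2]]
    gap <- es$values[ord[2]] - es$values[ord[1]]
    abs(sum(v1 * (dH %*% v0))) / gap^2            # b(s) / Delta(s)^2
  }))
}
omega <- 0.8; hz <- 0.02; hx <- 1                 # illustrative values with omega*hz > 0
cc <- (omega + sqrt(8 + omega^2)) / 2
sig.z <- diag(c(1, -1)); sig.x <- matrix(c(0, 1, 1, 0), 2, 2)
Hx.red <- -(omega * hx / (2 * cc)) * sig.z - (sqrt(2) * hx / cc) * sig.x -
  (omega * hx / (2 * cc)) * diag(2)               # the (1 - s) part of Eq. (14)
c(spin.half = ref.time(-hz * sig.z, -hx * sig.x),
  reduced.WP = ref.time(-hz * sig.z, Hx.red))     # the latter comes out smaller for these values (cf. Fig. 8)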
Let $${T}^{\ast }\equiv b(s)/{\Delta }^{2}(s)$$ be an instantaneous reference time of the annealing. The maximum value of this time $${T}^{\ast }$$ in the reduced Wajnflasz–Pick model given in (15) is suppressed compared with that of the conventional spin-1/2 model when the effective additional field $$\omega {h^{\prime} }_{i}(s)$$ is in the same direction as the original longitudinal field $${h}_{i}^{z}$$ (Panels (a) and (b) in Fig. 8). This is consistent with the quantum Wajnflasz–Pick model being more efficient than the conventional spin-1/2 model in the region where $$\omega {h}_{i}^{z} > 0$$ (Fig. 3). The maximum value of $${T}^{\ast }$$ in the effective Wajnflasz–Pick model is larger than that of the spin-1/2 model when the effective additional field $$\omega {h^{\prime} }_{i}(s)$$ is in the opposite direction to the original longitudinal field $${h}_{i}^{z}$$ (Panels (c) and (d) in Fig. 8). This is consistent with the quantum Wajnflasz–Pick model being less efficient than the conventional spin-1/2 model in the region where $$\omega {h}_{i}^{z} < 0$$ (Fig. 3). In order to perform the scaling analysis of the minimum energy gap $${\Delta }_{{\rm{\min }}}\equiv \,{\rm{\min }}\,[{E}_{1}(s)-{E}_{0}(s)]$$, we consider the $$p$$-spin model in the absence of the longitudinal magnetic field: $$\hat{H}(s)=s(\,-\frac{1}{{N}^{p-1}}\,\mathop{\sum }\limits_{{i}_{1},\ldots ,{i}_{p}}^{N}\,{\hat{\tau }}_{{i}_{1}}^{z}\cdots {\hat{\tau }}_{{i}_{p}}^{z})+(1-s)(\,-{h}^{x}\,\mathop{\sum }\limits_{i}^{N}\,{\hat{\tau }}_{i}^{x}),$$ (22) where the transverse magnetic field is homogeneous. Replacing $${\tau }_{i}^{x,z}$$ with $${\sigma }_{i}^{x,z}$$ gives the conventional $$p$$-spin model, where the first-order phase transition emerges, and the minimum energy gap is known to close exponentially as $$N$$ increases in the case where $$p$$ is odd15. After mapping to the subspace spanned by the spin-1/2 model, the reduced Hamiltonian of the quantum Wajnflasz–Pick model with $$({g}_{{\rm{u}}},{g}_{{\rm{l}}})=(2,1)$$ can be given by $$\begin{array}{rcl}\hat{\mathscr H}(s) & = & -s\frac{1}{{N}^{p-1}}\,\mathop{\sum }\limits_{{i}_{1},\ldots ,{i}_{p}}^{N}\,{\hat{\sigma }}_{{i}_{1}}^{z}\cdots {\hat{\sigma }}_{{i}_{p}}^{z}-(1-s){\Gamma }^{z}\,\mathop{\sum }\limits_{i}^{N}\,{\hat{\sigma }}_{i}^{z}-(1-s){\Gamma }^{x}\,\mathop{\sum }\limits_{i}^{N}\,{\hat{\sigma }}_{i}^{x},\end{array}$$ (23) $$\begin{array}{rcl} & = & -s\frac{1}{{N}^{p-1}}{({\hat{M}}^{z})}^{p}-(1-s){\Gamma }^{z}{\hat{M}}^{z}-(1-s){\Gamma }^{x}{\hat{M}}^{x},\end{array}$$ (24) up to a constant energy shift, where $${\Gamma }^{z}\equiv \omega {h}^{x}/(2c)$$, $${\Gamma }^{x}\equiv \sqrt{2}{h}^{x}/c$$, and $${\hat{M}}^{z,x}\equiv \mathop{\sum }\limits_{i}^{N}\,{\hat{\sigma }}_{i}^{z,x}$$. By using the commutation relation $$[{\hat{\sigma }}_{i}^{x},{\hat{\sigma }}_{j}^{y}]=2i{\hat{\sigma }}_{i}^{z}{\delta }_{ij}$$ and by following the standard argument for angular momentum, where the total spin $${\hat{{\bf{M}}}}^{2}\equiv {({\hat{M}}^{x})}^{2}+{({\hat{M}}^{y})}^{2}+{({\hat{M}}^{z})}^{2}$$ is conserved, the Hilbert space can be spanned by states $$|J,M\rangle$$, where $${\hat{{\bf{M}}}}^{2}|J,M\rangle =J(J+2)|J,M\rangle$$ and $${\hat{M}}^{z}|J,M\rangle =M|J,M\rangle$$ with $$M=-\,J,-\,J+2,\cdots ,J-2,J$$.
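The scaling analysis can be reproduced with a short script. The sketch below (assumed parameters, not the authors' code) builds the (N+1) × (N+1) block of Eq. (24) in the maximal-spin sector J = N, implementing the matrix elements spelled out in the next paragraph, and scans s for the minimum gap; comparing Γ^z = ωh^x/(2c) with the conventional choice Γ^z = 0 reproduces the qualitative behaviour of Fig. 9.

min.gap <- function(N, p = 3, Gz, Gx, ns = 1000) {
  M <- seq(-N, N, by = 2)                         # M = -J, -J+2, ..., J with J = N
  min(sapply(seq(0.001, 0.999, length.out = ns), function(s) {
    H  <- diag(-s * M^p / N^(p - 1) - (1 - s) * Gz * M)
    od <- -(1 - s) * Gx * sqrt(N * (N + 2) - M[1:N] * (M[1:N] + 2)) / 2
    H[cbind(1:N, 2:(N + 1))] <- od                # couple M to M + 2
    H[cbind(2:(N + 1), 1:N)] <- od
    ev <- sort(eigen(H, symmetric = TRUE, only.values = TRUE)$values)
    ev[2] - ev[1]
  }))
}
hx <- 1; omega <- 0.8
cc <- (omega + sqrt(8 + omega^2)) / 2
Ns <- seq(10, 60, by = 10)
gap.wp   <- sapply(Ns, min.gap, p = 3, Gz = omega * hx / (2 * cc), Gx = sqrt(2) * hx / cc)
gap.half <- sapply(Ns, min.gap, p = 3, Gz = 0, Gx = hx)
plot(Ns, log(gap.half), type = "b", ylim = range(log(c(gap.half, gap.wp))),
     xlab = "N", ylab = "log minimum gap")        # roughly linear decrease: exponential closing
lines(Ns, log(gap.wp), type = "b", col = 2)       # much slower closing with the effective field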
In this basis, the diagonal elements of the Hamiltonian (24) are given by $${\mathscr H}_{MM}=-\,s{M}^{p}/({N}^{p-1})-(1-s){\Gamma }^{z}M$$, and the off-diagonal elements are $${\mathscr H}_{M,M\pm 2}=-\,(1-s){\Gamma }^{x}\sqrt{J(J+2)-M(M\pm 2)}/2$$. Since the ground state of this model lies in the sector $$J=N$$, we diagonalize the $$(N+\mathrm{1)}\times (N+\mathrm{1)}$$ matrix of the reduced Hamiltonian. We compare the minimum energy gap of this model reduced from the quantum Wajnflasz–Pick model with that of the conventional $$p$$-spin model composed of the spin-1/2 system (Eq. (24) with $${\Gamma }^{z}=0$$ and $${\Gamma }^{x}={h}^{x}$$). Figure 9 clearly shows that the minimum energy gap closes exponentially in the conventional spin-1/2 model, whereas it closes only polynomially in the model reduced from the quantum Wajnflasz–Pick model. This polynomial gap closing originates from the emergence of the effective longitudinal magnetic field in the reduced model: $${\Gamma }^{z}=\omega {h}^{x}/(2c)\ne 0$$.
Random Coupling
In the random spin-spin coupling case, where the $${J}_{ij}$$ are randomly generated from the Gaussian distribution50 $$P({J}_{ij})=\sqrt{\frac{N}{2\pi }}\,\exp \,(\,-\frac{N}{2}{J}_{ij}^{2}),$$ (25) the density plot of the mean value of the success probability is similar to the uniform coupling case. The maximum (minimum) value of the success probability is, however, suppressed (increased) compared with the uniform coupling case (Fig. 10). The variance of the success probability of the quantum Wajnflasz–Pick model roughly ranges from 0.03 to 0.06 in the first and third quadrants of the $$\omega$$-$$h$$ plane, where a higher success probability than in the conventional spin-1/2 model may be obtained. It roughly ranges from 0.02 to 0.15 in the second and fourth quadrants of the $$\omega$$-$$h$$ plane, where a lower success probability may be obtained. In the spin-1/2 model, the variance of the success probability is almost within the range from 0.03 to 0.06 in all quadrants. The discussion above concerns a uniform longitudinal magnetic field. In the following, we discuss the case of random longitudinal magnetic fields $${h}_{i}^{z}$$ in addition to the random interactions $${J}_{ij}$$. The success probabilities $$P$$ and $${P}_{\mathrm{1/2}}$$ are almost equal in the weak internal-state coupling case ($$\omega =\pm \,0.1$$ in Fig. 11). In the strong internal-state coupling case ($$\omega =\pm \,1$$ in Fig. 11), the distribution is broadened. Although we can find cases where the conventional spin-1/2 model is superior to the quantum Wajnflasz–Pick model, we can also find many cases where the quantum Wajnflasz–Pick model is superior, with a success probability closer to unity than in the conventional spin-1/2 model. In these random coupling cases, we cannot definitely conclude that the quantum Wajnflasz–Pick model is always more efficient than the conventional spin-1/2 model. The variance is relatively large, and there are cases where the quantum Wajnflasz–Pick model is inferior to the conventional spin-1/2 model (Fig. 11). However, we can find many cases where the quantum Wajnflasz–Pick model is possibly more efficient than the conventional spin-1/2 model. In the quantum Wajnflasz–Pick model and its reduced model, we thus have a chance of finding a better solution to the combinatorial optimization problem.
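For completeness, a random instance as in Eq. (25) can be drawn as follows (a small sketch; the distribution of the random longitudinal fields is not specified in this excerpt, so the scale used for them below is my assumption).

draw.instance <- function(N) {
  J <- matrix(rnorm(N * N, 0, 1 / sqrt(N)), N, N)   # sd 1/sqrt(N), i.e. variance 1/N as in Eq. (25)
  J[lower.tri(J)] <- t(J)[lower.tri(J)]             # symmetrize so that J_ij = J_ji
  diag(J) <- 0
  list(J = J, hz = rnorm(N, 0, 1 / sqrt(N)))        # random local fields (assumed scale)
}
set.seed(1)
inst <- draw.instance(8)                            # one instance; repeat over many draws for Fig. 11-type statistics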
In real annealing machines, we can extract a better solution after performing many sampling experiments by tuning $$\omega$$.
Discussion
In the case where $$({g}_{{\rm{u}}},{g}_{{\rm{l}}})=(2,1)$$, the spin matrix in the quantum Wajnflasz–Pick model is represented by a (3 × 3) matrix, which suggests that the quantum Wajnflasz–Pick model in this case may be mapped onto a model represented by the spin-1 matrices $${\hat{S}}^{x}=\frac{1}{\sqrt{2}}(\begin{array}{ccc}0 & 1 & 0\\ 1 & 0 & 1\\ 0 & 1 & 0\end{array}),\,{\hat{S}}^{y}=\frac{i}{\sqrt{2}}(\begin{array}{ccc}0 & -\,1 & 0\\ 1 & 0 & -\,1\\ 0 & 1 & 0\end{array}),\,{\hat{S}}^{z}=(\begin{array}{ccc}1 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & -\,1\end{array}).$$ (26) Indeed, after simultaneously interchanging the second and third rows and the second and third columns of the spin matrices defined in Eq. (9) for $$({g}_{{\rm{u}}},{g}_{{\rm{l}}})=(2,1)$$, we find the following maps: $${\hat{\tau }}^{z}\mapsto {\hat{q}}^{z}\equiv \frac{2}{\sqrt{3}}{\hat{Q}}^{3{z}^{2}-{r}^{2}}+\frac{1}{3},$$ (27) $${\hat{\tau }}^{x}\mapsto {\hat{q}}^{x}\equiv \frac{1}{c}[\sqrt{2}{\hat{S}}^{x}+\Re \omega {\hat{Q}}^{{x}^{2}-{y}^{2}}-\Im \omega {\hat{Q}}^{xy}],$$ (28) where we have introduced the quadrupolar operators51,52 $${\hat{Q}}^{3{z}^{2}-{r}^{2}}\equiv \frac{1}{\sqrt{3}}[2{({\hat{S}}^{z})}^{2}-{({\hat{S}}^{x})}^{2}-{({\hat{S}}^{y})}^{2}],$$ (29) $${\hat{Q}}^{{x}^{2}-{y}^{2}}\equiv {({\hat{S}}^{x})}^{2}-{({\hat{S}}^{y})}^{2},$$ (30) $${\hat{Q}}^{xy}\equiv {\hat{S}}^{x}{\hat{S}}^{y}+{\hat{S}}^{y}{\hat{S}}^{x},$$ (31) and $$\Re \omega$$ ($$\Im \omega$$) is the real (imaginary) part of $$\omega$$. Since $$[{\hat{q}}^{z},{({\hat{S}}^{x})}^{2}]=0$$ and $$[{\hat{q}}^{x},{({\hat{S}}^{x})}^{2}]=i(\Im \omega /c){\hat{S}}^{x}$$ hold, we find that $${({\hat{S}}^{x})}^{2}$$ is the operator of a conserved quantity when the parameter $$\omega$$ is a real number. The coupling $${\hat{\tau }}_{i}^{z}{\hat{\tau }}_{j(\ne i)}^{z}$$ is mapped onto the interaction $${\hat{q}}_{i}^{z}{\hat{q}}_{j(\ne i)}^{z}$$, which is a kind of biquadratic interaction with respect to the spin. In short, the interacting quantum Wajnflasz–Pick model with $$({g}_{{\rm{u}}},{g}_{{\rm{l}}})=(2,1)$$ can be mapped onto a spin-1 model with an artificial biquadratic interaction. In particular, in the case where $$\omega \in {\mathbb{R}}$$, there is a hidden symmetry related to $${({\hat{S}}^{x})}^{2}$$, which indicates that the quantum Wajnflasz–Pick model is reducible in this case. More generally, an interacting quantum Wajnflasz–Pick model is reducible to the conventional spin-1/2 model. This holds for an arbitrary number of degeneracies $$({g}_{{\rm{u}}},{g}_{{\rm{l}}})$$ and at an arbitrary time s, and it can be proven in the case where the parameter $$\omega$$ is a real number and the condition $$\omega > -\,1$$ holds. In the Supplementary Information, we show that the Hamiltonian of the interacting quantum Wajnflasz–Pick model with arbitrary $$({g}_{{\rm{u}}},{g}_{{\rm{l}}})$$ can be projected onto the spin-1/2 model, and that the initial ground state in the original quantum Wajnflasz–Pick Hamiltonian is also projected onto the reduced Hilbert space. This indicates that quantum annealing in the quantum Wajnflasz–Pick model can always be described by the reduced Hamiltonian.
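The maps (27) and (28) and the conserved quantity are easy to confirm numerically. The sketch below (real ω with an illustrative value; not from the paper) rebuilds q^z and q^x from the spin-1 matrices and compares them with the row- and column-permuted τ matrices.

omega <- 0.8
cc <- (omega + sqrt(8 + omega^2)) / 2
tau.z <- diag(c(1, 1, -1))
tau.x <- matrix(c(0, omega, 1, omega, 0, 1, 1, 1, 0), 3, 3, byrow = TRUE) / cc
P <- diag(3)[, c(1, 3, 2)]                        # interchange basis states 2 and 3
Sx <- matrix(c(0, 1, 0, 1, 0, 1, 0, 1, 0), 3, 3) / sqrt(2)
Sy <- matrix(c(0, -1, 0, 1, 0, -1, 0, 1, 0), 3, 3, byrow = TRUE) * (1i / sqrt(2))
Sz <- diag(c(1, 0, -1))
Q.3z2  <- (2 * Sz %*% Sz - Sx %*% Sx - Sy %*% Sy) / sqrt(3)      # Eq. (29)
Q.x2y2 <- Sx %*% Sx - Sy %*% Sy                                  # Eq. (30)
qz <- (2 / sqrt(3)) * Q.3z2 + diag(3) / 3                        # Eq. (27)
qx <- (sqrt(2) * Sx + omega * Q.x2y2) / cc                       # Eq. (28) for real omega
Sx2 <- Sx %*% Sx
max(Mod(P %*% tau.z %*% P - qz))                  # ~ 0
max(Mod(P %*% tau.x %*% P - qx))                  # ~ 0
max(Mod(qz %*% Sx2 - Sx2 %*% qz))                 # [q^z, (S^x)^2] = 0
max(Mod(qx %*% Sx2 - Sx2 %*% qx))                 # also 0 for real omega: the conserved quantity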
As shown in the Supplementary Information, this projection holds not only for the 2-body interacting quantum Wajnflasz–Pick model but also for the $$N$$-body interacting model. This indicates that if the quantum Wajnflasz–Pick model is embedded into the Lechner–Hauke–Zoller (LHZ) architecture53,54, it can also be projected onto an LHZ architecture composed of the spin-1/2 model, where the effective additional magnetic fields may emerge. The present quantum Wajnflasz–Pick model is a degenerate two-level system in the presence of the transverse magnetic field. The possibility of implementing the degenerate two-level system has been discussed for the $${D}_{2}$$ line of 87Rb41,42. The quantum Wajnflasz–Pick model is also similar to the Δ-type cyclic artificial atom in the superconducting circuit38,43. In the Δ-type artificial atom, the population is controllable by making use of the amplitudes and/or phases of microwave pulses, whereas the amplitudes alone control the population in the conventional three-level (Λ-type) system43. However, the Δ-type system in the superconducting circuit is not an exactly degenerate two-level system. In this regard, it may be difficult to directly implement our model in the Δ-type cyclic artificial atom in the superconducting circuit. Instead, it may be feasible to employ the spin-1/2 model with a scheduling function inspired by the quantum Wajnflasz–Pick model, in the case where Schrödinger dynamics without dissipation holds. The quantum Wajnflasz–Pick model is one of the qudit models; in the case where $$({g}_{{\rm{u}}},{g}_{{\rm{l}}})=(2,1)$$ it is a kind of artificial Δ-type system38,43. The question naturally arises whether the Λ-type system also shows a higher success probability than the conventional spin-1/2 model. The spin matrices of the Λ-type system we employ here are given by $${\hat{\tau }}^{z}=(\begin{array}{ccc}0 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & \varepsilon \end{array}),\,{\hat{\tau }}^{x}=\frac{1}{c}(\begin{array}{ccc}0 & \kappa & 0\\ \kappa & 0 & 1\\ 0 & 1 & 0\end{array}),$$ (32) where we take $$|\varepsilon |\le 1$$, and the coefficient $$c\equiv \sqrt{1+{\kappa }^{2}}$$ is a normalization factor chosen so that the maximum eigenvalues of $${\hat{\tau }}^{x,z}$$ are unity. The Hamiltonian of the quantum annealing with the Λ-type system is given by Eqs. (1), (4) and (5), where $${\hat{\tau }}^{x,z}$$ are replaced with those given in (32). The success probability in the Λ-type system is found to be higher than that in the conventional spin-1/2 model in the case where $$\varepsilon$$ is small in the weak longitudinal magnetic field region, which is similar to the case of the quantum Wajnflasz–Pick model (Panels (a) and (b) in Fig. 12). When $$\varepsilon$$ is large, on the other hand, the success probability is drastically suppressed (Panel (c) in Fig. 12).
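The Λ-system matrices are simple to write down explicitly. The sketch below (illustrative κ, with ε = 0; not from the paper) checks that the normalization c = √(1+κ²) makes the maximum eigenvalue of τ^x unity and exhibits the dark state that is discussed next.

kappa <- 1.0; eps <- 0
cl <- sqrt(1 + kappa^2)
lam.z <- diag(c(0, 1, eps))
lam.x <- matrix(c(0, kappa, 0,
                  kappa, 0, 1,
                  0, 1, 0), 3, 3, byrow = TRUE) / cl
max(eigen(lam.x, symmetric = TRUE)$values)        # = 1 by the choice of c
dark <- c(1, 0, -kappa) / cl                      # annihilated by both operators when eps = 0
c(max(abs(lam.z %*% dark)), max(abs(lam.x %*% dark)))   # both ~ 0: the state is never driven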
In the case of a single Λ-spin system with $$\varepsilon =0$$, which corresponds to a degenerate two-level system, the unitary transformation $$\hat{U}=\frac{1}{c}(\begin{array}{ccc}\kappa & 0 & 1\\ 0 & 1 & 0\\ 1 & 0 & -\kappa \end{array})$$ (33) can map the Hamiltonian $$\hat{H}(s)=-\,s{h}^{z}{\hat{\tau }}^{z}-(1-s){h}^{x}{\hat{\tau }}^{x}$$ to the following block-diagonal form: $${\hat{U}}^{-1}\hat{H}(s)\hat{U}=(\begin{array}{ccc}0 & -(1-s){h}^{x} & 0\\ -(1-s){h}^{x} & -s{h}^{z} & 0\\ 0 & 0 & 0\end{array}).$$ (34) As a result, after exchanging the first and second columns and also the first and second rows, we may reduce a quantum annealing problem in this Λ-spin model to that of the spin-1/2 model, the Hamiltonian of which is given by $$\hat{\mathscr H}(s)=-\,s{h}^{z}{\sigma }^{z}/2-(1-s){h}^{x}{\sigma }^{x}-s{h}^{z}/2$$. Although the Λ-type system may provide a higher success probability than the conventional spin-1/2 model, the effect of dark states (states that are never employed) on quantum annealing in the general Λ-spin case, and its reduction to the spin-1/2 model in the many-spin system, would be important issues for future study. To summarize, we have demonstrated that qudit models, such as the quantum Wajnflasz–Pick model as well as the Λ-type system, may provide a higher success probability than the conventional spin-1/2 model in the weak magnetic field region. We have analytically shown that the quantum Wajnflasz–Pick model can be reduced to the spin-1/2 model, where the effect of the transverse magnetic field in the original Hamiltonian emerges as an effective additional longitudinal magnetic field in the reduced Hamiltonian, which possibly opens the energy gap between the ground state and the first excited state in the reduced Hamiltonian. Since qubits have experimental advantages in terms of manipulation, direct implementation of the reduced spin-1/2 model may be convenient for quantum annealing. On the other hand, the reduction to the subspace in terms of the spin-1/2 model is useful only in the case where we focus on the Schrödinger dynamics. If we include dissipation, as in a realistic system, transitions between the subspaces emerge. The efficiency of quantum annealing in such a system remains open for further study.
Conclusions
We studied the performance of quantum annealing constructed from one of the degenerate two-level systems, the quantum Wajnflasz–Pick model. This model shows a higher success probability than the conventional spin-1/2 model in the region where the longitudinal magnetic field is weak. The physics behind this is that the quantum annealing of this model can be reduced to that of a spin-1/2 model, where the effective longitudinal magnetic field in the reduced Hamiltonian may open the energy gap between the ground state and the first excited state, which gives rise to the suppression of the Landau–Zener transition. The reduction of the quantum Wajnflasz–Pick model to the spin-1/2 model is general, holding at an arbitrary time and for an arbitrary number of degeneracies. We also demonstrated that the Λ-type system shows a higher success probability than the conventional spin-1/2 model in the weak magnetic field region. We hope that studying quantum annealing with variant spins, and utilizing the insight from their reduced models, will promote the further development of high-performance quantum annealers.
References
1. Kadowaki, T. & Nishimori, H. Quantum annealing in the transverse Ising model. Phys. Rev. E 58, 5355–5363 (1998).
2. Albash, T. & Lidar, D. A. Adiabatic quantum computation. Reviews of Modern Physics 90, 015002 (2018).
3. Farhi, E., Goldstone, J., Gutmann, S. & Sipser, M. Quantum computation by adiabatic evolution. quant-ph/0001106 (2000).
4. Farhi, E. et al. A Quantum Adiabatic Evolution Algorithm Applied to Random Instances of an NP-Complete Problem. Science 292, 472–475 (2001).
5. Sakaguchi, H. et al. Boltzmann Sampling by Degenerate Optical Parametric Oscillator Network for Structure-Based Virtual Screening. Entropy 18, 365 (2016).
6. Rosenberg, G. et al. Solving the Optimal Trading Trajectory Problem Using a Quantum Annealer. IEEE Journal of Selected Topics in Signal Processing 10, 1053–1060 (2016).
7. Neukart, F. et al. Traffic Flow Optimization Using a Quantum Annealer. Frontiers in ICT 4, 126 (2017).
8.
9. Barends, R. et al. Digitized adiabatic quantum computing with a superconducting circuit. Nature 534, 222 (2016).
10. Rosenberg, D. et al. 3D integrated superconducting qubits. npj Quantum Information 3, 1 (2017).
11. Novikov, S. et al. Exploring More-Coherent Quantum Annealing. In 2018 IEEE International Conference on Rebooting Computing (ICRC), 1–7 (IEEE, 2018).
12. Maezawa, M. et al. Toward Practical-Scale Quantum Annealing Machine for Prime Factoring. Journal of the Physical Society of Japan 88, 061012 (2019).
13. Mukai, H., Tomonaga, A. & Tsai, J.-S. Superconducting Quantum Annealing Architecture with LC Resonators. Journal of the Physical Society of Japan 88, 061011 (2019).
14. Žnidarič, M. & Horvat, M. Exponential complexity of an adiabatic algorithm for an NP-complete problem. Physical Review A 73, 022329 (2006).
15. Jörg, T., Krzakala, F., Kurchan, J., Maggs, A. C. & Pujos, J. Energy gaps in quantum first-order mean-field-like transitions: The problems that quantum annealing cannot solve. EPL (Europhysics Letters) 89, 40004 (2010).
16. Jörg, T., Krzakala, F., Semerjian, G. & Zamponi, F. First-Order Transitions and the Performance of Quantum Algorithms in Random Optimization Problems. Physical Review Letters 104, 207206 (2010).
17. Seki, Y. & Nishimori, H. Quantum annealing with antiferromagnetic fluctuations. Phys. Rev. E 85, 051112 (2012).
18. Roland, J. & Cerf, N. J. Quantum search by local adiabatic evolution. Physical Review A 65, 042308 (2002).
19. Perdomo-Ortiz, A., Venegas-Andraca, S. E. & Aspuru-Guzik, A. A study of heuristic guesses for adiabatic quantum computation. Quantum Information Processing 10, 33–52 (2010).
20.
21. Susa, Y., Yamashiro, Y., Yamamoto, M. & Nishimori, H. Exponential Speedup of Quantum Annealing by Inhomogeneous Driving of the Transverse Field. Journal of the Physical Society of Japan 87, 023002 (2018).
22. Susa, Y. et al. Quantum annealing of the p-spin model under inhomogeneous transverse field driving. Physical Review A 98, 042326 (2018).
23. Karanikolas, V. & Kawabata, S. Improved performance of quantum annealing by a diabatic pulse application. arXiv:1806.08517 (2018).
24. Campo, A. D. & Boshier, M. G. Shortcuts to adiabaticity in a time-dependent box. Scientific Reports 2, 648 (2012).
25. del Campo, A. Shortcuts to Adiabaticity by Counterdiabatic Driving. Physical Review Letters 111, 100502 (2013).
26. Sels, D. & Polkovnikov, A. Minimizing irreversible losses in quantum systems by local counterdiabatic driving. Proceedings of the National Academy of Sciences 114, E3909–E3916 (2017).
27. Hartmann, A. & Lechner, W.
Rapid counter-diabatic sweeps in lattice gauge adiabatic quantum computing. New Journal of Physics 21, 043025 (2019).
28. Seoane, B. & Nishimori, H. Many-body transverse interactions in the quantum annealing of the p-spin ferromagnet. Journal of Physics A: Mathematical and Theoretical 45, 435301 (2012).
29. Seki, Y. & Nishimori, H. Quantum annealing with antiferromagnetic transverse interactions for the Hopfield model. Journal of Physics A: Mathematical and Theoretical 48, 335301 (2015).
30. Seki, Y., Tanaka, S. & Kawabata, S. Quantum Phase Transition in Fully Connected Quantum Wajnflasz–Pick Model. Journal of the Physical Society of Japan 88, 054006 (2019).
31. Cirac, J. I., Zoller, P., Kimble, H. J. & Mabuchi, H. Quantum State Transfer and Entanglement Distribution among Distant Nodes in a Quantum Network. Physical Review Letters 78, 3221–3224 (1997).
32. Duan, L. M., Lukin, M. D., Cirac, J. I. & Zoller, P. Long-distance quantum communication with atomic ensembles and linear optics. Nature 414, 413–418 (2001).
33. Zhou, Z., Chu, S.-I. & Han, S. Quantum computing with superconducting devices: A three-level SQUID qubit. Physical Review B 66, 054527 (2002).
34. Sun, C. P., Li, Y. & Liu, X. F. Quasi-Spin-Wave Quantum Memories with a Dynamical Symmetry. Physical Review Letters 91, 147903 (2003).
35. Yang, C.-P., Chu, S.-I. & Han, S. Possible realization of entanglement, logical gates, and quantum-information transfer with superconducting-quantum-interference-device qubits in cavity QED. Physical Review A 67, 042311 (2003).
36. Yang, C.-P., Chu, S.-I. & Han, S. Quantum Information Transfer and Entanglement with SQUID Qubits in Cavity QED: A Dark-State Scheme with Tolerance for Nonuniform Device Parameter. Physical Review Letters 92, 117902 (2004).
37. Zhou, Z., Chu, S.-I. & Han, S. Suppression of energy-relaxation-induced decoherence in Λ-type three-level SQUID flux qubits: A dark-state approach. Physical Review B 70, 094513 (2004).
38. You, J. Q. & Nori, F. Atomic physics and quantum optics using superconducting circuits. Nature 474, 589 (2011).
39. Falci, G. et al. Design of a Lambda system for population transfer in superconducting nanocircuits. Physical Review B 87, 214515 (2013).
40. Inomata, K. et al. Microwave Down-Conversion with an Impedance-Matched Λ System in Driven Circuit QED. Physical Review Letters 113, 063604 (2014).
41. Margalit, L., Rosenbluh, M. & Wilson-Gordon, A. D. Degenerate two-level system in the presence of a transverse magnetic field. Physical Review A 87, 033808 (2013).
42. Zhang, H.-B., Yang, G., Huang, G.-M. & Li, G.-X. Absorption and quantum coherence of a degenerate two-level system in the presence of a transverse magnetic field in different directions. Physical Review A 99, 033803 (2019).
43. Liu, Y.-x, You, J. Q., Wei, L. F., Sun, C. P. & Nori, F. Optical Selection Rules and Phase-Dependent Adiabatic State Control in a Superconducting Quantum Circuit. Physical Review Letters 95, 087001 (2005).
44. Zhou, B. B. et al. Accelerated quantum control using superadiabatic dynamics in a solid-state lambda system. Nature Physics 13, 330–334 (2016).
45. Messiah, A. Quantum Mechanics (Wiley, New York, 1976).
46. Wajnflasz, J. & Pick, R. Transitions “Low Spin”–“High Spin” Dans Les Complexes De Fe2+. J. Phys. Colloques 32, C1 (1971).
47. Press, W. H. et al. Numerical Recipes in C++: The Art of Scientific Computing (Cambridge University Press, 2002).
48. Nishimori, H. & Ortiz, G.
Elements of Phase Transitions and Critical Phenomena. Oxford Graduate Texts (OUP Oxford, 2011).
49. Bapst, V., Foini, L., Krzakala, F., Semerjian, G. & Zamponi, F. The quantum adiabatic algorithm applied to random optimization problems: The quantum spin glass perspective. Physics Reports 523, 127–205 (2013).
50. Gardner, E. Spin glasses with p-spin interactions. Nuclear Physics B 257, 747 (1985).
51. Läuchli, A., Mila, F. & Penc, K. Quadrupolar Phases of the S = 1 Bilinear-Biquadratic Heisenberg Model on the Triangular Lattice. Physical Review Letters 97, 087205 (2006).
52. Smerald, A. & Shannon, N. Theory of spin excitations in a quantum spin-nematic state. Physical Review B 88, 184430 (2013).
53. Lechner, W., Hauke, P. & Zoller, P. A quantum annealing architecture with all-to-all connectivity from local interactions. Science Advances 1, e1500838 (2015).
54. Glaetzle, A. W., van Bijnen, R. M. W., Zoller, P. & Lechner, W. A coherent quantum annealer with Rydberg atoms. Nat Commun 8, 15813 (2017).
Acknowledgements
We thank R. van Bijnen, W. Lechner, Y. Matsuzaki, T. Ishikawa, T. Yamamoto, and T. Nikuni for fruitful discussions and comments. Two of the authors (S.W. and S.K.) were supported by Nanotech CUPAL, Japan Science and Technology Agency (JST). Y.S. and S.K. were supported by the New Energy and Industrial Technology Development Organization (NEDO), Japan.
Author Contributions
S.W., Y.S. and S.K. designed the study, S.W. and Y.S. contributed to theoretical calculations, S.W. performed numerical simulation, and S.W., Y.S. and S.K. contributed to writing the manuscript. Correspondence to Shohei Watabe.
Competing Interests
The authors declare no competing interests.
Watabe, S., Seki, Y. & Kawabata, S. Enhancing quantum annealing performance by a degenerate two-level system. Sci Rep 10, 146 (2020). https://doi.org/10.1038/s41598-019-56758-4
## Friday, October 28, 2016

### A Probability Riddle

Some flu strains can jump from people to birds, and, perhaps, vice-versa. Suppose $$A$$ is the event that there is a flu outbreak in a certain community, say in the next month, and let $$P(A)$$ denote the probability of this event occurring. Suppose $$B$$ is the event that there is a flu outbreak among chickens in the same community in the same time frame, with $$P(B)$$ being the probability of this event as well. Now let's focus in on the relative flu risk to humans from chickens. Let's define this risk as $R_h=\frac{P(A|B)}{P(A)}.$ If the flu strain jumps from chickens to people, then the conditional probability, $$P(A|B)$$, may well be higher than the base rate, $$P(A)$$, and the risk to people will be greater than 1.0. Now, if you are one of those animal-lover types, you might worry about the relative flu risk to chickens from people. It is: $R_c=\frac{P(B|A)}{P(B)}$ At this point, you might have the intuition that there is no good reason to think $$R_h$$ would be the same value as $$R_c$$. You might think that the relative risk is a function of, say, the virology and biology of chickens, people, and viruses. And you would be wrong. While it may be that chickens and people have different base rates and different conditions, it must be that $$R_h=R_c$$. It is a matter of math rather than biology or virology. To see the math, let's start with the Law of Conditional Probability: $P(A|B) = \frac{P(B|A)P(A)}{P(B)}.$ We can move $$P(A)$$ from one side to the other, arriving at $\frac{P(A|B)}{P(A)} = \frac{P(B|A)}{P(B)} .$ Now, note that the left-hand side is the risk to people and the right-hand side is the risk to chickens. I find the fact that these risk ratios are preserved to be a bit counterintuitive. It is part of what makes conditional probability hard.

## Sunday, April 10, 2016

### This Summer's Challenge: Share Your Data

"It would take me weeks of going through my data and coordinating them, documenting them, and cleaning them if I were to share them." anonymous senior faculty member

"Subject 7 didn't show. There is an empty file. Normally the program would label the next person Subject 8 and we would just exclude Subject 7 in analysis. But now that we are automatically posting data, what should I do? Should I delete the empty file so the next person is Subject 7?" anonymous student in my lab

"Why? Data from a bad study is, by definition, no good." @PsychScienctists, in response to my statement that all data should be curated and available.

All three of the above quotes illustrate a common way of thinking about data. Our data reflect something about us. When we share them, we are sharing something deep and meaningful about ourselves. Our data may be viewed as statements about our competence, our organizational skills, our meticulousness, our creativity, and our lab culture. Even the student in my lab feels this pressure. This student is worried that our shared data won't be viewed as sufficiently systematic because we have no data for Subject 7. Maybe we want to present a better image.

#### The Data-Are-The-Data Mindset

I don't subscribe to the Judge-Me-By-My-Data mindset. Instead, I think of data as follows:
• Scientific data are precious resources collected for the common good.
• We should think in terms of stewardship rather than ownership. Be good stewards.
• Data are neither good nor bad, nor are they neat nor messy. They just are.
• We should judge each other by the authenticity of our data.

### Mistake-Free Data Stewardship through Born-Open Data

To be good stewards and to ensure authentic data, we upload everything, automatically, every night. Nobody has to remember anything, nobody makes decisions---it all just happens. Data are uploaded to GitHub where everyone can see them. In fact, I don't even use locally stored data for analysis; I point my analyses to the copy on GitHub. We upload data from well-thought-out experiments. We upload data from poorly-thought-out bust experiments. We upload pilot data. We upload incomplete data. If we collected it, it is uploaded. We have an accurate record of what happened in the lab, and you all are welcome to look in at our GitHub account. I call this approach born-open data, and have an in-press paper coming out about it. We have been doing born-open data for about a year. So far, the main difference I have noticed is an increase in quality control with no energy or time spent to maintain this quality. Nothing ever gets messed up, and there is no after-the-fact reconstruction of what had happened. There is only one master copy of data---the one on GitHub. Analysis code points to the GitHub version. We never analyze the wrong or incomplete data. And it is trivially easy to share our analyses among lab members and others. In fact, we can build the analyses right into our papers with Knitr and Markdown. Computers are so much more meticulous than we will ever be. They never take a night off!

#### This Summer's Challenge: Automatic Data Curation

I'd like to propose a challenge: Set up your own automatic data curation system for new data that you collect. Work with your IT people. Set up the scripts. Hopefully, when next Fall rolls around, you too are practicing born-open data!

## Tuesday, April 5, 2016

### The Bayesian Guarantee And Optional Stopping

Frequentist intuitions run so deep in us that we often mistakenly interpret Bayesian statistics in frequentist terms. Optional stopping has always been a case in point. Bayesian quantities, when interpreted correctly, are not affected by optional stopping. This fact is guaranteed by Bayes' Theorem. Previously, I have shown how this guarantee works for Bayes factors. Here, let's consider the simple case of estimating an effect size. For demonstration purposes, let's generate data from a normal with unknown mean, $$\mu$$, but known variance of 1. I am going to use a whacky optional stopping rule that favors sample means near .5 over others. Here is how it works: I. As each observation comes in, compute the running sample mean. II. Compute a probability of stopping that is dependent on the sample mean according to the figure below. The probability favors stopping for sample means near .5. III. Flip a coin with sides labeled "STOP" and "GO ON" with the below probability. IV. Do what the coin says (up to a maximum of 50 observations, then stop no matter what). The result of this rule is a bias toward sample means near .5. I ran a simulation with a true mean of zero for ten thousand replicates (blue histogram below). The key property is a biasing of the observed sample means higher than the true value of zero. Bayesian estimation seems biased too. The green histogram shows the posterior means when the prior on $$\mu$$ is a normal with mean of zero and a standard deviation of .5. The bias is less, but that just reflects the details of the situation where the true value, zero, is also favored by the prior.
So it might seem I have proved the opposite of my point---namely that optional stopping affects Bayesian estimation. Nope. The above case offers a frequentist interpretation, and that interpretation entered when we examined the behavior on a true value, the value zero. Bayesians don't interpret analyses conditional on unknown "truths".

### The Bayesian Guarantee

Bayes' Theorem provides a guarantee. If you start with your prior and observed data, then Bayes' Theorem guarantees that the posterior is the optimal set of probability statements about the parameter at hand. It is a bit subtle to see this in simulation because one needs to condition on data rather than on some unknown truth. Here is how a Bayesian uses simulation to show the Bayesian Guarantee. I. On each replicate, sample a different true value from the prior. In my case, I just draw from a normal centered at zero with standard deviation of .5 since that is my prior on effect size for this post. Then, on each replicate, simulate data from that truth value for that replicate. I have chosen data of 25 observations (from a normal with variance of 1). A histogram of the sample mean across these varying true values is provided below, left panel. I ran the simulation for 100,000 replicates. II. The histogram is that of data (sample means) we expect under our prior. We need to condition on data, so let's condition on an observed sample mean of .3. I have highlighted a small bin between .25 and .35 with red. Observations fall in this bin about 6% of the time. III. Look at all the true values that generated those sample means in the bin with .3. These true values are shown in the yellow histogram. This histogram is the target of Bayes' Theorem, that is, we can use Bayes' Theorem to describe this distribution without going through the simulations. I have computed the posterior distribution for a sample mean of .3 and 25 observations under my prior, and plotted it as the line. Notice the correspondence. This correspondence is the simulation showing that Bayes' Theorem works. It works, by the way, for every bin, though I have just shown it for the one centered on .3. TAKE HOME 1: Bayes' Theorem tells you the distribution of true values given your prior and the data.

### Is The Bayesian Guarantee Affected By Optional Stopping?

So, we come to the crux move. Let's simulate the whacky optional stopping rule that favors sample means near .5. Once again, we start with the prior, and for each replicate we choose a different truth value as a sample from the prior. Then we simulate data using optional stopping, and the resulting sample means are shown in the histogram on the left. Optional stopping has affected these data dramatically. No matter, we choose our bin, again around .3, and plot the true values that led to these sample means. These true values are shown as the yellow histogram on the right. They are far more spread out than in the previous simulation without optional stopping, primarily because stopping often occurred at fewer than 25 observations. Now, is this spread predicted? Yes. On each replication we obtain a posterior distribution, and these vary from replication to replication because the sample size is random. I averaged these posteriors (as I should), and the result is the line that corresponds well to the histogram. TAKE HOME II: Bayes' Theorem tells you where the true values are given your prior and the data, and it doesn't matter how the data were sampled! And this should be good news.
*****************
R code

set.seed(123)
m0=0
v0=.5^2
runMean=function(y) cumsum(y)/(1:length(y))
minIndex=function(y) order(y)[1]

# Optional-stopping sampler: stop with a probability that depends on the running mean
mySampler=function(t.mu,topN) {
  M=length(t.mu)
  mean=rep(t.mu,topN)
  y=matrix(nrow=M,ncol=topN,rnorm(M*topN,mean,1))
  ybar=t(apply(y,1,runMean))
  prob=plogis((ybar-.6)^2,0,.2)
  another=matrix(nrow=M,ncol=topN,rbinom(M*topN,1,prob))
  stop=apply(another,1,minIndex)
  return(list("ybar"=ybar[cbind(1:M,stop)],"N"=stop))
}

# Fixed-N sampler (no optional stopping)
goodSampler=function(t.mu,topN){
  M=length(t.mu)
  mean=rep(t.mu,topN)
  y=matrix(nrow=M,ncol=topN,rnorm(M*topN,mean,1))
  return(apply(y,1,mean))
}

# Figure: sample means and posterior means under optional stopping, true mu = 0
M=10000
png('freqResults.png',width=960,height=480)
par(mfrow=c(1,2),cex=1.3,mar=c(4,4,2,1),mgp=c(2,1,0))
t.mu=rep(0,M)
out=mySampler(t.mu,50)
ybar=out$ybar
N=out$N
v=1/(N+1/v0)
c=(N*ybar+m0/v0)
hist(ybar,col='lightblue',main="",xlab="Sample Mean",breaks=50,xlim=c(-1,1.25),prob=T,ylim=c(0,2.6))
abline(v=mean(ybar),lwd=3,lty=2)
hist(v*c,col='lightgreen',main="",xlab="Posterior Mean",xlim=c(-1,1.25),prob=T,ylim=c(0,2.6))
abline(v=mean(v*c),lwd=3,lty=2)
dev.off()

###############################
# Figure: the Bayesian guarantee without optional stopping
set.seed(456)
png('bayesGuarantee.png',width=960,height=480)
par(mfrow=c(1,2),cex=1.3,mar=c(4,4,2,1),mgp=c(2,1,0))
M=100000
N=25
t.mu=rnorm(M,m0,sqrt(v0))
ybar=goodSampler(t.mu,N)
myBreak=seq(-2.45,2.45,.1)
bars=hist(ybar,breaks=myBreak,plot=F)
mid=.3
good=(ybar >(mid-.05) & ybar<(mid+.05))
myCol=rep("white",length(myBreak))
myCol[round(bars$mids,2)==0.3]='red'
plot(bars,col=myCol,xlab="Sample Mean",main="")
mtext(side=3,adj=.5,line=0,cex=1.3,"Sample Mean Across Prior")
v=1/(N+1/v0)
c=(N*mid+m0/v0)
hist(t.mu[good],prob=T,xlab=expression(paste("Parameter ",mu)),col='yellow',
  ylim=c(0,2.2),main="",xlim=c(-1.75,1.75))
myES=seq(-2,2,.01)
post=1:length(myES)
for (i in 1:length(myES)) post[i]=mean(dnorm(myES[i],c*v,sqrt(v)))
lines(myES,post,lwd=2)
mtext(side=3,adj=.5,line=0,cex=1.3,"True values for sample means around .3")
dev.off()

########################
# Figure: the Bayesian guarantee with optional stopping
set.seed(790)
png('moneyShot.png',width=960,height=480)
par(mfrow=c(1,2),cex=1.3,mar=c(4,4,2,1),mgp=c(2,1,0))
M=100000
t.mu=rnorm(M,m0,sqrt(v0))
out=mySampler(t.mu,50)
ybar=out$ybar
N=out$N
myBreak=seq(-5.95,5.95,.1)
bars=hist(ybar,breaks=myBreak,plot=F)
mid=.3
good=(ybar >(mid-.05) & ybar<(mid+.05))
myCol=rep("white",length(myBreak))
myCol[round(bars$mids,2)==0.3]='red'
plot(bars,col=myCol,xlab="Sample Mean",main="",xlim=c(-4,3))
v=1/(N[good]+1/v0)
c=(N[good]*ybar[good]+m0/v0)
hist(t.mu[good],prob=T,xlab=expression(paste("Parameter ",mu)),col='yellow',main="",
  ylim=c(0,2.2),xlim=c(-1.75,1.75))
myES=seq(-2,2,.01)
post=1:length(myES)
for (i in 1:length(myES)) post[i]=mean(dnorm(myES[i],c*v,sqrt(v)))
lines(myES,post,lwd=2)
mtext(side=3,adj=.5,line=0,cex=1.3,"True values for sample means around .3")
dev.off()

######################
# Stop probability as a function of the sample mean
png(file="probStop.png",width=480,height=480)
par(cex=1.3,mar=c(4,4,1,1),mgp=c(2,1,0))
ybar=seq(-2,3,.01)
prob=plogis((ybar-.6)^2,0,.2)
plot(ybar,1-prob,typ='l',lwd=2,ylab="Stopping Probability",xlab="Sample Mean",ylim=c(0,.55))
mtext("Optional Stopping Depends on Sample Mean",side=3,adj=.5,line=-1,cex=1.3)
dev.off()

## Monday, March 28, 2016

### The Effect-Size Puzzler, The Answer

I wrote the Effect-Size Puzzler because it seemed to me that people have reduced the concept of effect size to a few formulas on a spreadsheet. It is a useful concept that deserves a bit more thought. The example I provided is the simplest case I can think of that is germane to experimental psychologists.
We ask 25 people to perform 50 trials in each of 2 conditions, and ask what is the effect size of the condition effect. Think Stroop if you need a context. The answer, by the way, is $$+\infty$$. I'll get to it.

### The good news about effect sizes

Effect sizes have revolutionized how we compare and understand experimental results. Nobody knows whether a 3% change in error rate is big or small or comparable across experiments; everybody knows what an effect size of .3 means. And our understanding is not associative or mnemonic; we can draw a picture like the one below and talk about overlap and difference. It is this common meaning and portability that licenses a modern emphasis on estimation. Sorry estimators, I think you are stuck with standardized effect sizes. Below is a graph from Many Labs 3 that makes the point. Here, the studies have vastly different designs and dependent measures. Yet, they can all be characterized in unison with effect size. Even for the simplest experiment above, there is a lot of confusion. Jake Westfall provides 5 different possibilities and claims that perhaps 4 of these 5 are reasonable at least under certain circumstances. The following comments were provided on Twitter and Facebook: Daniel Lakens makes recommendations as to which one we shall consider the preferred effect size measure. Tal Yarkoni and Uli Shimmack wonder about the appropriateness of effect size in within-subject designs and prefer unstandardized effects (see Jan Vanhove's blog). Rickard Carlson prefers effect sizes in physical units where possible, say in milliseconds in my Effect Size Puzzler. Sanjay Srinivasta needs the goals and contexts first before weighing in. If I got this wrong, please let me know. From an experimental perspective, The Effect Size Puzzler is as simple as it gets. Surely we can do better than to abandon the concept of standardized effect sizes or to be mired in arbitrary choices.

### Modeling: the only way out

Psychologists often think of statistics as procedures, which, in my view, is the most direct path to statistical malpractice. Instead, statistical reasoning follows from statistical models. And if we had a few guidelines and a model, then standardized effect sizes are well defined and useful. Showing off the power of model thinking rather than procedure thinking is why I came up with the puzzler.

### Effect-size guidelines

#1: Effect size is how large the true condition effect is relative to the true amount of variability in this effect across the population.
#2: Measures of true effect and true amount of variability are only defined in statistical models. They don't really exist except within the context of a model. The model is important. It needs to be stated.
#3: The true effect size should not be tied to the number of participants nor the number of trials per participant. True effect sizes characterize a state of nature independent of our design.

### The Puzzler Model

I generated the data to be realistic. They had the right amount of skew and offset, and the tails fell like real RTs do. Here is a graph of the generating model for the fastest and slowest individuals: All data had a lower shift of .3s (see green arrow), because we typically trim these out as being too fast for a choice RT task. The scale was influenced by both an overall participant effect and a condition effect, and the influence was multiplicative. So faster participants had smaller effects; slower participants had bigger effects.
The best way to describe these data is in terms of percent-scale change.  The effect was to change the scale by 10.5%, and this amount was held constant across all people.  And because it was held constant, that is, because there was no variability in the effect, the standardized effect size in this case is infinitely large.

Now, let's go explore the data.  I am going to skip over all the exploratory stuff that would lead me to the following transform, Y = log(RT - .3), and just apply it.  Here is a view of the transformed generating model:

So, let's put plain-old vanilla normal models on Y.  First, let's take care of replicates:
$Y_{ijk} \sim \mbox{Normal} (\mu_{ij},\sigma^2)$
where $$i$$ indexes individuals, $$j=1,2$$ indexes conditions, and $$k$$ indexes replicates.

Now, let's model $$\mu_{ij}$$. A general formulation is
$\mu_{ij} = \alpha_i+x_j\beta_i,$
where $$x_j$$ is a dummy code of 0 for Condition 1 and 1 for Condition 2. The term $$\beta_i$$ is the ith individual's effect. We can model it as
$\beta_i \sim \mbox{Normal}(\beta_0,\delta^2)$
where $$\beta_0$$ is the mean effect across people and $$\delta^2$$ is the variation of the effect across people.

With this model, the true effect size is
$d_t = \frac{\beta_0}{\delta}.$
Here, by true, I just mean that it is a parameter rather than a sample statistic. And that's it; there is not much more to say in my opinion.

In my simulations the true value of each individual's effect was .1. So the mean, $$\beta_0$$, is .1, and the standard deviation, $$\delta$$, is, well, zero. Consequently, the true standardized effect size is $$d_t=+\infty$$. I can't justify any other standardized measure that captures the above principles.

### Analysis

Could a good analyst have found this infinite value? That is a fair question. The plot below shows individuals' effects, and I have ordered them from smallest to largest. A key question is whether these are spread out more than expected from within-cell sample noise alone. If these individual sample effects are more spread out, then there is evidence for true individual variation in $$\beta_i$$. If they stay as clustered as predicted by sample noise alone, then there is evidence that people's effects do not vary. The solid line is the prediction from within-cell noise alone. It is pretty darn good. (The dashed line is the null that people have the same, zero-valued true effect.)

I also computed a one-way random-effects F statistic to see if there is a common effect or many individual effects. It came out to F(24, 2450) = 1.03. Seems like one effect. These one-effect results should be heeded. It is a structural element that I would not want to miss in any data set. We should hold plausible the idea that the standardized effect size is exceedingly high, as the variation across people seems very small if not zero.

To estimate effect sizes, we need a hierarchical model. You can use Mplus, AMOS, LME4, WinBUGS, JAGS, or whatever you wish. Because I am old and don't learn new tricks easily, I will do what I always do and program these models from scratch. I used the general model above in the Bayesian context. The key specification is the prior on $$\delta^2$$. In the log-normal, the variance is a shape parameter, and it is somewhere around $$.4^2$$. Effects across people are usually about 1/5th of this, say $$.08^2$$. To capture variances in this range, I would use a $$\delta^2 \sim \mbox{Inverse Gamma}(.1,.01)$$ prior for general estimation. This is a flexible prior tuned for the 10 to 100 millisecond range of variation in effects across people.
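For readers who would rather not program the hierarchical model from scratch, here is a rough non-Bayesian check using lme4, one of the packages listed above. It will not reproduce the shrinkage estimates discussed next, but a near-zero variance for the by-person condition slope points to the same one-effect conclusion. The data frame is assumed to be the puzzler data set with columns id, cond, and rt.

library(lme4)
puzzler$y=log(puzzler$rt-.3)                #the transform used throughout this post
fit=lmer(y~cond+(cond|id),data=puzzler)     #random intercept and random condition effect per person
summary(fit)                                #the fixed effect of cond estimates beta0
VarCorr(fit)                                #the cond slope variance estimates delta^2; a near-zero value echoes the F of 1.03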
The following plot shows the resulting estimates of individual effects as a function of the sample effect values. The noteworthy feature is the lack of variation in the model estimates of individuals' effects! This type of pattern, where variation in model estimates is attenuated compared to the sample statistics, is called shrinkage, and it occurs because hierarchical models don't chase within-cell sample noise. Here the shrinkage is nearly complete, leading again to the conclusion that there is no real variation across people, or an infinitely large standardized effect size. For the record, the estimated effect size here is 5.24, which, in effect-size units, is getting quite large!

The final step for me is comparing this variable-effect model to a model with no variation, say $$\beta_i = \beta_0$$ for all people. I would do this comparison with a Bayes factor. But I am out of energy and you are out of patience, so we will save it for another post.

### Back To Jake Westfall

Jake Westfall promotes a design-free version of Cohen's d where one forgets that the design is within-subject and uses an all-sources-summed-and-mashed-together variance measure. He does this to stay true to Cohen's formulae. I think it is a conceptual mistake. I love within-subject designs precisely because one can separate variability due to people, variability within a cell, and variability in the effect across people. In between-subject designs, you have no choice but to mash all this variability together due to the limitations of the design. Within-subject designs are superior, so why go backwards and mash the sources of variance together when you don't have to? This advice strikes me as crazy. To Jake's credit, he recognizes that the effect-size measures promoted here are useful, but doesn't want us to call them Cohen's d. Fine, we can just call them Rouder's within-subject totally-appropriate standardized effect-size measures. Just don't forget the hierarchical shrinkage when you use it!

## Thursday, March 24, 2016

### The Effect-Size Puzzler

Effect sizes are bantered around as useful summaries of the data. Most people think they are straightforward and obvious. So if you think so, perhaps you won't mind a bit of a challenge? Let's call it "The Effect-Size Puzzler," in homage to NPR's CarTalk. I'll buy the first US winner a nice Mizzou sweatshirt (see here). Standardized effect size please.

I have created a data set with 25 people each observing 50 trials in 2 conditions. It's from a priming experiment. It looks about like real data. Here is the download. The three columns are:
• id (participant: 1...25)
• cond (condition: 1, 2)
• rt (response time in seconds)
There are a total of 2500 rows. I think it will take you just a few moments to load it and tabulate your effect size for the condition effect. Have fun. Write your answer in a comment or write me an email. I'll provide the correct answer in a blog next week.

HINT: If you wish to get rid of the skew and stabilize the variances, try the transform y = log(rt - .3)
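For those trying the puzzler, here is a minimal starting sketch that applies the hint and computes each person's sample effect on the transformed scale; whether these effects spread out more than within-cell noise predicts is the crux of the answer discussed above. The file name is a placeholder for the download.

puzzler=read.csv("puzzler.csv")              #placeholder file name; columns are id, cond, rt
puzzler$y=log(puzzler$rt-.3)                 #the suggested transform
cellMeans=tapply(puzzler$y,list(puzzler$id,puzzler$cond),mean)
personEffect=cellMeans[,"2"]-cellMeans[,"1"] #each person's condition effect
c(mean=mean(personEffect),sd=sd(personEffect))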
## Monday, March 21, 2016

### Roll Your Own II: Bayes Factors With Null Intervals

The Bayes factors we develop compare the null model to an alternative model. This null model is almost always a single point---the true effect is identically zero. People sometimes confuse our advocacy for Bayes factor with that for point-null-hypothesis testing. They even critique Bayes factor with the Cohenesque claim that the point null is never true. Bayes factor is a general way of measuring the strength of evidence from data for competing models. It is not tied to the point null. We develop it for the point null because we think it is a useful, plausible, theoretically meaningful model. Others might disagree, and these disagreements are welcome as part of the exchange of viewpoints in science.

In the blog post Roll Your Own: How to Compute Bayes Factors for Your Priors, I provided R code to compute a Bayes factor between a point null and a user-specified alternative for a simple setup motivated by the one-sample t-test. I was heartened by the reception, and I hope a few of you are using the code (or the comparable code provided by Richard Morey). There have been some requests to generalize the code for non-point nulls. Here, let's explore the Bayes factor for any two models in a simple setup. As it turns out, the generalization is instructive and computationally trivial. We have all we need from the previous posts.

### Using Interval Nulls: An Example

Consider the following two possibilities:

I. Perhaps you feel the point null is too constrained and would rather adopt a null model with mass on a small region around zero rather than at the point. John Kruschke calls these regions ROPEs (regions of practical equivalence).

II. Perhaps you are more interested in the direction of an effect rather than whether it is zero or not. In this case, you might consider testing two one-sided models against each other.

For this blog, I am going to consider four different priors. Let's start with a data model. Data are independent normal draws with mean $$\mu$$ and variance $$\sigma^2$$. It is more convenient to re-express the normal as a function of effect size, $$\delta$$, and $$\sigma^2$$, where $$\delta=\mu/\sigma$$. Here is the formal specification:
$Y_i \mid \delta,\sigma^2 \stackrel{iid}{\sim} \mbox{Normal}(\sigma\delta,\sigma^2).$

Now, the substantive positions as prior models on effect size:
1. $$M_0$$, A Point Null Model: $$\delta=0$$
2. $$M_1$$, A ROPE Model: $$\delta \sim \mbox{Unif}(-.25,.25)$$
3. $$M_2$$, A Positive Model: $$\delta \sim \mbox{Gamma}(3,2.5)$$
4. $$M_3$$, A Negative Model: $$-\delta \sim \mbox{Gamma}(3,2.5)$$

Here are these four models expressed graphically as distributions:

I picked these four models, but you can pick as many as you wish. For example, you can include a normal if you wish. Oh, let's look at some data. Suppose the observed effect size is .35 for an N of 60.

### Going Transitive

Bayes factors are the comparison between two models. Hence we would like to compute the Bayes factors between any of these models. Let $$B_{ij}$$ be the comparison between the ith and jth model. We want a table like this:

$$B_{00}$$ $$B_{01}$$ $$B_{02}$$ $$B_{03}$$
$$B_{10}$$ $$B_{11}$$ $$B_{12}$$ $$B_{13}$$
$$B_{20}$$ $$B_{21}$$ $$B_{22}$$ $$B_{23}$$
$$B_{30}$$ $$B_{31}$$ $$B_{32}$$ $$B_{33}$$

Off the bat, we know the Bayes factor between a model and itself is 1 and that $$B_{ij} = 1/B_{ji}$$. So we only need to worry about the lower triangle:

1
$$B_{10}$$ 1
$$B_{20}$$ $$B_{21}$$ 1
$$B_{30}$$ $$B_{31}$$ $$B_{32}$$ 1

We can use the code below, adapted from the previous post, to compare the null against each of the other models:
$B_{10} = 4.9, \quad B_{20} = 4.2, \quad B_{30} = .0009$
Here we see that the point null is not as attractive as the ROPE null or the positive model. It is more attractive, however, than the negative model.
Suppose, however, that you are most interested in the ROPE null and its comparison to the positive and negative models. The missing Bayes factors are $$B_{12}$$, $$B_{13}$$, and $$B_{23}$$. The key application of transitivity is as follows:
$B_{ij} = B_{ik} \times B_{kj}.$
So, we can compute $$B_{12}$$ as follows: $$B_{12} = B_{10} \times B_{02} = B_{10}/B_{20} = 4.9/4.2 = 1.2$$. The other two Bayes factors are computed likewise: $$B_{13} = 5444$$ and $$B_{23} = 4667$$.

So what have we learned? Clearly, if you were pressed to choose a direction, it is in the positive direction. That said, the evidence for a positive effect is slight when compared to a ROPE null.

### Snippets of R Code

#First, Define Your Models as a List
#lo, lower bound of support
#hi, upper bound of support
#fun, density function

#here are Models M1, M2, M3
#add or change here for your models
mod1=list(lo=-.25,hi=.25,fun=function(x,lo,hi) dunif(x,lo,hi))
mod2=list(lo=0,hi=Inf,fun=function(x,lo,hi) dgamma(x,shape=3,rate=2.5))
mod3=list(lo=-Inf,hi=0,fun=function(x,lo,hi) dgamma(-x,shape=3,rate=2.5))
#note, we don't need to specify the point null, it is built into the code

#Let's make sure the densities are proper, here is a function to do so:
normalize=function(mod) return(c(mod,K=1/integrate(mod$fun,lower=mod$lo,upper=mod$hi,lo=mod$lo,hi=mod$hi)$value))

#and now we normalize the three models
mod1=normalize(mod1)
mod2=normalize(mod2)
mod3=normalize(mod3)

#Observed Data
es=.35
N=60

#Here is the key function that computes the Bayes factor between a model and the point null
BF.mod.0=function(mod,es,N) {
f=function(delta) mod$fun(delta,mod$lo,mod$hi)*mod$K
pred.null=dt(sqrt(N)*es,N-1)
altPredIntegrand=function(delta,es,N) dt(sqrt(N)*es,N-1,sqrt(N)*delta)*f(delta)
pred.alt=integrate(altPredIntegrand,lower=mod$lo,upper=mod$hi,es=es,N=N)$value
return(pred.alt/pred.null)
}

B10=BF.mod.0(mod1,es,N)
B20=BF.mod.0(mod2,es,N)
B30=BF.mod.0(mod3,es,N)
print(paste("B10=",B10,"   B20=",B20,"   B30=",B30))

B12=B10/B20
B13=B10/B30
B23=B20/B30
print(paste("B12=",B12,"   B13=",B13,"   B23=",B23))

## Tuesday, March 15, 2016

### Statistical Difficulties from the Outer Limits

You would think that the more data we collect, the closer we should be to the truth. This blog post falls into the "I may be wrong" category.  I hope many of you comment.

### ESP: God's Gift To Bayesians?

It seems like ESP is God's gift to Bayesians.  We use it like a club to reinforce the plausibility of null hypotheses and to point out the difficulties of frequentist analysis.

In the 1980s, a group of Princeton University engineers set out to test ESP by asking people to use their minds to change the outcome of a random noise generator (check out their website).   Over the course of a decade, these engineers collected an astounding 104,490,000 trials.  On each trial, the random noise generator flipped a gate with known probability of exactly .5.  The question was whether a human operator using only the power of his or her mind could increase this rate.  Indeed, they found 52,263,471 gate flips, or 0.5001768 of all trials.  This proportion, though only slightly larger than .5, is nonetheless significantly larger, with a damn low p-value of .0003.   The figure below shows the distribution of successes under the null, and the observation is far to the right.  The green interval is the 99% CI, and it does not include the null.

Let's assume these folks have a decent setup and the true probability should be .5 without human ESP intervention.  Did they show ESP? What do you think?
The data are numerous, but do you feel closer to the truth?  Impressed by the low p-value?  Bothered by the wafer-thin effect?  Form an opinion; leave a comment.

Bayesians love this example because we can't really fathom what a poor frequentist would do.  The p-value is certainly lower than .05, even lower than .01, and even lower than .001.  So, it seems like a frequentist would need to buy in.  The only way out is to lower the Type I error rate in response to the large sample size.  But to what value and why?

### ESP: The Trojan Horse?

ESP might seem like God's gift to Bayesians, but maybe it is a Trojan Horse.  A Bayes factor model comparison analysis goes as follows.  The no-ESP null model is
$M_0: Y \sim \mbox{Binomial}(.5,N)$
The ESP alternative is
$M_1: Y|\theta \sim \mbox{Binomial}(\theta,N)$
A prior on $$\theta$$ is needed to complete the specification.  For the moment, let's use a flat one, $$\theta \sim \mbox{Unif}(0,1)$$. It is pretty easy to calculate a Bayes factor here, and the answer is 12-to-1 in favor of the null.   What a relief.

ESP proponents might rightly criticize this prior as too dispersed.  We may reasonably assume that $$\theta$$ should not be less than .5, as we can assume the human operators are following the direction to increase rather than decrease the proportion of gate flips.   Also, the original investigators might argue that it is unreasonable to expect anything more than a .1% effect, so the top might be .501.  In fact, they might argue they ran such a large experiment because they expected, a priori, such a small effect.  If the prior is $$\theta \sim \mbox{Unif}(.5,.501)$$, then the Bayes factor is 84-to-1 for an ESP effect.

The situation seems tenuous.  The figure below shows the Bayes factors for both priors as a function of the number of trials.  To draw these curves, I simply kept the proportion of successes constant at 0.5001768.  The line is for the observed number of trials.  With this proportion, the Bayes factors not only depend on the prior, they also depend in unintuitive ways on sample size.  For instance, if we doubled the number of trials and successes, the Bayes factors become 40-to-1 and 40,000-to-1, respectively, for the flat prior and the very small interval one.

Oh, I can see the anti-Bayes crowd getting ready to chime in should they read this.   Sanjay Srivastava may take the high road and discuss the apparent lack of practicality of the Bayes factor.  Uri Simonsohn may boldly declare that Bayesians can't find truth.   And perhaps Uli Schimmack will create a new index, the M-Index, where M stands for moron.  Based on his analysis of my advocacy, he may declare I have the second highest known M-Index, perhaps surpassed only by E.-J. Wagenmakers.

Seems like ESP was a bit of a Trojan Horse.  It looked all good, and then turned on us.

### But What Happened?

Bayes' rule is ok of course.  The problem is us.  We tend to ask too much of statistics.   Before I get to my main points, I need to address one issue: what is the model?  Many will call the data model, the binomial specification in this case, "the model."  The other part, the priors on parameters, is not part of "the model"; it is the prior.  Yet, it is better to think of "the model" as the combination of the binomial and prior specification.  It's all one model, and this one model provides an a priori predictive distribution about where the data should fall (see my last blog post).  The binomial is a conditional specification, and the prior completes the model.
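As a concrete check on the two Bayes factors quoted above, here is a minimal sketch using the counts reported earlier in the post. Under a Unif(a, b) prior on $$\theta$$, the marginal probability of the data reduces to a difference of beta distribution functions, which keeps the computation numerically stable at this sample size.

y=52263471; N=104490000                     #reported successes and trials
pred.null=dbinom(y,N,.5)                    #predicted probability of the data under the no-ESP null
pred.unif=function(a,b) (pbeta(b,y+1,N-y+1)-pbeta(a,y+1,N-y+1))/((N+1)*(b-a))
pred.null/pred.unif(0,1)                    #about 12-to-1 for the null against the flat prior
pred.unif(.5,.501)/pred.null                #about 84-to-1 for the Unif(.5,.501) alternative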
With this in mind, the above figure strikes me as quite reasonable.  Consider the red line, the one that compares the null to the model where the underlying probability ranges across the full interval.  Take the point for 10,000 trials.   The number of successes is 5,002, which is almost 1/2 of all trials.  Not surprisingly, this value is evidence for the null compared to this diffuse alternative.  But the same value is not evidence for the null compared to the more constrained alternative model where $$.5<\theta<.501$$.  Both the null and this alternative are about the same for 10,000 trials, and each predicts 5,002 successes out of 10,000 trials equally well.  Hence, the Bayes factor is equivocal.  This alternative and the null are so similar that it takes far more data to discriminate between them.   As we gain more and more data, say 100,000,000 trials, the slight discrepancy from 1/2 can be resolved, and the Bayes factors start to favor the alternative models.  As the sample size is increased further, the discrepancy becomes more pronounced.  Everything in that figure makes beautiful sense to me---it all is as it should be.  Bayes' rule is ok.

Having more and more data doesn't get us closer to the truth.  What it does, however, is give us greater resolution to more finely discriminate among models.

### Loose Ends

The question "is there an effect?" strikes me as ill formed.   Yet, we answer the question affirmatively daily.  Sometimes, effects are obvious, and they hit you between the eyes.  How can that be if the question is not well formed?

I think when there are large effects, just about any diffuse alternative model will do.  As long as the alternative is diffuse, data with large effects easily discriminate this diffuse alternative from the null.  It is in this sense that effects are obviously large.

What this example shows is that if one tries to resolve small effects with large sample sizes, there is intellectual tension.  Models matter.  Models are all that matter.  Large data gives you greater resolution to discriminate among similar models.  And perhaps little else.

### The Irony Is...

This ESP example is ironic.  The data are so numerous that they are capable of finely discriminating among just about any set of models we wish, even between a point null and a uniform model subtending only .001 in width on the probability scale.  The irony is that we have no bona-fide competing models to discriminate.  ESP by definition seemingly precludes any scientific explanation, and without such explanation, all alternatives to the null are a bit contrived.  So while we can discriminate among models, there really is only one plausible one, the null, and no need for discrimination at all.

If forced to do inference here (which means someone buys me a beer), I would choose the full-range uniform as the alternative model and state the 12-to-1 ratio for the null.  ESP is such a strange proposition that why would values of $$\theta$$ near .5 be any more a priori plausible than those away from it?

## Saturday, February 27, 2016

### Bayesian Analysis: Smear and Compare

This blog post is co-written with Julia Haaf (@JuliaHaaf).

Suppose Theory A makes a prediction about main effects and Theory B makes a prediction about interactions.  Can we compare the theories with data?  This question was posed on the Facebook Psychological Methods Group by Rickard Carlsson.
Uli Schimmack (@R_Index) put the discussion in general terms with this setup: Should patients take Drug X to increase their life expectancy?
Theory A = All patients benefit equally (unlikely to be true).
Theory B1 = Women benefit, but men's LE is not affected.
Theory B2 = Women benefit, but men's LE decreases (cross-over).
Theory B3 = Women benefit, and men benefit too, but less.

We are going to assume that each of these statements represents a theoretically interesting position of constraint. The goal is to use data to state the relative evidence for or against these positions.  This question is pretty hard from a frequentist perspective, as there are difficult order constraints to be considered.  Fortunately, it is relatively simple from a Bayesian model-comparison perspective.

### Model Specifications

The first and perhaps most important step is representing these verbal positions as competing statistical models, that is, performing model specification.  Model specification is a bit of an art, and here is our approach:

Let $$Y_{ijk}$$ be life expectancy, where $$i$$ denotes gender ($$i=w$$ for women; $$i=m$$ for men), where $$j$$ denotes drug status ($$j=0$$ for placebo; $$j=1$$ for treatment), and where $$k$$ denotes the replicate, as there are several people per cell. We can start with the standard setup:
$Y_{ijk} \sim \mbox{Normal}(\mu_{ij},\sigma^2).$

The next step is building in meaningful constraints on the true cell means $$\mu_{ij}$$.  The standard approach is to think in terms of the grand mean, main effects, and interactions.  We think that in this case, and for these positions, the standard approach is not as suitable as the following two-cornerstone approach:
$$\mu_{w0} = \alpha$$
$$\mu_{w1} = \alpha + \beta$$
$$\mu_{m0} = \gamma$$
$$\mu_{m1} = \gamma+\delta$$
With this parameterization, all the models can be expressed as various constraints on the relationship between $$\beta$$, the effect for women, and $$\delta$$, the effect for men.

Model A: The constraint in Theory A is instantiated by setting $$\beta=\delta$$. We place a half normal on this single parameter, the equal effect of the drug on men and women. See the figure below.
Model B1: The constraint in Theory B1 is instantiated by setting $$\beta>0$$ and $$\delta=0$$.  A half normal on $$\beta$$ will do.
Model B2: The constraint in Theory B2 is instantiated by setting $$\beta>0$$ and $$\delta<0$$.  We used independent half normals here.
Model B3: The constraint in Theory B3 is that $$0<\delta<\beta$$. This model is also shown, and it is similar to the preceding one; the difference is in the form of the constraints.

Of course, there are other models which might be useful, including the null, models with no commitment to a benefit for women, or models that do not assume women benefit more than men. Adding them presents no additional difficulties in the Bayesian approach.

### Analysis: Smear & Compare

If we are willing to make fine specifications, as above, then it is straightforward to derive predictions for data of a set sample size. These predictions are shown as a function of the sample effects for women and men, that is, the change in lifespan between treatment and placebo for each gender.   These effects are denoted as $$\hat{\beta}$$ and $$\hat{\delta}$$, respectively.  Here are the predictions:

Notice how these predictions are smeared versions of the models.  That is what sample noise does! With these predictions, we are ready to observe data.
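Before turning to the data, here is a rough sketch of how such smeared predictions can be simulated from the priors. The number of draws, the per-cell sample size, the half-normal scale, the residual standard deviation, and the particular way Model B3 encodes $$0<\delta<\beta$$ are all assumptions for illustration; the figures above come from the authors' own specification.

set.seed(1)
M=5000; n=20; s=2; sigma=5                  #assumed draws, per-cell n, prior scale, residual sd
se=sigma*sqrt(2/n)                          #sd of a sample effect (difference of two cell means)
halfnorm=function(M) abs(rnorm(M,0,s))
smear=function(beta,delta) cbind(betaHat=rnorm(M,beta,se),deltaHat=rnorm(M,delta,se))
b=halfnorm(M)
predA=smear(b,b)                            #Model A: equal benefit
predB1=smear(halfnorm(M),0)                 #Model B1: women only
predB2=smear(halfnorm(M),-halfnorm(M))      #Model B2: cross-over
b3=halfnorm(M)
predB3=smear(b3,runif(M,0,b3))              #Model B3: one way to encode 0 < delta < beta
plot(predB3,pch='.',xlab="Sample effect, women",ylab="Sample effect, men")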
Suppose we observe that the treatment extends women's lives by 2 years and men's lives by one year.  We have now included this observed value as a red dot in the figures below. As can be seen, the observation is best predicted by Model B3. The Bayes factor is the relative comparison of these predictions.  We can ask by how much B3 beats the other models.  Here it is:
B3 beats A by 3.7-to-1
B3 beats B1 by 18.2-to-1
B3 beats B2 by 212-to-1

## Saturday, February 6, 2016

### What It Would Take To Believe in ESP?

"Bem (2011) is still not retracted.  Not enough public critique?"  -- R-Index, Tweet on February 5th, 2016.

Bem's 2011 paper remains controversial because of the main claim of ESP.  Many researchers probably agree with me that the odds that ESP is true are quite small.  My subjective belief is that it is about three times as unlikely as winning the PowerBall jackpot.  Yet, Bem's paper is well written and well argued.  In many ways it is a model of how psychology papers should be written.  And so we have a problem---either there is ESP or the everyday way we produce and communicate knowledge is grievously flawed.   One benefit of Bem (2011) is that it forces us to reevaluate our production of knowledge perhaps more forcefully than any direct argument could.  How could the ordinary application of our methods lead to the ESP conclusion?

### There Is Evidence for an ESP Effect

The call to retract Bem is unfortunate.   There is no evidence of any specific fraud nor any element of substantial incompetence.  That does not mean the paper is free from critique---there is much to criticize, as I will briefly mention subsequently (see also Tal Yarkoni's blog).  Yet, even when the critiques are taken into account, there is evidence from the reported data of an ESP effect.  Morey and I found a Bayes factor of about 40-to-1 in favor of an ESP effect.

In getting this value, we noted a number of issues as follows:  We feel Experiments 5, 6, and 7 were too opportunistic.  There was no clear prediction for the direction of the effect---either retroactive mere exposure, where future repeats increase the feeling of liking, or retroactive habituation, where future repeats decrease the feeling of liking.  Both of these explanations were used post hoc to explain different ESP trends; we argue this usage is suspect and discarded these results.  We also worried about the treatment of non-erotic stimuli.  In Experiments 2-4, emotional non-erotic stimuli elicited ESP; in Experiments 8-9 neutral stimuli elicited ESP.  In Experiment 1, however, these non-erotic stimuli did not elicit ESP; in fact only the erotic ones did.  So, we feel Experiment 1 is a failure of ESP for these non-erotic stimuli and treated it as such in our analysis.  Even with these corrections, there was 40-to-1 evidence for an ESP effect.

In fact, the same basic story holds for telepathy.  Storm et al. meta-analytically reviewed 67 studies and found a z of about 6, indicating overwhelming evidence for this ESP effect.  We went in, examined a bunch of these studies and trimmed out several that did not meet the criterion.  Even so, the Bayes factor was as much as 330-to-1 in favor of an ESP effect!  (see Rouder et al., 2013)

### Do I Believe In ESP?

No.  I believe that there is some evidence in the data for something, but the odds that it is ESP are too remote.  Perhaps there are a lot of missing negative studies.

### Toward Believing In ESP: The Movie Theatre Experiment

So what would it take to believe in ESP?  I think Feynman once noted that a physicist would not be satisfied with such small effects.  She would build a better detector or design a better experiment.  (I can't find the Feynman cite, please help.)  So here is what would convince me:

I'd like to get 500 people in a movie theatre and see if they could feel the same future.  Each would have an iPad, and beforehand, each would have provided his or her preferences for erotica.  A trial would start with a prediction---each person would have to predict whether an ensuing coin flip will land heads or tails.  From this, we tally the predictions to get a group point prediction.  If more people predict a head than a tail, the group prediction is heads; if more people predict a tail, the group prediction is tails.  Now we flip the coin.  If the group got it right, then everyone is rewarded with the erotica of their choice.  If the group got it wrong, then everyone is shown a gross IAPS photo of decapitated puppies and the like.   We can run some 100 trials.  I bet people would have fun.

Here is the frequentist analysis:  Let's suppose under the ESP alternative that people feel the future with a rate of .51 compared to the .50 baseline.  So, how often is the group prediction from 500 people correct? The answer is .66.  Telling whether performance is .66 or .50 is not too hard.  If we run 100 total trials, we can divide up at 58: 58 or fewer group-correct trials is evidence for the null; 59 or more group-correct trials is evidence for ESP.  The probability of getting more than 58 group-correct trials under the null is .044.  The probability of getting fewer than 59 group-correct trials under the ESP alternative is .058.  The group prediction about a shared future is a better detector than the usual way.
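These numbers are easy to verify. Here is a minimal check of the .66, .044, and .058 figures (ties among the 500 predictions are ignored, as in the description above):

1-pbinom(250,500,.51)                       #chance the group prediction is correct on a trial, about .66
1-pbinom(58,100,.5)                         #chance of calling ESP when the null holds, about .044
pbinom(58,100,.66)                          #chance of calling the null when the ESP alternative holds, about .058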
Of course, I would perform a Bayesian analysis of the data.  I would put a distribution on the per-person ESP effect, allowing some people to not feel the future at all.  Then I would generalize this to a distribution for the group, derive predictions for this model and the null, and do the usual Bayes factor comparison.  I am not sure this experiment would fully convince me, but it would change my skeptical beliefs by a few orders of magnitude.  Do it twice and I might even be a believer!

Now, how to get funding to run the experiment?  Mythbusters?

### Closing Thoughts: Retractions and Shaming

The claim that Bem (2011) should be retracted perhaps comes from the observation that getting 9 of 9 significant effects with such a small effect size and with the reported sample sizes is pretty rare.  I am not a fan of this type of argument for retraction.  I would much rather the critique be made, and we move on.  Bem's paper has done the field much good.  Either Bem has found the most important scientific finding in the last 100 years or he has taught us much about how we do research.  Either way, it is a win-win.  I welcome his meta-analysis on the same grounds.

## Sunday, January 24, 2016

### Roll Your Own: How to Compute Bayes Factors For Your Priors

Sometimes people ask what prior they should use in computing a Bayes factor. It's a great question, and trying to answer it leads to a deeper understanding of Bayesian methods.  Here I provide some R code so that you can visualize and compute Bayes factors for (almost) any prior you wish. At the end of this blog I provide all the R code in one block. You might even want to go down there and cut-and-paste it into your R editor now.

The Bayes factor compares two models. Let's take one of them to be a point null.
The other, the alternative, is up to you.

### Models

Let's first start with a data model. Data are independent normal draws with mean $$\mu$$ and variance $$\sigma^2$$. It is more convenient to re-express the normal as a function of effect size, $$\delta$$, and $$\sigma^2$$, where $$\delta = \mu/\sigma$$. Here is the formal specification:
$$Y_i \mid \delta,\sigma^2 \stackrel{iid}{\sim} \mbox{Normal}(\sigma\delta,\sigma^2).$$

A null model is implemented by setting $$\delta = 0$$.

The alternative model on effect size is up to you. Here is how you do it. Below is the specification for the R function altDens, the density of the alternative. I have chosen a normal with a mean of .5 and a standard deviation of .3. I have also truncated this distribution at zero, so negative values are not allowed. You can see that in the specification of the function altDens. Set lo and hi, the support of the alternative. Here I set them to 0 and infinity, respectively.

#Specify Alternative (up to constant of proportionality)
lo=0 #lower bound of support
hi=Inf #upper bound of support
altDens=function(delta) dnorm(delta,.5,.3)*as.integer(delta>lo)*as.integer(delta<hi)

You may notice that this alternative is not quite a proper density because it does not integrate to 1. That's ok; the following function rescales the alternative so the density does indeed integrate to 1.0.

#Normalize alternative density in case user does not
K=1/integrate(altDens,lower=lo,upper=hi)$value
f=function(delta) K*altDens(delta)

### Visualizing the Models

Let's now take a look at the competing models on $$\delta$$, the effect size. Here is the code and the graphs:

delta=seq(-2,3,.01)
maxAlt=max(f(delta))
plot(delta,f(delta),typ='n',xlab="Effect Size Parameter Delta",ylab="Density",ylim=c(0,1.4*maxAlt),main="Models")
arrows(0,0,0,1.3*maxAlt,col='darkblue',lwd=2)
lines(delta,f(delta),col='darkgreen',lwd=2)
legend("topright",legend=c("Null","Alternative"),col=c('darkblue','darkgreen'),lwd=2)

### Bayes Factor Computation

The Bayes factor is based on prediction. It is the ratio of the predicted density of the data under one model relative to that under another. The predicted density of the data for the null is easy to compute and is related to the central t distribution. Here is the code:

nullPredF=function(obs,N) dt(sqrt(N)*obs,N-1)

We can compute this predicted density for any observed effect size or for all of them. The following code does it for a reasonable range of effect sizes for a sample size of 30. You can change N, the sample size, as needed.

obs=seq(-2,3,.01)
N=30
nullPred=nullPredF(obs,N)

Getting the predictive density for the alternative is a bit harder. For each nonzero effect size parameter, the distribution of the observed effect follows a noncentral t distribution. Hence, to obtain predictions across all nonzero effect size parameters, we need to integrate the alternative model against the noncentral t distribution.
Here is the code with a simple loop:

altPredIntegrand=function(delta,obs,N) dt(sqrt(N)*obs,N-1,sqrt(N)*delta)*f(delta)
altPredF=function(obs,N) integrate(altPredIntegrand,lower=lo,upper=hi,obs=obs,N=N)$value
I=length(obs)
altPred=1:I
for (i in 1:I) altPred[i]=altPredF(obs[i],N)

Now we can plot the predictions for all observed effect sizes:

top=max(altPred,nullPred)
plot(type='l',obs,nullPred,ylim=c(0,top),xlab="Observed Effect Size",ylab="Density",main="Predictions",col='darkblue',lwd=2)
lines(obs,altPred,col='darkgreen',lwd=2)
legend("topright",legend=c("Null","Alternative"),col=c('darkblue','darkgreen'),lwd=2)

### Let's Run an Experiment

Suppose we just ran an experiment and observed a sample effect size of .4. Let's look at the predictions for this value.

plot(type='l',obs,nullPred,ylim=c(0,top),xlab="Observed Effect Size",ylab="Density",main="Predictions",col='darkblue',lwd=2)
lines(obs,altPred,col='darkgreen',lwd=2)
legend("topright",legend=c("Null","Alternative"),col=c('darkblue','darkgreen'),lwd=2)
my.es=.4
abline(v=my.es,lty=2,lwd=2,col='red')
valNull=nullPredF(my.es,N)
valAlt=altPredF(my.es,N)
points(pch=19,c(my.es,my.es),c(valNull,valAlt))
#cat("Predictive Density under the null is ",valNull)
#cat("Predictive Density under the specified alternative is ",valAlt)
#cat("Bayes factor (alt/null) is ",valAlt/valNull)

How well did the models do? Well, this value was clearly better predicted under the specified alternative than under the null. The density under the null is .04; the density under the alternative is .205; and the ratio between them is 5.16-to-1. This ratio is the Bayes factor. In this case the Bayes factor indicates support of slightly more than 5-to-1 for the specified alternative.

### Other Alternatives

The alternative is specified with just a bit of R code that can be changed. Suppose, for example, one wished a flat or uniform alternative from -.2 to 1.2. Here is the bit of code:

#Specify Alternative (up to constant of proportionality)
lo=-.2 #lower bound of support
hi=1.2 #upper bound of support
altDens=function(delta) dunif(delta,lo,hi)

Run this first, and then run the rest to see what happens.
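Another variant readers may want to try is a symmetric, two-sided alternative, for example a Cauchy centered at zero; the scale of .707 below is an assumption chosen only for illustration.

#Specify Alternative (up to constant of proportionality)
lo=-Inf #lower bound of support
hi=Inf #upper bound of support
altDens=function(delta) dcauchy(delta,0,.707)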
Here is another specification; this alternative is a gamma:

#Specify Alternative (up to constant of proportionality)
lo=0 #lower bound of support
hi=Inf #upper bound of support
altDens=function(delta) dgamma(delta,shape=2,scale=.5)

### The R Code in One Block

N=30
my.es=.4

############################
#Specify Alternative (up to constant of proportionality)
#Change this section to change alternative
lo=0 #lower bound of support
hi=Inf #upper bound of support
altDens=function(delta) dnorm(delta,.5,.3)*as.integer(delta>lo)*as.integer(delta<hi)
###########################

#Normalize alternative density in case user does not
K=1/integrate(altDens,lower=lo,upper=hi)$value
f=function(delta) K*altDens(delta)

delta=seq(-2,3,.01)

#Plot Alternative as a density and Null as a point arrow
maxAlt=max(f(delta))
plot(delta,f(delta),typ='n',xlab="Effect Size Parameter Delta",ylab="Density",ylim=c(0,1.4*maxAlt),main="Models")
arrows(0,0,0,1.3*maxAlt,col='darkblue',lwd=2)
lines(delta,f(delta),col='darkgreen',lwd=2)
legend("topright",legend=c("Null","Alternative"),col=c('darkblue','darkgreen'),lwd=2)

#Prediction Function Under Null
nullPredF=function(obs,N) dt(sqrt(N)*obs,N-1)

#Prediction Function Under the Alternative
altPredIntegrand=function(delta,obs,N) dt(sqrt(N)*obs,N-1,sqrt(N)*delta)*f(delta)
altPredF=function(obs,N) integrate(altPredIntegrand,lower=lo,upper=hi,obs=obs,N=N)$value

obs=delta
I=length(obs)
nullPred=nullPredF(obs,N)
altPred=1:I
for (i in 1:I) altPred[i]=altPredF(obs[i],N)

#Plot The Predictions
top=max(altPred,nullPred)
plot(type='l',obs,nullPred,ylim=c(0,top),xlab="Observed Effect Size",ylab="Density",main="Predictions",col='darkblue',lwd=2)
lines(obs,altPred,col='darkgreen',lwd=2)
legend("topright",legend=c("Null","Alternative"),col=c('darkblue','darkgreen'),lwd=2)

#Evaluate Predicted Density at Observed Value my.es
abline(v=my.es,lty=2,lwd=2,col='red')
valNull=nullPredF(my.es,N)
valAlt=altPredF(my.es,N)
points(pch=19,c(my.es,my.es),c(valNull,valAlt))
cat("Bayes factor (alt/null) is ",valAlt/valNull)
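As an optional extension, not part of the original block: once the functions above have been run, a few more lines trace how the Bayes factor changes with the observed effect size for the chosen alternative and sample size.

#Bayes factor as a function of the observed effect size
es.grid=seq(-1,1.5,.05)
bf=sapply(es.grid,function(e) altPredF(e,N)/nullPredF(e,N))
plot(es.grid,bf,typ='l',lwd=2,log='y',xlab="Observed Effect Size",ylab="Bayes Factor (alt/null)")
abline(h=1,lty=2)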
(Followers: 14) New Astronomy       (Followers: 26) New Astronomy Reviews       (Followers: 19) Nonlinear Dynamics       (Followers: 19) NRIAG Journal of Astronomy and Geophysics       (Followers: 4) Physics of the Dark Universe       (Followers: 4) Planetary and Space Science       (Followers: 106) Planetary Science       (Followers: 52) Proceedings of the International Astronomical Union       (Followers: 2) Publications of the Astronomical Society of Australia       (Followers: 3) Publications of the Astronomical Society of Japan       (Followers: 4) Research & Reviews : Journal of Space Science & Technology       (Followers: 20) Research in Astronomy and Astrophysics       (Followers: 38) Revista Mexicana de Astronomía y Astrofísica       (Followers: 3) Science China : Physics, Mechanics & Astronomy       (Followers: 4) Science China Physics, Mechanics & Astronomy       (Followers: 4) Solar Physics       (Followers: 29) Solar System Research       (Followers: 15) Space Science International       (Followers: 118) Space Science Reviews       (Followers: 92) Space Weather       (Followers: 27) Transport and Aerospace Engineering       (Followers: 13) Universe       (Followers: 6) Similar Journals Science China Physics, Mechanics & AstronomyJournal Prestige (SJR): 0.488 Citation Impact (citeScore): 2Number of Followers: 4      Hybrid journal (It can contain Open Access articles) ISSN (Print) 1674-7348 - ISSN (Online) 1869-1927 Published by Springer-Verlag  [2469 journals] • Interpretation of the η1 (1855) as a KK̄1(1400) + c.c. molecule Abstract: Abstract An exotic state with JPC = 1−+, denoted by η1(1855), was observed by BESIII Collaboration recently in J/ψ → γηη′. The fact that its mass is just below the threshold of KK̄1(1400) stimulates us to investigate whether this exotic state can be interpreted as a KK̄1(1400) + c.c. molecule or not. Using the one boson exchange model, we show that it is possible for KK̄1(1400) with JPC = 1−+ to bind together by taking the momentum cutoff Λ ≳ 2 GeV and yield the same binding energy as the experimental value when Λ ≈ 2.5 GeV. In this molecular picture, the predicted branch ratio Br(η1(1855) → ηη′) ≈ 15% is consistent with the experimental results, which again supports the molecular explanation of η1(1855). Relevant systems, namely KK̄1(1400) with JPC = 1−− and KK̄1(1270) with JPC = 1−±, are also investigated, some of which can be searched for in the future experiments. PubDate: 2022-05-05 • Modeling the arc and ring structures in the HD 143006 disk Abstract: Abstract Rings and asymmetries in protoplanetary disks are considered as signposts of ongoing planet formation. In this work, we conduct three-dimensional radiative transfer simulations to model the intriguing disk around HD 143006 that has three dust rings and a bright arc. A complex geometric configuration, with a misaligned inner disk, is assumed to account for the asymmetric structures. The two-dimensional surface density is constructed by iteratively fitting the ALMA data. We find that the dust temperature displays a notable discontinuity at the boundary of the misalignment. The ring masses range from 0.6 to 16 M⊕ that are systematically lower than those inferred in the younger HL Tau disk. The arc occupies nearly 20% of the total dust mass. Such a high mass fraction of dust grains concentrated in a local region is consistent with the mechanism of dust trapping into vortices. 
Assuming a gas-to-dust mass ratio of 30 that is constant throughout the disk, the dense and cold arc is close to the threshold of being gravitationally unstable, with the Toomre parameter Q ∼ 1.3. Nevertheless, our estimate of Q relies on the assumption for the unknown gas-to-dust mass ratio. Adopting a lower gas-to-dust mass ratio would increase the inferred Q value. Follow-up high resolution observations of dust and gas lines are needed to clarify the origin of the substructures. PubDate: 2022-05-05 • Search for gamma-ray line signals around the black hole at the galactic center with DAMPE observation Abstract: Abstract The adiabatic growth of a black hole (BH) may enhance the dark matter (DM) density surrounding it, causing a spike in the DM density profile. The spike around the supermassive BH at the center of the Milky Way may lead to a dramatic enhancement of the gamma-ray flux of DM annihilation from the galactic center (GC). In this work, we analyze the gamma-ray data of the innermost region (i.e., the inner 1°) of the GC to search for potential line-like signals from the BH spike. Such line-like signals could be generated in the process of DM particles annihilating into double photons. We adopt the gamma-ray data from the Dark Matter Particle Explorer (DAMPE). Although the DAMPE has a much smaller effective area than the Fermi-LAT, the gamma-ray line search can benefit from its unprecedented high energy resolution. No significant line-like signals are found in our analysis. We derive upper limits on the cross section of the annihilation based on this non-detection. We find that despite the DAMPE’s small effective area for photon detection, we can still place strong constraints on the cross section (〈σν〉 ≲ 10−27 cm3 s−1) in the spike scenario due to the very bright model-expected flux from the spike. Our results indicate that either DM does not annihilate primarily through the γγ channel in the mass range we considered or no sharp density spike is present at the GC. PubDate: 2022-05-05 • Ambi-chiral anomalous Hall effect in magnetically doped topological insulators Abstract: Abstract The chirality associated with broken time-reversal symmetry in magnetically doped topological insulators has important implications for the quantum transport phenomena. Here we report anomalous Hall effect studies in Mn- and Cr-doped Bi2Te3 topological insulators with varied thicknesses and doping contents. By tracing the magnitude of the anomalous Hall resistivity, we find that the Mn-type anomalous Hall effect characterized with clockwise chirality is strengthened by the reduction of film thickness, which is opposite to that of the Cr-type anomalous Hall effect with counterclockwise chirality. We provide a phenomenological physical picture to explain the evolution of the magnetic order and the anomalous Hall chirality in magnetically doped topological insulators. PubDate: 2022-04-29 • Correlation-enhanced electron-phonon coupling for accurate evaluation of the superconducting transition temperature in bulk FeSe Abstract: Abstract It has been widely recognized that, based on standard density functional theory calculations of the electron-phonon coupling, the superconducting transition temperature (Tc) in bulk FeSe is exceptionally low (almost 0 K) within the Bardeen-Cooper-Schrieffer formalism. Yet the experimentally observed Tc is much higher (∼10 K), and the underlying physical origin remains to be fully explored, especially at the quantitative level. 
Here we present the first accurate determination of Tc in FeSe where the correlation-enhanced electron-phonon coupling is treated within first-principles dynamical mean-field theory. Our studies treat both the multiple electronic bands across the Fermi level and phononic bands, and reveal that all the optical phonon modes are effectively coupled with the conduction electrons, including the important contributions of a single breathing mode as established by previous experiments. Accordingly, each of those phonon modes contributes pronouncedly to the electron pairing, and the resultant Tc is drastically enhanced to the experimentally observed range. The approach developed here should be broadly applicable to other superconducting systems where correlation-enhanced electron-phonon coupling plays an important role. PubDate: 2022-04-29 • Unraveling the threshold stress of structural rejuvenation of metallic glasses via thermo-mechanical creep Abstract: Abstract The competition between physical aging and structural rejuvenation determines the physical and mechanical properties of glassy materials. Thus, the rejuvenation-aging boundary must be identified quantitatively. In this work, we unravel a stress boundary to distinguish rejuvenation from aging via the thermo-mechanical creep of a typical Zr-based metallic glass. The crept glasses were rejuvenated into high-enthalpy disordered states when the applied stress exceeded a threshold that was numerically close to the steady-state flow stress; otherwise, the glasses were aged. A theoretical model for glass creep was adopted to demystify the observed stress threshold of rejuvenation. The model revealed that the thermo-mechanical creep beyond the threshold stress could activate sufficient shear transformations to create a net free volume, thus leading to structural rejuvenation. Furthermore, we derived the analytical expressions for the threshold and flow stresses. Both stresses can act as the rejuvenation-aging boundary, which is well supported by experimental creep data. The present work procures a deeper understanding of the rejuvenation mechanism of glasses and provides useful implications for abstaining from glass aging. PubDate: 2022-04-29 • A numerical method to predict the membrane tension distribution of Abstract: Abstract Changes in membrane tension significantly affect the physiological functions of cells in various ways. However, directly measuring the spatial distribution of membrane tension remains an ongoing issue. In this study, an algorithm is proposed to determine the membrane tension inversely by executing a particle-based method and searching for the minimum deformation energy based on the cell images and focal adhesions. A standard spreading cell model is established using 3D reconstructions with images from structured illumination microscopy as the reference cell shape. The membrane tension distribution, forces across focal adhesions, and profile of the spread cell are obtained using this method, until the cell deformation energy function optimization converges. Qualitative and quantitative comparisons with previous experimental results validated the reliability of this method. The results show that in the standard spreading cell model, the membrane tension decreases from the bottom to the top of the membrane. 
This method can be applied to predict the membrane tension distribution of cells freely spreading into different shapes, which could improve the quantitative analysis of cellular membrane tension in various studies for cell mechanics. PubDate: 2022-04-26 • Limits on sequential sharing of nonlocal advantage of quantum coherence Abstract: Abstract Sequential sharing of nonlocal correlation is inherently related to its application. We address the question as to how many observers can share the nonlocal advantage of quantum coherence (NAQC) in a (d × d)-dimensional state, where d is a prime or a power of a prime. We first analyze the trade-off between disturbance and information gain of the general d-dimensional unsharp measurements. Then in a scenario where multiple Alices perform unsharp measurements on one party of the state sequentially and independently and a single Bob measures coherence of the conditional states on the other party, we show that at most one Alice can demonstrate NAQC with Bob. This limit holds even when considering the weak measurements with optimal pointer states. These results may shed light on the interplay between nonlocal correlations and quantum measurements on high-dimensional systems and the hierarchy of different quantum correlations. PubDate: 2022-04-26 • Self-consistent effective-one-body theory for spinless binaries based on post-Minkowskian approximation I: Hamiltonian and decoupled equation for $$\psi _4^{\rm{B}}$$ ψ 4 B Abstract: Abstract To build a self-consistent effective-one-body (EOB) theory, in which the Hamiltonian, radiation-reaction force and waveform for the “plus” and “cross” modes of the gravitational wave should be based on the same effective background spacetime, the key step is to look for the decoupled equation for $$\psi _4^{\rm{B}} = {\ddot h_ +} - {\rm{i}}{\ddot h_ \times}$$ , which seems a very difficult task because there are non-vanishing tetrad components of the tracefree Ricci tensor for such spacetime. Fortunately, based on an effective spacetime obtained in this paper by using the post-Minkowskian (PM) approximation, we find the decoupled equation for $$\psi _4^{\rm{B}}$$ by dividing the perturbation part of the metric into the odd and even parities. With the effective metric and decoupled equation at hand, we set up a frame of self-consistent EOB model for spinless binaries. PubDate: 2022-04-26 • A semiclassical approach to surface Fermi arcs in Weyl semimetals Abstract: Abstract We present a semiclassical explanation for the morphology of the surface Fermi arcs of Weyl semimetals. Viewing the surface states as a two-dimensional Fermi gas subject to band bending and Berry curvatures, we show that it is the non-parallelism between the velocity and the momentum that gives rise to the spiral structure of Fermi arcs. We map out the Fermi arcs from the velocity field for a single Weyl point and a lattice with two Weyl points. We also investigate the surface magnetoplasma of Dirac semimetals in a magnetic field, and find that the drift motion, the chiral magnetic effect and the Imbert-Fedorov shift are all involved in the formation of surface Fermi arcs. Our work not only provides an insightful perspective on the surface Fermi arcs and a practical way to find the surface dispersion, but also paves the way for the study of other physical properties of the surface states of topological semimetals, such as transport properties and orbital magnetization, using semiclassical methods. 
PubDate: 2022-04-26 • Molecular transport under extreme confinement Abstract: Abstract Mass transport through the nanoporous medium is ubiquitous in nature and industry. Unlike the macroscale transport phenomena which have been well understood by the theory of continuum mechanics, the relevant physics and mechanics on the nanoscale transport still remain mysterious. Recent developments in fabrication of slit-like nanocapillaries with precise dimensions and atomically smooth surfaces have promoted the fundamental research on the molecular transport under extreme confinement. In this review, we summarized the contemporary progress in the study of confined molecular transport of water, ions and gases, based on both experiments and molecular dynamics simulations. The liquid exhibits a pronounced layered structure that extends over several intermolecular distances from the solid surface, which has a substantial influence on static properties and transport behaviors under confinement. Latest studies have also shown that those molecular details could provide some new understanding on the century-old classical theory in this field. PubDate: 2022-04-25 • Dissipation-induced nonreciprocal magnon blockade in a magnon-based hybrid system Abstract: Abstract We propose an experimentally realizable nonreciprocal magnonic device at the single-magnon level by exploiting magnon blockade in a magnon-based hybrid system. The coherent qubit-magnon coupling, mediated by virtual photons in a microwave cavity, leads to the energy-level anharmonicity of the composite modes. In contrast, the corresponding dissipative counterpart, induced by traveling microwaves in a waveguide, yields inhomogeneous broadenings of the energy levels. As a result, the cooperative effects of these two kinds of interactions give rise to the emergence of the direction-dependent magnon blockade. We show that this can be demonstrated by studying the equal-time second-order correlation function of the magnon mode. Our study opens an avenue to engineer nonreciprocal magnonic devices in the quantum regime involving only a small number of magnons. PubDate: 2022-04-19 • Extracting governing system for the plastic deformation of metallic glasses using machine learning Abstract: Abstract This paper shows hidden information from the plastic deformation of metallic glasses using machine learning. Ni62Nb38 (at.%) metallic glass (MG) film and Zr64.13Cu15.75Al10Ni10.12 (at.%) BMG, as two model materials, are considered for nano-scratching and compression experiment, respectively. The interconnectedness among variables is probed using correlation analysis. The evolvement mechanism and governing system of plastic deformation are explored by combining dynamical neural networks and sparse identification. The governing system has the same basis function for different experiments, and the coefficient error is ≤ 0.14% under repeated experiments, revealing the intrinsic quality in metallic glasses. Furthermore, the governing system is conducted based on the preceding result to predict the deformation behavior. This shows that the prediction agrees well with the real value for the deformation process. PubDate: 2022-04-18 • Investigation of the effect of quantum measurement on parity-time symmetry Abstract: Abstract Symmetry, including the parity-time (PT)-symmetry, is a striking topic, widely discussed and employed in many fields. It is well-known that quantum measurement can destroy or disturb quantum systems. 
However, can and how does quantum measurement destroy the symmetry of the measured system? To answer the pertinent question, we establish the correlation between the quantum measurement and Floquet PT-symmetry and investigate for the first time how the measurement frequency and measurement strength affect the PT-symmetry of the measured system using the 40Ca+ ion. It is shown that measurement at high frequencies would break the PT symmetry. Notably, even for an inadequately fast measurement frequency, if the measurement strength is sufficiently strong, the PT symmetry breaking can occur. The current work can enhance our knowledge of quantum measurement and symmetry and may inspire further research on the effect of quantum measurement on symmetry. PubDate: 2022-04-15 • Stability of superconducting Nd0.8Sr0.2NiO2 thin films Abstract: Abstract The discovery of superconducting states in the nickelate thin film with infinite-layer structure has paved a new way for studying unconventional superconductivity. So far, research in this field is still very limited due to difficulties in sample preparation. Here we report the successful preparation of the superconducting state of Nd0.8Sr0.2NiO2 thin film (Tc = 8.0–11.1 K) and study the stability of such films in the ambient environment, water, and under electrochemical conditions. Our work demonstrates that the superconducting state of Nd0.8Sr0.2NiO2 is remarkably stable, surviving at least 47 days of continuous exposure to air at 20°C and 35% relative humidity. We also show that the superconductivity disappears after the film is immersed in de-ionized water at room temperature for 5 h. Surprisingly, it can also survive under ionic liquid gating conditions with an applied voltage of about 4 V, which makes it even more stable than conventional perovskite complex oxides. PubDate: 2022-04-14 • Freezing crystallographic defects into nanoparticles: The development of pulsed laser defect engineering in liquid (PUDEL) PubDate: 2022-04-11 • Generation of nanomaterials by reactive laser-synthesis in liquid Abstract: Abstract Nanomaterials with tailored structures and surface chemistry are in high demand, as these materials play increasingly important roles in biology, catalysis, energy storage, and manufacturing. Their heightened demand has attracted attention towards the development of synthesis routes, particularly laser-synthesis techniques. These efforts drove the refinement of laser ablation in liquid (LAL) and related methods over the past two decades and have led to the emergence of reactive laser-synthesis techniques that exploit these methods' characteristic non-equilibrium conditions. Reactive laser-synthesis approaches foster unique chemical reactions that enable the formation of composite products like multimetallic nanoparticles, supported nanostructures, and complex minerals. This review will examine emerging reactive laser-synthesis methods in the context of established methods like LAL. The focus will be on the chemical reactions initiated within the laser plasma, with the goal of understanding how these reactions lead to the formation of unique nanomaterials. We will provide the first systematic review of laser reaction in liquid (LRL) in the literature, and bring a focus to the chemical reaction mechanisms in LAL and reactive-LAL techniques that have not yet been emphasized in reviews.
Discussion of the current challenges and future investigative opportunities into reactive laser-synthesis will impart guidance for researchers interested in designing reactive laser-synthesis approaches to novel nanomaterial production. PubDate: 2022-04-11 • The circuit design and optimization of quantum multiplier and divider Abstract: Abstract A fault-tolerant circuit is required for robust quantum computing in the presence of noise. Clifford + T circuits are widely used in fault-tolerant implementations. As a result, reducing T-depth, T-count, and circuit width has emerged as important optimization goals. A measure-and-fixup approach yields the best T-count for arithmetic operations, but it requires quantum measurements. This paper proposes approximate Toffoli, TR, Peres, and Fredkin gates with optimized T-depth and T-count. Following that, we implement basic arithmetic operations such as quantum modular adder and subtracter using approximate gates that do not require quantum measurements. Then, taking into account the circuit width, T-depth, and T-count, we design and optimize the circuits of two multipliers and a divider. According to the comparative analysis, the proposed multiplier and divider circuits have lower circuit width, T-depth, and T-count than the current works that do not use the measure-and-fixup approach. Significantly, the proposed second multiplier produces approximately 77% T-depth, 60% T-count, and 25% width reductions when compared to the existing multipliers without quantum measurements. PubDate: 2022-04-08 • A unified theory of ferromagnetic quantum phase transitions in heavy fermion metals Abstract: Abstract Motivated by the recent discovery of a continuous ferromagnetic quantum phase transition in CeRh6Ge4 and its distinction from other U-based heavy fermion metals such as UGe2, we develop a unified explanation of their different ground state properties based on an anisotropic ferromagnetic Kondo-Heisenberg model. We employ an improved large-N Schwinger boson approach and predict a full phase diagram containing both a continuous ferromagnetic quantum phase transition for large magnetic anisotropy and first-order transitions for relatively small anisotropy. Our calculations reveal three different ferromagnetic phases including a half-metallic spin selective Kondo insulator with a constant magnetization. The Fermi surface topologies are found to change abruptly between different phases, consistent with that observed in UGe2. At finite temperatures, we predict the development of Kondo hybridization well above the ferromagnetic long-range order and its relocalization near the phase transition, in good agreement with band measurements in CeRh6Ge4. Our results highlight the importance of magnetic anisotropy and provide a unified theory for understanding the ferromagnetic quantum phase transitions in heavy fermion metals. PubDate: 2022-04-01 • Unprotected quadratic band crossing points and quantum anomalous Hall effect in FeB2 monolayer Abstract: Abstract Quadratic band crossing points (QBCPs) and quantum anomalous Hall effect (QAHE) have attracted the attention of both theoretical and experimental researchers in recent years. Based on first-principle calculations, we find that the FeB2 monolayer is a nonmagnetic semimetal with QBCPs at K. Through symmetry analysis and k · p invariant theory, we find that the QBCP is not protected by rotation symmetry and consists of two Dirac points with the same chirality (Berry phase of 2π). 
Once Coulomb interactions are introduced, we find a spontaneous time-reversal-breaking instability of the spinful QBCPs, which gives rise to a C = 2 QAH insulator with orbital moment ordering. PubDate: 2022-04-01
# Development of topology and differential geometry in ETCS I would like to ask for some reference textbooks or articles, or any information that you know about developing topological concepts and differential geometric concepts using as a foundation ETCS which is elementary theory of category of sets by Lawvere. My motivation for it is that recently I have seen a reasonable discussion that ETCS could be a valid alternative foundation for set theory and being young I would like to take the risk and develop my machinery for differential geometry using this foundation with a hope that it could give me some useful intuition and an interesting point of view that those who used conventional set theory like ZFC/NBG don't have (interesting question is, is it really possible? I am pretty sure that both foundational systems can be shown to be equivalent if you equip ETCS with some additional axioms, but I would argue that the method of developing set theory differently makes you think about set theory differently as well). As a sidenote, I would also be very interested in your opinions about developing topology and differential geometry using ETCS. Maybe the set theoretic equivalents are not possible, but it is possible to construct something equivalent or more general than topological spaces/manifolds etc.? • Do differential geometers really "use" ZFC/NBG? Don't they, like most working mathematicians, just use set theory in an informal way without caring too much about foundational issues? – Lord Shark the Unknown Jan 15 at 6:15 • @LordSharktheUnknown in principle set theory is first order logic and topology and differential geometry is developed using this theory. Of course, after mathematicians train their intuition they are using informal shortcuts for developing theorems, but there are still rigorous foundations. I am interested in formulating these rigorous foundations and then I can proceed as informal as I want once I am sure I understand what I am assuming and what not, and once I understand how everything is defined. – Daniels Krimans Jan 15 at 6:17 • I'm sure you can use whatever foundations you want, and I'm sure they will give you intuition for neither topology nor geometry. – user98602 Jan 15 at 17:20 • I agree completely with the other responses you have received. I highly doubt there is a book on differential geometry that spells out its foundations in ETCS... I’m not even sure if there would be one that explicitly uses ZFC. I would be surprised, but less surprised in the case of topology since there are topology books out there that make set theoretical foundations explicit. On a more positive note, if you haven’t seen Lawvere’s “Sets for Mathematics,” you may want to have a look. – spaceisdarkgreen Jan 15 at 19:50 • @spaceisdarkgreen Not sure I agree. For example, most constructions in differential geometry are topological spaces. So you need to know what topological space is. Topological spaces are then defined using unions and intersections. So, you really need to know what unions and intersections are. In ZFC and ETCS these are pretty different. In ZFC they are pretty obvious but in ETCS you have to know how to make sense of them. So, there are observable differences between foundations you use in the most basic level. – Daniels Krimans Jan 15 at 20:27 I agree with Mike and Lord Shark. As you say, ETCS is not really a different theory than ZFC, up to the more abstruse axioms. It's not clear what it would even mean to develop topology in ETCS as opposed to ZFC. 
Indeed, I think most enthusiasts of ETCS would agree that most mathematicians effectively already work in ETCS moreso than in ZFC, insofar as nobody ever asks whether $$3$$ is an element of $$\pi$$; in fact, this observation was a core motivation for ETCS, in my understanding.
# 3D Pong Collision

## Recommended Posts

Hi guys, I hope this post does not seem too simple, but I've been stuck for weeks and it's stopping my development in games. I'm trying to make a nice 3D pong game; so far it's awesome. I've got a nice sphere-mapped ball, reflective patterns, and it all looks so slick. My problem is how to make the ball rebound off the paddle. I've researched sphere-to-box collision (get nearest point in box etc.) but I don't know how to calculate the rebound (new velocity values upon hit)! Also, what if the ball has gone 'inside' the box upon collision? Just setting the inverse of the velocity of the ball, the ball would repeatedly rebound every frame. I'm confused, since there are 4 sides on the paddle the ball can bounce off; so far all I have is an IsCollided() function. Thanks anyone who can help. - MeshMan

##### Share on other sites

Simple 2D pong never needed the distance between a 2D rectangle and a circle. Your 3D version shouldn't either. Treat your paddles as rectangles in a plane. When the ball touches (is about to go through) that plane (Ball.Z + Ball.Radius == Paddle.Z), check whether there is a paddle there (2D check), and if there is, then the simplest thing you could do is keep the X and Y velocities the same (X and Y being both perpendicular to the paddle), and invert the Z velocity (Ball.Velocity.Z = -Ball.Velocity.Z). That would basically be the ball bouncing off. If there is no paddle, the player's missed, so do nothing (reset ball position, increment appropriate scores, etc.) I've made a 3D pong in VB for my computers class once. If you're interested in seeing it (with the source), I'll see if I can find it. It's quite messy though, but should give you a basic idea on how to do simple physics. --- shurcooL [edited by - shurcooL on December 12, 2003 4:56:19 PM]

##### Share on other sites

Whoops, I was a little vague and not clear on exactly what I'm doing. My version of pong isn't exactly 2 paddles that just reflect the ball when they hit one axis. I have two paddles situated in a snooker-table-type arena, like so:

    --------------------
    |                  |
    |  ----            |
    |  |  |     o      |
    |  |  |            |
    |  ----            |
    |                  |
    --------------------

The paddles are a little away from the edge of the board's sides so the ball can rebound from the sides of the board but also get behind the paddles and rebound off the paddle that way (kinda like own goals). The score is calculated by hitting the ball on your opponent's back-board, first to 10 wins. So the ball needs to check collision from all sides of the paddles (3D paddles and 3D ball). Is this clear? Or would you like me to upload a screenie. - MeshMan

##### Share on other sites

Say hello to Mr Clickerson. That explains how to reflect a vector about a normal (of a plane / line). Reflect your velocity vector using that and you're done.

##### Share on other sites

Man, that's way over my head, I can't understand a word of that math article besides the normal vector to a plane. *shrugs* All I have is 2 bounding rects that say *yeah, we collide*, my math sux so that doesn't help, thanks anyways.

##### Share on other sites
```cpp
// Minimal vector type assumed by the snippet below.
struct Vector {
    float x, y, z;
    Vector(float x = 0, float y = 0, float z = 0) : x(x), y(y), z(z) {}
};

//-------------------------------------------------------
// dot product: d = A . B
//-------------------------------------------------------
float Dot(const Vector& A, const Vector& B) {
    return A.x*B.x + A.y*B.y + A.z*B.z;
}

//-------------------------------------------------------
// multiplication: B = A * k
//-------------------------------------------------------
Vector Mul(float k, const Vector& A) {
    return Vector(A.x * k, A.y * k, A.z * k);
}

//-------------------------------------------------------
// subtraction: C = A - B
//-------------------------------------------------------
Vector Sub(const Vector& A, const Vector& B) {
    return Vector(A.x - B.x, A.y - B.y, A.z - B.z);
}

//-------------------------------------------------------
// reflect a vector off a plane
// V -= 2.0f * (V . N) * N
// if the normal is not normalised,
// V -= 2.0f * [(V . N) / (N . N)] * N
//-------------------------------------------------------
bool Reflect(Vector& V, const Vector& N) {
    float VdotN = Dot(V, N);            // V . N
    float NdotN = Dot(N, N);
    if (VdotN > 0.0f)                   // ball moving away from the plane, don't reflect
        return false;
    if (NdotN < 0.0001f)                // normal too small
        return false;
    float k = 2.0f * (VdotN / NdotN);   // the reflection amount
    Vector Vn = Mul(k, N);              // the reflection vector
    V = Sub(V, Vn);                     // the new velocity, reflected off the plane
    return true;
}

//-------------------------------------------------------
// process collision of a ball and a paddle
// (the normal must point from the paddle surface towards the ball,
//  i.e. PointOnBall - PointOnPaddle, to match the sign check in Reflect)
//-------------------------------------------------------
void ProcessCollision(const Vector& PointOnBall, const Vector& PointOnPaddle, Vector& BallVelocity) {
    Vector N = Sub(PointOnBall, PointOnPaddle);
    Reflect(BallVelocity, N);
}
```

##### Share on other sites

oliii, why are you doing that and not using operators?

##### Share on other sites

I still don't know where to go with that code, since I need a closest point for the sphere and the paddle bounding box, but to get the closest point of a bounding rect / sphere I need a point to measure from. For example, to get the closest point on the paddle box I would need the closest point from the ball, but to get the closest point on the ball, I would need the closest point on the box. And also to add to the ultimate confusion I'm in, what about if the ball hits a paddle and intersects the paddle and goes inside it, since it will 'never' collide exactly on the pixels of the sides of the box.

##### Share on other sites

If it's basically just moving side to side, why not use 2D collision detection?

##### Share on other sites

It's 3D but the collision is 2D since nothing moves on the Y axis.
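The closest-point question raised at the end of the thread never gets a full answer above. The usual approach is: clamp the ball's centre to the paddle's axis-aligned box to get the nearest point on the box, treat the vector from that point to the centre as the contact normal, push the ball back out along it so it cannot re-collide every frame, and then reflect the velocity exactly as in the snippet above. A rough sketch (written in Python for brevity rather than in the thread's C++; all names are made up for the example, and the case where the ball's centre ends up fully inside the paddle is left out):

```python
import math

def closest_point_on_box(center, box_min, box_max):
    """Closest point on an axis-aligned box to an arbitrary point:
    clamp each coordinate of the point to the box's extents."""
    return [max(lo, min(c, hi)) for c, lo, hi in zip(center, box_min, box_max)]

def sphere_box_response(ball_pos, ball_vel, radius, box_min, box_max):
    """Test a sphere against an axis-aligned paddle and, on contact,
    push the sphere out and reflect its velocity about the contact normal."""
    p = closest_point_on_box(ball_pos, box_min, box_max)
    n = [c - q for c, q in zip(ball_pos, p)]        # from paddle surface toward ball centre
    dist = math.sqrt(sum(x * x for x in n))
    if dist >= radius or dist == 0.0:               # no contact (centre-inside case omitted)
        return ball_pos, ball_vel
    n = [x / dist for x in n]                       # unit contact normal
    # Push the ball back onto the surface so it does not rebound again next frame.
    ball_pos = [q + radius * x for q, x in zip(p, n)]
    # Reflect only if the ball is moving into the paddle.
    v_dot_n = sum(v * x for v, x in zip(ball_vel, n))
    if v_dot_n < 0.0:
        ball_vel = [v - 2.0 * v_dot_n * x for v, x in zip(ball_vel, n)]
    return ball_pos, ball_vel
```

The push-out step is what answers the "ball goes inside the box" worry: because the position is corrected to the surface before the velocity is reflected, the collision cannot fire again on the following frame.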
# Strange definition of a two-level system by the Bloch vector

A two-level system can be described by a density operator involving the Bloch vector $$\vec{r}; \quad r_x = Tr(\rho X); \quad r_y = Tr(\rho Y); \quad r_z = Tr(\rho Z)$$ as $$\rho = \frac{I + \vec{r}\cdot \vec{\sigma}}{2}$$ where $X$, $Y$, and $Z$ are the Pauli operators. What is the physical idea behind defining the density operator for a two-level system like this, and in particular what is $\vec{\sigma}$ here?

In this expression the state is represented on the Bloch sphere, and $\vec{\sigma} = (X, Y, Z)$ is the vector whose components are the Pauli matrices. The Bloch sphere is essentially a representation of the system: a sphere whose X, Y and Z axes correspond to the three Pauli operators, with the points where the axes pierce the surface representing their (pure) eigenstates, and the components of $\vec{r}$ being the expectation values of those operators. The Bloch vector points somewhere inside or on the sphere: a Bloch vector of unit length (a point on the surface) corresponds to a pure state, while shorter vectors correspond to mixed states.
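As a small numerical check of the formulas above (not part of the original question or answer), the components $r_i = \mathrm{Tr}(\rho\,\sigma_i)$ and the reconstruction $\rho = (I + \vec{r}\cdot\vec{\sigma})/2$ can be verified directly with NumPy; the example state is an arbitrary mixture chosen for illustration:

```python
import numpy as np

# Pauli matrices and identity
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Example density operator: 70% |0><0| mixed with 30% |+><+| (arbitrary choice)
ket0 = np.array([[1], [0]], dtype=complex)
ketp = np.array([[1], [1]], dtype=complex) / np.sqrt(2)
rho = 0.7 * (ket0 @ ket0.conj().T) + 0.3 * (ketp @ ketp.conj().T)

# Bloch vector components r_i = Tr(rho * sigma_i)
r = np.real([np.trace(rho @ P) for P in (X, Y, Z)])
print("Bloch vector:", r, "length:", np.linalg.norm(r))  # length < 1  =>  mixed state

# Reconstruct rho = (I + r . sigma) / 2 and check it matches the original
rho_rec = 0.5 * (I + r[0] * X + r[1] * Y + r[2] * Z)
assert np.allclose(rho, rho_rec)
```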
# Browse Dissertations and Theses - Food Science and Human Nutrition by Title • (1995) These studies utilized the preruminant calf animal model to compare the absorption, serum appearance and lipoprotein transport of several carotenoids commonly found in the human diet. In addition, this work examined the ... application/pdf PDF (7MB) • (1989) This study investigated the thermal inactivation by immersion cooking of antinutritional factors in soybeans while minimizing protein insolubilization. The comparison of inactivation kinetics of lipoxygenase (LO), trypsin ... application/pdf PDF (6MB) • (2015-07-21) Long-term care of elderly with diabetes in nursing homes has been a national issue. Guidelines focusing on long-term care of elderly with diabetes in nursing homes have been few, and the practice in nursing homes has faced ... application/pdf PDF (15MB) • (2013-08-22) Water is a key component of food materials. One of the most useful aspects of the water activity (aw) concept is the moisture sorption isotherm, which plots the moisture content of a material as a function of aw at the ... application/pdf PDF (2MB) • (1985) The utilization of whey permeate as a growth medium for Bacillus polymyxa and the improved productivity of 2,3-butylene glycol by B. polymyxa through continuous fermentation were accomplished. B. polymyxa and Klebsiella ... application/pdf PDF (3MB) • (1999) Palm oil contains high concentration of valuable carotenoids which are usually destroyed during conventional oil processing. The objective of this research was to develop a new process for recovering carotenoids by converting ... application/pdf PDF (5MB) • (1972) application/pdf PDF (6MB) • (1995) The overall objective of this research was to optimize the ceramic cross-flow microfiltration of ethanol fermentation broths. The ceramic membranes from three different manufacturers were made of $\alpha$-alumina with pore ... application/pdf PDF (6MB) • (2012-05-22) Flavor is a major determinant of the consumer acceptance of a food product. The availability of a flavor compound for sensory perception is greatly influenced by its interaction with non-volatile food constituents including ... application/pdf PDF (2MB) • (2010-08-31) Picky eating is a mealtime struggle for many parents and, thus, a topic of interest for many researchers. Yet, the lack of an operational definition is a great limitation in measuring, quantifying, and truly understanding ... application/pdf PDF (4MB) • (1979) application/pdf PDF (4MB) • (1986) Calcium binding to casein affects the formation of casein micelles and the behavior of the casein proteins during processing. Since calcium binding is pH dependent, it is important to understand the binding properties of ... application/pdf PDF (4MB) • (1994) Hydration, aggregation, ion binding, and water sorption properties of soy proteins were determined by $\sp$O, $\sp2$H and $\sp1$H nuclear magnetic resonance (NMR), combined with rheological and computational techniques. ... application/pdf PDF (7MB) • (1990) Initially, electroporation-induced transformation of intact cells of C. perfringens 3624A-Rif$\sp{\rm r}$/Str$\sp{\rm r}$ with plasmids pAM$\beta$1 and pHR106 resulted in transformation efficiencies of 1.4 $\times$ 10$\sp2$ ... application/pdf PDF (5MB) • (2001) An examination of the methods for nuclear magnetic cross relaxation spectroscopy (CRS) data collection and analysis was conducted using water and an aqueous waxy corn starch suspension to better perform and interpret the ... 
application/pdf PDF (14MB) • (1976) application/pdf PDF (9MB) • (1962) application/pdf PDF (6MB) • (1989) Several model systems were designed to develop intermediate moisture shelf-stable products. These systems were designed using whole soy and desludged soy slurries. Four nutritive sweeteners namely sucrose, glucose, fructose ... application/pdf PDF (5MB) • (2004) The overall objective of this study was to develop and validate a rapid volatile analysis method which coupled Dynamic Vapor Sorption technology with fast gas chromatography-flame ionization detection for studying the ... application/pdf PDF (33MB) • (1974) application/pdf PDF (4MB)
### Scientific Measurement

Practice your skills of measurement and estimation using this interactive measurement tool based around fascinating images from biology.

# A Question of Scale

##### Stage: 4 Challenge Level:

'Order of magnitude' in science is a very useful concept: we are often not necessarily interested in the exact measurement of a quantity but rather whether it is 'about a metre' or 'about a kilometre' or 'about a nanometre' etc. Working with orders of magnitude makes use of scientific notation. For any two numbers $X$ and $Y$ we use the notation $X$e$Y$ to mean $X\times 10^Y$. In case you are wondering, the letter $e$ stands for 'exponent' and is sometimes written $E$ instead. In standard notation, the number $X$ must be between $1.0$ and $9.99...$ and the exponent a whole number. For example, $1.2$e$3$ is $1.2\times 10^{3}$, which is the same as $1200$. The power of $10$ can also be negative, so that $6.8$e$-2$ means $6.8\times 10^{-2}$, which is the same as $0.068$. In science, certain exponents are more frequently used. Powers of $\pm 3,\pm 6, \pm 9, \pm 12$ are standard, which is why you will see (non-standard) measurements such as $375$e$-9$ m, which scientists would refer to as 'three hundred and seventy-five nanometres'. You will need to convert such numbers to standard form before placing them on the scale. Are there any objects whose size you are confident you know? Are the other objects larger or smaller than these?
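If you want to automate the conversion to standard form described above, a small helper along the following lines does it; the function name and the handling of zero are simply choices made for this sketch:

```python
import math

def to_standard_form(mantissa, exponent):
    """Normalise a value written as mantissa x 10^exponent so that the
    mantissa lies in [1.0, 10.0), as required for standard notation."""
    value = mantissa * 10.0 ** exponent
    if value == 0:
        return 0.0, 0
    new_exp = math.floor(math.log10(abs(value)))
    return value / 10.0 ** new_exp, new_exp

# The 375e-9 m example from the text: 375 x 10^-9 m  ->  3.75 x 10^-7 m
print(to_standard_form(375, -9))   # roughly (3.75, -7), up to floating-point rounding
print(to_standard_form(6.8, -2))   # already in standard form: (6.8, -2)
```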
. ## Desperate Deniers: Bob Tisdale is lost in uncertainty of February temperatures Sou | 2:23 AM Bob Tisdale has got himself lost in a world of uncertainty. He's written an article at WUWT (archived here) with the headline: "February 2016 Global Surface Temperature Anomalies May or May Not Have Been Highest on Record, According to the UKMO". In fact it was the hottest February on record. What Bob does to support his claim is say how the UK Met Office Hadley Centre publishes uncertainty limits with the global surface temperature data. Bob went looking for any month that might have had an anomaly that came close. He couldn't find any other February, but he did find a January. He wrote: As shown, the lower February 2016 value for the global temperature anomaly is +0.92 deg C referenced to the years of 1961-1990.  This was exceeded by the upper January 2007 value of +0.98 deg C. Just in case you’re having trouble seeing that in Figure 1, see the graph here, which starts the data in January 1997.  So the best the alarmists could claim, according to the HadCRUT4 data, is that the February 2016 global surface temperature anomalies may or may not have been the highest on record when considering the uncertainties of the data. Seriously? Poor Bob is really stretching. The fact is that it was the hottest February on record. There was no other February that came close, not within the widest 95% probability bounds. According to the Met Office Hadley Centre, the February was hotter than the average of 1961-1990 by 1.057 +/- 0.136 °C. Incidentally, Bob really didn't need to scour the records to find another month where the uncertainty ranges of the monthly anomalies overlapped. They didn't just overlap in January 2007, they also overlapped in every month from August 2015 to January 2016. It's been getting very hot lately. Here are some charts for you using data from the UK Met Office. The first one is monthly global (all months). See the little spike that Bob was desperately latching on to in January 2007, where the upper range of uncertainty is just pipping the lower 95% line of February 2016: The second one is February only. There was no other February that came anywhere close: Bob continues to peddle his zany notion that global warming is "natural" - that is, it's getting hotter because it's getting hotter. Also known as "it's magic". He stopped short of trying to claim that all the warming of the past 65 years is "magic" - he probably knows that would sound ridiculous, even to the climate hoaxers at WUWT. There was the usual nonsense being spouted, not just from Bob Tisdale. Despite being a regular on the world's most viewed climate website (not), Bloke down the pub hasn't learnt a thing. That's not surprising. One doesn't go to WUWT to learn about climate. He needs to spend some time on a proper climate website: March 30, 2016 at 6:35 am So the most that can be reasonably claimed is that temperatures continue to recover from the last ice age, and that we haven’t entered a new one, yet. Mark tells no-one in particular that everyone should ignore Bob Tisdale's tedious articles: March 30, 2016 at 7:24 am (excerpt) Global average temperature is meaningless. It’s propaganda, a measurement of global average temperature has no scientific value unless accurately knowing the global average temperature is the scientific goal. Other than that it is a useless value. Pamela Gray tells no-one in particular what is wrong with the fans of WUWT: March 30, 2016 at 6:50 am Belief trumps data. 
A person can even lose jobs over this established human trait. The data can demonstrate completely the opposite case, yet belief will triumph over it more often than not. Those who show data with detached examination are rare and often not well-liked. And frequently unemployed. I've saved the best or worst idea for last, though it came first. Marcus wants to condemn the incoming President to death by a zillion charts, but only if it's Ted Cruz or Donald Trump: March 30, 2016 at 5:53 am Bob T, when Trump or Cruz is elected as POTUS, I will put my vote in for you as nominee for Science Advisor to the president ! I am sure your vast knowledge will be needed to rid the U.S. of all the $#^& Holdren has covered the White House in…. ( Just because I’m half Canadian does not make me bias in my choice )..Never stop ! #### 37 comments: 1. Heh. Reading the comments they're split between it being natural and it being a conspiracy. You'd expect such a split to show up some in fighting but that doesn't seem to be the case. 2. I ran the numbers and got that there's about a 1.3 % chance that January 2007 was more anomalously warm than February 2016. (some approximations in my calcs but it should be close) 1. I got 5.4% using a median of 0.832 and 1.057 °C, standard deviations of 0.072 and 0.068 °C respectively. Assuming Gaussian distributions, the point of intersection is 0.948 °C with a cumulative normal distribution of 0.054 under each tail. 3. What I posted on WUWT, at least for now. "So after combing through the 1993 (one thousand nine hundred ninety-three) other months in the Jan. 1880-Feb. 2016 HADCRUT4 series, Mr. Tisdale identified a single month with a 5.4% statistical chance of having a higher temperature anomaly than February 2016 (and with that month occurring just nine years ago). Talk about grasping at a very, very thin straw." While Sou is correct that Aug 2015 - Jan 2016 also overlap with Feb 2016 at the 2 sigma level or greater, that period is inconvenient for Tisdale's desired conclusion, namely that recent months and years are nothing special. 1. Correction: Jan 1850 to date, not Jan 1880. 2. Of course he corrected for running 1993 statistical tests too. Right? ...uhhh, wrong. That never appears to have occurred to Bob. Should we help him by pointing out the odds of NOT finding a significant result while running 1993 independent tests at the .05 level? Or by showing him how to correct for autocorrelation to get an even better estimate? Nah. It's not like he'd understand if we did. 4. But the difference is inconvenient if you use the low uncertainty limit of Jan 2007 and the high limit of Feb 2016. I wonder why bobbyboy didn't do that comparison too... Kind of like comparing apples and giraffes. 5. At WUWT Nick Stokes also considers the probability of the 2007 'high tail' and the 2016 'low tail' events occurring jointly so that the 'true' value Jan 2007 > Feb 2016, which reduces the probability to P1*P2, or <<1%. This is one of those very basic points that you have to ponder for a while, but I think he's correct. 1. Did he correct for all tests he implicitly ran by cherrypicking? No. 2. I got my 1.3 % by doing standard Gaussian things: two distributions P1, P2 with mean m(P1), m(P2) and standard deviation s(P1), s(P2) you would get: s(P1-P2) = sqrt( s(P1)^2 + s(P2)^2 ) m(P1-P2) = m(P1) - m(P2) m(P1-P2)/s(P1-P2) gives you the z-score and then you can convert that to a one-tailed p value. This is early first year undergraduate stuff in a typical science course. 
I think it's simpler and more intuitive than the correction needed for cherry picking (which is also necessary, and squashes the p value). So I'd expect that any adult claiming moderate competence should be able to follow the calculation I did to come up with about 1 % chance that January 2007 was more anomalous than Feb 2016. 3. In, say, 150 coin flips is finding a series of 8 heads in a row a 1/2^16 probability? No. A standard early first year undergraduate exercise in stats is to have half the students in a class produce "random" distributions of coin flips by hand and the other half produce random distributions by actually flipping coins. It is trivially easy differentiate the two as random distributions do, in fact, contain "improbable" runs. 4. "Improbable" runs which are completely probable given the number chances, that is. 5. That's 1/2^8. Edited in midstream. 6. This comment has been removed by the author. 7. An interest article regarding this a few years ago Spotify had to alter their "random shuffle" algorithm because, as humans we are terrible at coping with true randomness http://www.bbc.com/news/technology-31302312 This inability to allow for natural randomness is another reason why conspiracies take hold How often do you hear from science deniers "what are the chances of that happening" 8. Yes. A truly empty, rhetorical statement. And further to this point I have NEVER heard a denier ask: "What are the odds of finding a significant trend in the years including and following a cherrypicked 3.1 sigma (p <.001) event which the 1998 el Nino was?" Or equivalently: What are the odds of seeing the true 50% heads and 50% tails distribution of a fair coin in a series of 30 coin flips if I intentionally start out with an observed sequence of 10 heads--a bit over a 3 sigma event as well? Depends on the meaning of "about", but the odds of seeing the true 50-50 distribution are low. The expectation is that you'll see a 67% H-33% T split. Such are the joys of cherrypicking for deniers. 9. jgnfld, "What are the odds of finding a significant trend in the years including and following a cherrypicked 3.1 sigma (p <.001) event which the 1998 el Nino was?" Interesting approach. My immediate objection is that, unlike coin flips, period-to-period temperature variability isn't independent of prior periods. Unfortunately my stats aren't good enough by far to account for autocorrelation with an appropriate model, or even opine whether not doing so would give a materially different answer over an interval as long as 1998 to present. 10. I should have read down further ... looks like I need to go read Tamino. 11. Tamino has addressed cherrypicking many times in the past along these same lines, though usually uses more graphic methods. Tamino, and a Durbin-Watson test as well, also note that autocorrelation is largely, though not completely, removed by going to an annual aggregation as I do here. Aggregating annually, and therefore ignoring the 2016 spike to date, RSS gives a trend of .12/decade and a resid error of .1367. 1998 gives a resid of .43039 making it a 3.1 sigma event. INTENTIONALLY, after the fact, then starting your analysis at 1998 simply because it gives you an insignificant run later is implicitly a multiple comparison of at minimum every possible run of 18 years (and likely worse as deniers are willing to find any run of any length that shows what they want). Therefore you must drastically adjust the alpha to account for this. 
Deniers never bother with this last step for obvious reason it would show the error of their "logic". See Bonferroni or Scheffe, for example, for specific adjustment procedures. Or just look at "the Escalator" which uses NASA data http://www.skepticalscience.com/graphics.php for a nice graphical example. 12. I should have mentioned that one could also construct an equivalent correction procedure that would correct the alpha probabilities (i.e., drastically widen the error bars due to multiple comparisons) in the cherrypicked coin flip case. 13. Something happened to a post. Briefly, by aggregating annually (thereby ending at 2015), autocorrelation largely disappears. Tamino mentions this and Durbin-Watson lag 1=p of .5. RSS gives .12/decade. Residual error= .1367. Residual of 1998 works out to 3.1 sigma. By INTENTIONALLY going inside and looking for significant/insignificant periods one must correct for the number of comparisons being made. Every 18 year sequence at a bare minimum in this case and likely more given denier penchant for accepting any negative evidence at all. This means you must drastically increase the alpha level to control for all the comparisons. Tamino makes same points re cherrypicking, though usually through graphical methods. 6. I am new at this site, but who is Bob Tisdale? Is he a real person, or is he just a psevdonym for "Team Tisdale," who consists of several persons with Scientific knowlegde, who always seems to have a ready answer to every Challenge the Deniers meet. The reason I ask is that the writing style and reasoning seems to differ from post to post. And beeing in the climatedebate for only one year, I feal the urge to develop a conspiracy theory my self. 1. Synpathy is not the right emotion, but Tisdale has accepted the desperately onerous and unpleasant job of being Anthony's Chemical Ali in the face of reality's merciless attacks (on all of us) from all directions. Humans often deny blatant evidence (NO I DO NOT NEED ANOTHER BLOODY COLNOSCOPY I HAD ONE IN '06 AND THIS BLOOD CAME FROM A SCRATCH - YES THE TOILET SEAT HAS A SHARP EDGE) The stress that Tisdale suffers is obvious and it's reality's fault but it's amplified by Hot Whopper never letting him get away with blaming the toilet seat. 7. Sou...your furry little friend marcus has been placed in moderation by the proprietor of that pseudo-science blog. He was the one who posted the first dog-whistle anti-semitic comment. And he seems to post a lot about Agenda 21 and other silly stuff regarding imaginary conspiracy theories. If I didn't know better I would swear he was a parody account. Let's see how long it takes for him to exit limbo... 8. No luck on that one, Emeritus. There was a conspiracy theory doing the circles about a year or two back that 'Bob Tisdale' was a pseudonym/sock puppet, but it turns out that it's his real name. He's still boring nonetheless, always churning out tl;dr 5000+ word screeds that essentially repeat the same thing over and over again: the warming is caused by heat moving around in the system. At least, that's our best appraisal of it. Ah well, I suppose I'm not much good at expressing pseudo-scientific concepts in terms that rational people can grok. But I'm sure that in Bob's head it all makes perfect sense. Anyway, for all the AGW deniers go on about the laws of thermodynamics being violated by AGW theory, perhaps someone should try to explain the 1st law to Bob. Hint for Bob: Earth ain't a closed system. 1. 
I have a hard time accepting that, on WUWT he has been appointet the science advisor to The Donald; "Bob T, when Trump or Cruz is elected as POTUS, I will put my vote in for you as nominee for Science Advisor to the president ! I am sure your vast knowledge will be needed to rid the U.S. of all the$#^& Holdren has covered the White House in…. ( Just because I'm half Canadian does not make me bias in my choice )..Never stop !" Have You seen his Picture, is he just another weather man, how can this person get all this influence without some competent apperatus. Forget the first and second Law of thermodynamics, that's far beoynd the impact area of WUWT. 9. I laughed until I hurt when I read Tisdale's article. seaice1 nailed him, and Tisdale reacted with some of the purest jabberwocky I've read in ages. 1. It was a bit depressing to see how quickly seaice1 got labelled a troll. 10. Just a heads-up to readers here, that Tamino has calculated the odds of both the upper value for 2007 and the lower value for 2015 coinciding as 0.013. 1. Bob insists on getting his education in public. He has a good teacher in Tamino, but I doubt whether Bob is up to the task of being a good student. 2. Looks like Tamino took my approach and got the same number. As jgnfld points out, even that massively overestimates the chance that Tisdale's right. 3. and as I posted one there, the chances are just the same that the Jan 2007 temp was in fact lower than measured and Feb 2016 was actually quite a bit higher than +1.057. 11. In all fairness, I see that Bob has now published an 'update' (albeit a little begrudgingly) where he acknowledges Nick Stokes' re-interpretation of the overlapping error range. Maybe it isn't a complete waste of effort to respond with logic to posts on WUWT? OK, yes, I'm probably dreaming. 1. It's good that Bob Tisdale's acknowledged he was wrong this time around. However despite what he claims (archived here), that's very rare. He's never published corrections to his claims that it's blobs and El Ninos that are causing global warming for example. And he lies when he says that "alarmists" don't correct mistakes. They are usually the only ones who do. 2. Pfft. Until Bob changes the headline, the 'correction' is of little value. The clown posse has moved on, anyway. 3. Nick, I think anyone still deserves encouragement for correcting previous mistakes. The update was also put clearly at the top of the article, so fair play. And I'm very surprised. 4. C'mon, show a bit of sympathy. With a typical lag of 4-5 months between the peak of El Nino, the peak in middle troposphere temperatures influenced by it, and the tendency of the latter to exaggerate the El Nino effect (compared with the surface datasets), I think we all know that the next few days will see the start of very hard times for those who have long held the satellite records to be the gold standard by which global warming should be judged. :-)
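For reference, the overlap arithmetic discussed in the comments is easy to reproduce. The sketch below treats the two monthly anomalies as independent Gaussian estimates, using the central values and standard deviations quoted in the comments above (treating the quoted uncertainties as one-sigma standard errors is my assumption), and asks how likely it is that January 2007 was really the warmer month:

```python
from math import erf, sqrt

def normal_cdf(z):
    # Standard normal cumulative distribution function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Anomaly estimates and (assumed Gaussian, independent) standard errors,
# taken from the figures quoted in the comment thread (deg C).
feb2016_mean, feb2016_sd = 1.057, 0.068
jan2007_mean, jan2007_sd = 0.832, 0.072

# Distribution of the difference D = Feb 2016 - Jan 2007
diff_mean = feb2016_mean - jan2007_mean
diff_sd = sqrt(feb2016_sd**2 + jan2007_sd**2)

# One-tailed probability that Jan 2007 was actually the warmer month (D < 0)
p = normal_cdf(-diff_mean / diff_sd)
print(f"P(Jan 2007 warmer than Feb 2016) ~ {p:.3f}")   # roughly 0.01, i.e. about 1%
```

This reproduces the "about 1%" order of magnitude quoted in the comments, and it is before any correction for the fact that the comparison month was cherry-picked from nearly two thousand candidates.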
## Online Multicommodity Routing with Time Windows

Please always quote using this URN: urn:nbn:de:0297-zib-9654

• We consider a multicommodity routing problem, where demands are released *online* and have to be routed in a network during specified time windows. The objective is to minimize a time- and load-dependent convex cost function of the aggregate arc flow. First, we study the fractional routing variant. We present two online algorithms, called Seq and Seq$^2$. Our first main result states that, for cost functions defined by polynomial price functions with nonnegative coefficients and maximum degree $d$, the competitive ratio of Seq and Seq$^2$ is at most $(d+1)^{d+1}$, which is tight. We also present lower bounds of $(0.265\,(d+1))^{d+1}$ for any online algorithm. In the case of a network with two nodes and parallel arcs, we prove a lower bound of $(2-\frac{1}{2} \sqrt{3})$ on the competitive ratio for Seq and Seq$^2$, even for affine linear price functions. Furthermore, we study resource augmentation, where the online algorithm has to route less demand than the offline adversary. Second, we consider unsplittable routings. For this setting, we present two online algorithms, called U-Seq and U-Seq$^2$. We prove that for polynomial price functions with nonnegative coefficients and maximum degree $d$, the competitive ratio of U-Seq and U-Seq$^2$ is bounded by $O(1.77^d\,d^{d+1})$. We present lower bounds of $(0.5307\,(d+1))^{d+1}$ for any online algorithm and $(d+1)^{d+1}$ for our algorithms. Third, we consider a special case of our framework: online load balancing in the $\ell_p$-norm. For the fractional and unsplittable variants of this problem, we show that our online algorithms are $p$-competitive and $O(p)$-competitive, respectively. Such results were previously known only for scheduling jobs on restricted (un)related parallel machines.
# Confidence ellipse in PCA

Is there a good explanation of the confidence ellipse used in PCA which briefly describes how to interpret the result? For example, I am looking at the following cases:

• Some points are not included in the ellipse. What does that mean? What is the difference between only a few observations (individuals) falling outside the ellipse and a large number of them doing so?
• What does the angle of the ellipse reveal? For example, what is the difference between a horizontal ellipse (stretched along PC1), a tilted ellipse, and a vertical ellipse (stretched along PC2)?
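There is no single canonical "confidence ellipse" in PCA, but one common construction treats the scores on PC1 and PC2 as approximately bivariate normal and draws the contour expected to enclose about 95% of that distribution, i.e., a Mahalanobis radius set by the chi-squared quantile with two degrees of freedom. A minimal sketch under those assumptions, using NumPy/SciPy and synthetic data (the data and the 95% level are arbitrary choices for illustration):

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
X = rng.multivariate_normal([0, 0, 0], np.diag([4.0, 1.0, 0.2]), size=300)  # toy data

# PCA via SVD of the centred data
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T[:, :2]            # projections onto PC1 and PC2

# 95% ellipse for the scores, assuming they are roughly bivariate normal
cov = np.cov(scores, rowvar=False)
evals, evecs = np.linalg.eigh(cov)
r = np.sqrt(chi2.ppf(0.95, df=2))    # Mahalanobis radius containing ~95% of the mass
theta = np.linspace(0, 2 * np.pi, 200)
circle = np.column_stack([np.cos(theta), np.sin(theta)])
ellipse = scores.mean(axis=0) + (circle * r * np.sqrt(evals)) @ evecs.T  # boundary, ready to plot

# Fraction of points inside the ellipse (close to 0.95 when the scores are Gaussian)
centered = scores - scores.mean(axis=0)
d2 = np.einsum('ij,jk,ik->i', centered, np.linalg.inv(cov), centered)
print("fraction inside:", np.mean(d2 <= chi2.ppf(0.95, df=2)))
```

Under this reading, a handful of points outside the ellipse is expected (about 5% if the scores really were Gaussian), whereas a large fraction outside suggests the scores are far from bivariate normal, due to heavy tails, outliers, or mixed subgroups. The tilt of the ellipse reflects the covariance between the plotted scores: for the full score matrix PC1 and PC2 are uncorrelated by construction, so a strongly tilted ellipse usually means it was fitted to a subgroup of observations whose PC1 and PC2 scores co-vary.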
# Sum of binomial distribution with different probabilities Pr ( K = k ) In probability theory and statistics, the binomial distribution with parameters n and p is the discrete probability distribution of the number of successes in a sequence of n independent experiments, each asking a yes–no question, and each with its own boolean-valued outcome: a random variable containing single bit of The binomial sum variance inequality states that the variance of the sum of binomially distributed random variables will always be less than or equal to the variance of a binomial variable with the same n and p parameters. FOR THE OFFICE OF NAVAL RESEARCH. 1 Introduction 1 1. NO0014-92-J-1264 (NR-042-267). probabilities using the binomial distribution 𝑛=1 and so the sum of the probabilities is at the shape of different binomial distributions and discussing What is the difference between a normal and binomial distribution? Statistics Binomial and Geometric Distributions Calculating Binomial Probabilities. Author(s) David M. The difference between a hypergeometric distribution and a binomial distribution is Bernoulli / Binomial: The sum Examples of Different Experiments Binomial: The Negative Binomial Distribution An analytical approximation for binomial probabilities when n is large and Computational problem. sum of binomial distribution with different probabilities the distribution looks no different than the normal distribution. It describes the outcome of n independent trials in an experiment. BINOMIAL RANDOM VARIABLES ! Ken Butler. the binomial distribution takes on different shares. two random variables with different cumulative distribution functions cannot Sum of 'the first k' binomial coefficients for fixed n . Where sampling without replacement Describes the main properties of the binomial distribution and how to use it to perform statistical analyses in Excel. Thus AD-A266 969 THE DISTRIBUTION OF A SUM OF In this paper we examine the distribution of a sum S of binomial random variables, each with different success probabilities. 5) sum(dbinom(46 The binomial distribution is a discrete probability distribution. Reproduction in whole or in part is permitted for any purpose of In probability theory and statistics, the binomial distribution with parameters n and p is the discrete probability distribution of the number of successes in a sequence of n independent experiments, each asking a yes–no question, and each with its own boolean-valued outcome: a random variable containing single bit of The binomial sum variance inequality states that the variance of the sum of binomially distributed random variables will always be less than or equal to the variance of a binomial variable with the same n and p parameters. the exact probabilities from the binomial distribution. I use the following paper The Distribution of a Sum of Binomial for different versions of Jun 27, 2016 · Binomial Distribution Cumulative we compute 3 individual probabilities, using the binomial The sum of all these probabilities is the The distribution of a sum S of independent binomial random variables, each with different success probabilities, is discussed. An approximation based on The distribution of a sum Sof independent binomial random variables, each with different success probabilities, is discussed. (1993). Enter the number of Successes, x, sum sum Sum YES PROB DIST Probability For Dummies Cheat Sheet. Working Subscribe Subscribed Statistics Chapter 5 Learn with flashcards, The sum of all the probabilities must equal 1 2. 
I'm not aware of a closed formula to exist. e. An efficient algorithm is given to calculate the exact distribution byTHE DISTRIBUTION OF A SUM OF. Oct 16, 2013 · Sum of binomial distribution Anish Turlapaty. Introduction to binomial probability distribution, binomial nomenclature, The sum of all these probabilities is the answer we seek. two random variables with different cumulative distribution functions cannot The binomial probability distribution function, The probabilities in the top plot sum to 1, > Bayes for Beginners: Probability and Likelihood. Let X1 and X2 be inde- pendent binomial random variables where Xi has a Binomial(ni,p) distribu- tion for i = 1,2. Get smarter on #sum_(k=0)^(3)=color(red in any binomial distribution, Posts about Independent Sum The negative binomial probabilities sum to i. Let’s start of with the tossing of a coin calling one outcome H, for heads and the other T for tails. My goal is approximate the distribution of a sum of binomial variables. Sum of independent Binomial random variables with different probabilities. 147, because we are multiplying two 0. See the binomial sum variance inequality. The generalized binomial distribution with size=$c (and, in case, with different$ni$): Z=$\sum Zi$, The probabilities for "two chickens" all work out to be 0. The binomial distribution model is an 4 or 5, and the sum of the probabilities of This lesson describes three rules of probability (i. The mean and variance of the Binomial distribution Different values of We can calculate each of these probabilities using the Binomial probability for- This is not a binomial distribution The multinomial distribution arises from an extension of the binomial experiment to The probabilities, Find the mean and standard deviation of a binomial distribution; so that heads and tails have different probabilities. As long as none of the success probabilities are equal to one, one can calculate the probability of k successes using the recursive formula. 147, to see the Binomial Distribution in action. The negative binomial distribution is a discrete distribution with two parameters and where and . Here is an excerpt from the Wikipedia page. binomial, Poisson, Jun 13, 2011 · ‘Binomial Distribution’ is the sum of independent and evenly distributed ‘Bernoulli Trials’. In probability theory and statistics, the sum of independent binomial random variables is itself a binomial random variable if all the component variables share the same success probability. An efficient algorithm is given to More Sum Of Binomial Distribution With Different Probabilities videos In probability theory and statistics, the binomial distribution with parameters n and p is the discrete probability distribution of the number of successes in a$$\Pr(X=x) = \sum\limits_{A\in F_x} S. My question is what to do if the trial probabilities change Mar 25, 2008 · Can you explain the difference between normal and binomial distribution? sum of any sample will be probabilities using the normal distribution THE BINOMIAL DISTRIBUTION & PROBABILITY • ΣP(X=rk) means the sum of the probabilities for all values of r, The binomial distribution is then written X~B I know how to do a standard binomial distribution in python where probabilities of each trial is the same. My question is what to do if the trial probabilities change Find the mean and standard deviation of a binomial distribution; so that heads and tails have different probabilities. Prerequisites. 
An efficient algorithm is given to calculate the exact distribution by convolution in "The Distribution of a Sum of Binomial Random Variables" by Ken Butler and Michael Stephens (Technical Report No. 467, April 28, 1999, prepared under contract), which discusses the distribution of a sum $S$ of independent binomial random variables, each with a different success probability, and examines several approximations. Brute-force enumeration of the joint outcome space is not practical, since it can contain over $10^{20}$ elements even for moderate problem sizes.

The key facts are:

* A binomial random variable $X$ with parameters $n$ and $p$ counts the number of successes in $n$ independent trials, each succeeding with probability $p$; its probability mass function is $P(X = k) = \binom{n}{k} p^k (1-p)^{n-k}$. These probabilities sum to 1, and cumulative probabilities such as $P(X \le 3)$ are obtained by summing the individual terms. The mean is the number of trials times the success probability, $np$.
* If $X_1, X_2, \ldots, X_k$ are independent binomial variables with the *same* success probability $p$, their sum is again binomial: $X_1 \sim B(n_1, p)$ and $X_2 \sim B(n_2, p)$ give $X_1 + X_2 \sim B(n_1 + n_2, p)$.
* If the success probabilities differ, the distribution of the sum is *not* binomial. For $n$ independent Bernoulli trials with success probabilities $p_1, \ldots, p_n$, the sum follows the Poisson binomial distribution; see also the binomial sum variance inequality.
* The Poisson binomial distribution has mean $\sum_i p_i$ and variance $\sum_i p_i (1 - p_i)$. When $n$ is large, the central limit theorem justifies approximating the sum by a normal distribution with this mean and variance; the exact distribution is obtained by convolving the individual probability mass functions, which is far more efficient than enumerating outcomes.
* A related discrete distribution that comes up in the same context is the negative binomial, whose $r = 1$ case is the geometric distribution.
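To make the convolution approach concrete, here is a minimal Python sketch (NumPy only). It illustrates the idea rather than reproducing the report's reference implementation; the function name and the example probabilities are invented for the demonstration.

```python
import numpy as np

def poisson_binomial_pmf(ps):
    """Exact PMF of S = X_1 + ... + X_n with X_i ~ Bernoulli(p_i), independent.

    Built by repeatedly convolving the two-point PMFs [1 - p_i, p_i]."""
    pmf = np.array([1.0])                 # PMF of the empty sum: P(S = 0) = 1
    for p in ps:
        pmf = np.convolve(pmf, [1.0 - p, p])
    return pmf                            # pmf[k] = P(S = k)

if __name__ == "__main__":
    # Equal probabilities: the result is an ordinary binomial PMF.
    print(poisson_binomial_pmf([0.5] * 4))        # 1/16, 4/16, 6/16, 4/16, 1/16

    # Different probabilities: Poisson binomial, no longer binomial.
    ps = [0.1, 0.4, 0.7]
    pmf = poisson_binomial_pmf(ps)
    print(pmf, pmf.sum())                          # probabilities still sum to 1

    # Normal approximation uses mean sum(p_i) and variance sum(p_i * (1 - p_i)).
    mean = sum(ps)
    var = sum(p * (1 - p) for p in ps)
    print(mean, var)
```

The equal-probability case can be cross-checked against scipy.stats.binom.pmf if SciPy is available, but the plain convolution already reproduces the textbook values.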
# Definition:Braid Group

## Description

The braid group is a group that has an intuitive geometric interpretation as a number of strands, where the group operation on these strands is to intertwine them.

## Generators

The generators of the braid group are elements $\sigma_i$, which intertwine strands $i$ and $i+1$ in such a way that strand $i$ runs above strand $i+1$.

## Definition

The braid group on $n$ strands is generated by $\sigma_1, \sigma_2, \ldots, \sigma_{n-1}$ and the following relations:

1. $\sigma_i \sigma_j = \sigma_j\sigma_i, \forall i, j: |i-j| \ge 2$
2. $\sigma_i \sigma_{i+1} \sigma_i = \sigma_{i+1} \sigma_i \sigma_{i+1}, \forall i \in \{ 1, 2, \ldots, n-2 \}$

## Examples

Generator $\sigma_i$ and the inverse generator $\sigma_i^{-1}$ act on strands $s_i, s_{i+1}$ by crossing one over the other. Relations 1. and 2. can each be pictured as an equality of braid diagrams (diagrams omitted here): relation 1. says that crossings involving disjoint pairs of strands can be performed in either order, and relation 2. is the move on three consecutive strands.
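As a quick computational sanity check of relations 1. and 2., the sketch below verifies them in the unreduced Burau matrix representation of $B_4$ using SymPy. This is only an illustration in one particular (standard) matrix representation, not part of the definition above; the helper name `burau_generator` is ours.

```python
import sympy as sp

t = sp.symbols('t')

def burau_generator(i, n):
    """Unreduced Burau matrix of sigma_i in the braid group B_n (1 <= i <= n-1)."""
    M = sp.eye(n)
    M[i - 1, i - 1] = 1 - t
    M[i - 1, i] = t
    M[i, i - 1] = 1
    M[i, i] = 0
    return M

n = 4
s = {i: burau_generator(i, n) for i in range(1, n)}

# Relation 1: far-apart generators commute (|i - j| >= 2).
assert sp.expand(s[1] * s[3] - s[3] * s[1]) == sp.zeros(n, n)

# Relation 2: the braid relation on neighbouring generators.
assert sp.expand(s[1] * s[2] * s[1] - s[2] * s[1] * s[2]) == sp.zeros(n, n)

print("Both defining relations hold in the Burau representation of B_4.")
```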
# Why is the subjunctive used in “Paso por una farmacia que *esté* abierta y te la compro”?

De camino a casa paso por una farmacia que esté abierta y te la compro.

(Roughly: "On my way home I'll stop by a pharmacy that's open and buy it for you.")

Assuming the person goes by this pharmacy every day and it is always open at that time, I'm perplexed as to why the subjunctive form of the verb estar is being used.

• If you assume that, the sentence is absurd. The sentence means that this person is not thinking about a specific pharmacy; any open one will do. – Gorpik Jul 31 '19 at 8:54
• If the person knew about this pharmacy in particular, then they would say la farmacia or esa farmacia or refer to it by its name, i.e. they'd point to a definite pharmacy, and they'd then omit the hypothetical que esté abierta. – pablodf76 Jul 31 '19 at 11:00
• Pharmacies tend to have a rotating schedule for being open after hours; in some towns, pharmacies are somewhat clustered in a commercial district. The speaker apparently doesn't know yet which pharmacy he will visit. – aparente001 Jul 31 '19 at 13:23
# Connectedness of the join of two spaces

Let $$X$$ be an $$n$$-connected space and $$Y$$ be an $$m$$-connected space. How can I prove that the join $$X*Y$$ is $$(n+m+1)$$-connected? I thought that homotopy excision would do the trick, but it does not seem so.

• If $X$, $Y$ are CW, then there is an obvious CW structure on $X\ast Y$, and once you have determined this you can use cellular methods to decide why the connectivity statement is true. – Tyrone Nov 15 '18 at 11:26
• I'm afraid I don't quite get your point. Wouldn't the CW structure on the join contain $X$ and $Y$ as subcomplexes? What results are you referring to? – user09127 Nov 20 '18 at 13:22
• The join of two CW complexes $X,Y$ is a quotient of $X\times I\times Y$ by a certain relation. Take the product CW structure on $X\times I\times Y$ and then figure out which cells you need to quotient out. You can also use the fact that $X\ast Y$ is the pushout of the inclusions $X\times CY\leftarrow X\times Y\rightarrow CX\times Y$ to get a CW structure. The end result is that the cells of $X\ast Y$ are the joins of the cells of $X$ and $Y$. I'll leave you to figure out what the joins $D^n\ast D^m$ and $S^{n-1}\ast S^{m-1}$ are. – Tyrone Nov 20 '18 at 13:47
• If $X$ is $n$-connected, and $Y$ is $m$-connected, then you'll see that the first cell of $X\ast Y$ above dimension $0$ that you need to worry about is $e^n\ast e^m$, so you can figure out the connectivity of $X\ast Y$ from, say, cellular homology, depending on what you are confident with. – Tyrone Nov 20 '18 at 13:49
• To move to the general case use CW approximation and the functoriality of the join construction. – Tyrone Nov 20 '18 at 13:50
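For reference, the joins mentioned in the comments can be identified explicitly. The following is a short aside stating standard facts without proof, not part of the original thread:

$$\Delta^p \ast \Delta^q \cong \Delta^{p+q+1}, \qquad D^p \ast D^q \cong D^{\,p+q+1}, \qquad S^{p-1} \ast S^{q-1} \cong S^{\,p+q-1},$$

so the join of a $p$-cell and a $q$-cell is a $(p+q+1)$-cell attached along the join of their boundary spheres. For well-pointed CW spaces one also has

$$X \ast Y \simeq \Sigma\,(X \wedge Y),$$

which reduces the connectivity of the join to that of the suspension of a smash product.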
## CryptoDB ### Paper: Bloom Filter Encryption and Applications to Efficient Forward-Secret 0-RTT Key Exchange Authors: David Derler Kai Gellert Tibor Jager Daniel Slamanig Christoph Striecks DOI: 10.1007/s00145-021-09374-3 Search ePrint Search Google Forward secrecy is considered an essential design goal of modern key establishment (KE) protocols, such as TLS 1.3, for example. Furthermore, efficiency considerations such as zero round-trip time (0-RTT), where a client is able to send cryptographically protected payload data along with the very first KE message, are motivated by the practical demand for secure low-latency communication. For a long time, it was unclear whether protocols that simultaneously achieve 0-RTT and full forward secrecy exist. Only recently, the first forward-secret 0-RTT protocol was described by Günther et al. ( Eurocrypt , 2017). It is based on puncturable encryption. Forward secrecy is achieved by “puncturing” the secret key after each decryption operation, such that a given ciphertext can only be decrypted once (cf. also Green and Miers, S&P 2015). Unfortunately, their scheme is completely impractical, since one puncturing operation takes between 30 s and several minutes for reasonable security and deployment parameters, such that this solution is only a first feasibility result, but not efficient enough to be deployed in practice. In this paper, we introduce a new primitive that we term Bloom filter encryption (BFE), which is derived from the probabilistic Bloom filter data structure. We describe different constructions of BFE schemes and show how these yield new puncturable encryption mechanisms with extremely efficient puncturing. Most importantly, a puncturing operation only involves a small number of very efficient computations, plus the deletion of certain parts of the secret key, which outperforms previous constructions by orders of magnitude. This gives rise to the first forward-secret 0-RTT protocols that are efficient enough to be deployed in practice. We believe that BFE will find applications beyond forward-secret 0-RTT protocols. ##### BibTeX @article{jofc-2021-31781, title={Bloom Filter Encryption and Applications to Efficient Forward-Secret 0-RTT Key Exchange}, journal={Journal of Cryptology}, publisher={Springer}, volume={34}, doi={10.1007/s00145-021-09374-3}, author={David Derler and Kai Gellert and Tibor Jager and Daniel Slamanig and Christoph Striecks}, year=2021 }
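As a rough illustration of the mechanism sketched in the abstract (a secret key with one component per Bloom filter position, and puncturing implemented by deleting the components selected by a tag's hash values), here is a toy Python mock-up. It models only the bookkeeping of which key components survive, not the actual encryption or the paper's construction; the class and method names are invented, and SHA-256-based indexing stands in for whatever hash functions a real scheme would specify.

```python
import hashlib

class ToyBloomKey:
    """Toy model of Bloom-filter-style puncturing. NOT a cryptographic scheme."""

    def __init__(self, m=64, k=3):
        self.m, self.k = m, k
        # One placeholder "key component" per Bloom filter position.
        self.components = {i: f"sk_{i}" for i in range(m)}

    def _positions(self, tag):
        # k indices derived from the tag (stand-in for k independent hash functions).
        return [int(hashlib.sha256(f"{j}:{tag}".encode()).hexdigest(), 16) % self.m
                for j in range(self.k)]

    def puncture(self, tag):
        # Forward secrecy idea: delete the key components this tag points to.
        for i in self._positions(tag):
            self.components.pop(i, None)

    def can_decrypt(self, tag):
        # Decryption works as long as at least one addressed component survives.
        return any(i in self.components for i in self._positions(tag))

key = ToyBloomKey()
assert key.can_decrypt("session-42")
key.puncture("session-42")
assert not key.can_decrypt("session-42")     # that tag can never be decrypted again
print(key.can_decrypt("session-43"))          # usually True; False would be a Bloom false positive
```

In this toy model, the residual chance that a fresh tag is already undecryptable corresponds to the Bloom filter's false-positive probability, which in the real construction appears as a small, tunable correctness error.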
# Function problem (kernel, surjective & injective function)

I have no idea how to solve this problem: Let $\psi: (\mathbb{R} \to \mathbb{N}) \to P(P(\mathbb{R}))$ be defined as $\psi (f) = \mathbb{R}/_{ker(f)}$.

(1) Is this a surjective function?
(2) Is this an injective function?

I tried to do something, but after hours I can't see any progress... I just don't know where/how to start solving such a problem. And I'm not sure if I properly understand the problem itself; I'm sure that your explanation would be helpful.

- You seem to have misstated your question, for example $f$ is probably a homomorphism, and I can't see why the image should be $P(P(N))$. – Yuval Filmus Jan 2 '11 at 5:14
- As for (2), consider $f$ vs. $2f$. – Yuval Filmus Jan 2 '11 at 5:14
- As for (1), the idea is probably that the image of $\psi$ is not arbitrary, but a proper statement of the question is required here. – Yuval Filmus Jan 2 '11 at 5:15
- You should explain your notation. What is P(P(R))? What's ker f (this could be answered by saying what type of maps you're looking at from R to N)? What's R/ker f? Is this a quotient of groups? – Eric O. Korman Jan 2 '11 at 14:13
- Is $R/\ker f$ the set of all conjugacy classes? This would be a member of $P(P(R))$. It will only make sense if $f$ is a homomorphism. – Yuval Filmus Jan 2 '11 at 17:16
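On the usual set-map reading of $ker(f)$ as the equivalence relation $\{(x,y) : f(x) = f(y)\}$ (one of the interpretations debated in the comments above), $\psi(f)$ is the partition of the domain into the nonempty fibers $f^{-1}(\{y\})$. The small Python illustration below, on a finite stand-in for $\mathbb{R}$, is only meant to make that construction tangible and to show one reason $\psi$ cannot be injective: all constant maps give the same one-block partition. The function names are ours.

```python
def fibers(f, domain):
    """psi(f): the set of nonempty fibers f^{-1}({y}), i.e. the partition domain/ker(f)."""
    blocks = {}
    for x in domain:
        blocks.setdefault(f(x), set()).add(x)
    return {frozenset(b) for b in blocks.values()}

D = list(range(-3, 4))        # finite toy domain standing in for R

print(fibers(abs, D))         # blocks {-k, k}: the partition induced by ker(|.|)

const0 = lambda x: 0
const1 = lambda x: 1
print(fibers(const0, D) == fibers(const1, D))   # True: two different maps, same image under psi
```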
## anonymous one year ago How many ounces of trail mix are in a bag that weighs 0.5675 kilograms? Input only numeric values. (1 pound = 0.454 kg and 1 pound = 16 ounces) hint: the number of pounds is: $\frac{{0.5675}}{{0.454}} = ...?$
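Following the hint, the full computation (using the rounded conversion factors given in the problem) is:

$$\frac{0.5675\ \text{kg}}{0.454\ \text{kg/lb}} = 1.25\ \text{lb}, \qquad 1.25\ \text{lb} \times 16\ \text{oz/lb} = 20\ \text{oz}.$$

So the bag holds 20 ounces of trail mix.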
SERVING THE QUANTITATIVE FINANCE COMMUNITY • 1 • 2 Cuchulainn Posts: 62608 Joined: July 16th, 2004, 7:38 am Location: Amsterdam Contact: ### My career = dry bones? :( Some previous encounters:*Be careful with these scams (419) Step over the gap, not into it. Watch the space between platform and train. http://www.datasimfinancial.com http://www.datasim.nl Anthis Posts: 4313 Joined: October 22nd, 2001, 10:06 am ### My career = dry bones? :( QuoteOriginally posted by: ppauperoh, and I just heard from Niger:QuoteMy dear I am writing this mail with tears, sadness and pains. I know it will come to you as a suprise since we haven't known or come across each other before, but kindly bear with me at this moment. I have a special reason why I decided to contact you. My situation at hand is miserable but I trust in God and hope you will be of my help. My name is Haniya Ibrahim Bare Mainassara 25years old girl and I held from Republic of Niger the daughter of Late General Ibrahim Bare Maïnassara the former President of the Republic of Niger who was ambushed and killed by dissident soldiers at the military airport in the capital, Niamey with his driver and a former Prefect. You can see more detail about my late father here http://news.bbc.co.uk/onthisday/hi/date ... 463927.stm I am constrained to contact you because of the maltreatment which I am receiving from my step mother. She planned to take away all my late father's treasury and properties from me since the unexpected death of my beloved Father. Meanwhile I wanted to travel to Europe, but she hide away my international passport and other valuable documents. Luckily she did not discover where I kept my father's File which contained important documents. I am presently staying in the Mission camp in Burkina Faso.I am seeking for longterm relationship and investment assistance. My father of blessed memory deposited the sum of US$17.7 Million in one bank in Burkina Faso with my name as the next of kin. I had contacted the Bank to clear the deposit but the Branch Manager told me that being a refugee, my status according to the local law does not authorize me to carry out the operation. However, he advised me to provide a trustee who will stand on my behalf. I had wanted to inform my stepmother about this deposit but I am affraid that she will not offer me anything after the release of the money. Therefore, I decide to seek for your help in transferring the money into your bank account while I will relocate to your country and settle down with you. I have my fathers death certificate and the account number which I will give you as soon as you indicated your interest to help me.It is my intention to compensate you with 20% of the total money for your assitance and the balance shall be my investment in any profitable venture which you will recommend to me as have no any idea about foreign investment. Please all communications should be through this email address only for confidential purposes.Thanking you alot in anticipation of your quick response. I will send you my photos in my next email.Yours Sincerely Haniya IbrahimI'm going to get 20% of$17.7Million (that's \$3.9M !) just for letting this poor girl wire the money to my account !If she's hot, maybe we'll hook up and I'll get to keep all the money !LOLJudgment question:What refugee camp in Africa has internet connection? toolbox Topic Author Posts: 3 Joined: November 26th, 2008, 4:15 pm ### My career = dry bones? :( Cheers All!Really appreciate the replies. 
I've decided to squeeze in the CQF course in early 2010 halfway through my MSc. Do you reckon that will give me a shot at becoming a Quantitative Trader (yes i am a bit of a dreamer!), figured on the job experience will help understand requirements perfectly though i intend to pursue Quant development for atleast two years initially.PhD is a certainty if all doesn't work though.Oh and nothing wrong with tech savvy Nigerians who understand international banking and i don't send dodgy emails and also am against nepotism, corruption, all the bad stuff. It would ruin my potential career if i am even assumed to be involved in such. migalley Posts: 3696 Joined: June 13th, 2005, 10:54 am ### My career = dry bones? :( QuoteOriginally posted by: toolboxOh and nothing wrong with tech savvy Nigerians who understand international banking and i don't send dodgy emails and also am against nepotism, corruption, all the bad stuff. It would ruin my potential career if i am even assumed to be involved in such.But political influence is OK? KackToodles Posts: 4100 Joined: August 28th, 2005, 10:46 pm ### My career = dry bones? :( QuoteOriginally posted by: ppauperwe all seem to be getting good news from Nigeria ! I'm glad that Nigerian email has a good brand name we can trust. Not like those emails from Kenya claiming "my cousin was recently elected President of the United States and I am trying to raise money to send my grandma to see the inauguration ceremony." Yeah, right. Last edited by KackToodles on December 1st, 2008, 11:00 pm, edited 1 time in total. toolbox Topic Author Posts: 3 Joined: November 26th, 2008, 4:15 pm ### My career = dry bones? :( UPDATE!Ok so i've also decided to through in a BSc in Mathematics and Statistics (Applied route) with the Open University while doing the part time MSc QF. i should get some credits off due to my previous degree meaning i could finish the BSc same time as the MSc in two years time if alll goes according to plan (and i do realise this will involve phenomenal hard work and huge personal sacrifices). Do you reckon this will have a significant impact or should i just stick to the MSc alone and through in a CQF during the second year? quantwannabe2 Posts: 16 Joined: September 22nd, 2008, 12:07 pm ### My career = dry bones? :( Stick to the Msc and find a job first.You really need to know minim re: financial theory / pricing background to be a quant developer, all you need is C++.And the reality is you won't get a pure Quant job, the best you can probably aim for is QuantDeveloper, and let's be honest, you don't need alot financial knowledge to be a QuantDev,it's all just programming... skh Posts: 53 Joined: April 28th, 2008, 2:22 pm ### My career = dry bones? :( QuoteOriginally posted by: toolboxUPDATE!Ok so i've also decided to through in a BSc in Mathematics and Statistics (Applied route) with the Open University while doing the part time MSc QF. man, this has got to be the dumbest idea ever. if you start out with a plan like this, you are guaranteed to fail. do things one step at a time, and after you have accomplished something go for the next thing.
# Laboratoire de Physique Corpusculaire de Caen

## Publications (more recent)

### [in2p3-01301864] Neutron star radii and crusts: uncertainties and unified equations of state
6 October 2016
The uncertainties in neutron star (NS) radii and crust properties due to our limited knowledge of the equation of state (EOS) are quantitatively analysed. We first demonstrate the importance of a unified microscopic description for the different baryonic densities of the star. If the pressure (...)

### [in2p3-01348905] A constrained-path quantum Monte-Carlo approach for the nuclear shell model
30 July 2016
A new QMC approach for the shell model yielding nearly exact spectroscopy of nuclei is presented. The originality of the formalism lies in the use of a variational symmetry-restored wave function to ‘steer’ the Brownian motion, and to control the sign/phase problem that generally makes the (...)

### [in2p3-01340233] Clustering effects in fusion evaporation reactions with light even-even N=Z nuclei. The 24Mg and 28Si cases
2 July 2016
In the recent years, cluster structures have been evidenced in many ground and excited states of light nuclei [1, 2]. The decay of highly excited states of 24Mg is studied in fusion evaporation events completely detected in charge in the reactions 12C+12C and 14N+10B at 95 and 80 MeV (...)

### [in2p3-01340539] Impact of pairing on thermodynamical properties of stellar matter
2 July 2016
Superfluidity in the crust is a key ingredient for the cooling properties of proto-neutron stars. Investigations on crust superfluidity carried out so far typically assumed that the cluster component was given by a single representative nucleus and did not consider the fact that at finite (...)

### [in2p3-01334165] Constrained-path quantum Monte Carlo approach for non-yrast states within the shell model
21 June 2016
The present paper intends to present an extension of the constrained-path quantum Monte Carlo approach allowing to reconstruct non-yrast states in order to reach the complete spectroscopy of nuclei within the interacting shell model. As in the yrast case studied in a previous work, the (...)

### [in2p3-01269560] Hyperons in neutron stars and supernova cores
21 April 2016
The properties of compact stars and their formation processes depend on many physical ingredients. The composition and the thermodynamics of the involved matter is one of them. We will investigate here uniform strongly interacting matter at densities and temperatures, where potentially other (...)

### [in2p3-01293418] The 12C* Hoyle state in the inelastic 12C + 12C reaction and in 24Mg* decay
26 March 2016
The reaction 12C + 12C at 95 MeV has been studied at the Legnaro Laboratories of INFN with the GARFIELD + RCo apparatus. Data have been analyzed in order to investigate the decay of the Hoyle state of 12C*. Two different data selections have been made. The first one corresponds to peripheral (...)

### [in2p3-01275770] Analytical mass formula and nuclear surface properties in the ETF approximation.
Part I: symmetric nuclei
19 February 2016
The problem of the determination of the nuclear surface and surface symmetry energy is addressed in the framework of the Extended Thomas Fermi (ETF) approximation using Skyrme functionals. We propose an analytical model for the density profiles with variationally determined diffuseness (...)

### [in2p3-01213940] Liquid-gas phase transition in strange hadronic matter with relativistic models
16 February 2016
Background: The advent of new dedicated experimental programs on hyperon physics is rapidly boosting the field, and the possibility of synthetizing multiple strange hypernuclei requires the addition of the strangeness degree of freedom to the models dedicated to nuclear structure and nuclear (...)

### [in2p3-01226400] Modification of magicity toward the dripline and its impact on electron-capture rates for stellar core collapse
16 February 2016
The importance of microphysical inputs from laboratory nuclear experiments and theoretical nuclear structure calculations in the understanding of the core collapse dynamics, and the subsequent supernova explosion, is largely recognized in the recent literature. In this work, we analyze the (...)

### [in2p3-01243596] Finite-size effects on the phase diagram of the thermodynamical cluster model
16 December 2015
The thermodynamical cluster model is known to present a first-order liquid-gas phase transition in the idealized case of an uncharged, infinitely extended medium. However, in most practical applications of this model, the system is finite and charged. In this paper we study how the phase (...)

### [in2p3-01244049] Hyperons in neutron star matter within relativistic mean-field models
16 December 2015
Since the discovery of neutron stars with masses around 2M⊙ the composition of matter in the central part of these massive stars has been intensively discussed. Within this paper we will (re)investigate the question of the appearance of hyperons. To that end we will perform a parameter study (...)

### [in2p3-01232713] Impact of pairing effects on thermodynamical properties and clusterization of stellar matter
25 November 2015
Superfluidity in the crust is a key ingredient for the cooling properties of proto-neutron stars. Investigations on crust superfluidity carried out so far typically assumed that the cluster component was given by a single representative nucleus and did not consider the fact that at finite (...)

### [in2p3-01163153] Unified treatment of subsaturation stellar matter at zero and finite temperature
24 November 2015
The standard variational derivation of stellar matter structure in the Wigner-Seitz approximation is generalized to the finite temperature situation where a wide distribution of different nuclear species can coexist in the same density and proton fraction condition, possibly out of (...)

### [in2p3-01226404] Heat capacity of the neutron star inner crust within an extended nuclear statistical equilibrium model
24 November 2015
Background: Superfluidity in the crust is a key ingredient for the cooling properties of proto-neutron stars. Present theoretical calculations employ the quasiparticle mean-field Hartree-Fock-Bogoliubov theory with temperature-dependent occupation numbers for the quasiparticle states. Purpose: (...)

### [in2p3-01226419] Clustering effects in fusion evaporation reactions with light even-even N = Z nuclei.
The 24Mg and 28Si cases
10 November 2015
In the recent years, cluster structures have been evidenced in many ground and excited states of light nuclei [1, 2]. Within the currently ongoing experimental campaign by the NUCL-EX collaboration we have measured the 12C+12C and 14N+10B reactions at 95 MeV and 80 MeV respectively, and (...)

### [in2p3-01226192] Intertwined orders from symmetry projected wavefunctions of repulsively interacting Fermi gases in optical lattices
10 November 2015
Unconventional strongly correlated phases of the repulsive Fermi-Hubbard model, which could be emulated by ultracold vapors loaded in optical lattices, are investigated by means of energy minimizations with quantum number projection before variation and without any assumed order parameter. In a (...)

### [in2p3-01158954] Microscopic evaluation of the hypernuclear chart with $\Lambda$ hyperons
28 October 2015
We calculate the comprehensive hypernuclear chart for even-even hypernuclei with magic numbers of $\Lambda$'s (for Z $\leq$ 120 and $\Lambda \leq$ 70) and estimate the number of bound systems, considering the present uncertainties in the $\Lambda$-nucleon and $\Lambda$-$\Lambda$ interactions. We (...)

### [in2p3-01188824] Cluster correlation effects in $^{12}C$+$^{12}C$ and $^{14}N$+$^{10}B$ fusion-evaporation reactions
1 September 2015
The decay of highly excited states of 24Mg is studied in fusion evaporation events completely detected in charge in the reactions 12C+12C and 14N+10B at 95 and 80 MeV incident energy respectively. The comparison of light charged particles measured spectra with statistical model predictions (...)

### [in2p3-01102611] Hyperons in neutron star matter within relativistic mean-field models
22 July 2015
Since the discovery of neutron stars with masses around 2 solar masses the composition of matter in the central part of these massive stars has been intensively discussed. Within this paper we will (re)investigate the question of the appearance of hyperons. To that end we will perform an (...)

### [in2p3-01171365] From Light to Heavy Nuclear Systems, Production and Decay of Fragments Studied with Powerful Arrays
4 July 2015
Reactions between heavy-ions at various energy regimes produce many nuclear fragments which can be populated in highly excited states. The study of these fragments, detected at the end of their particle decay, is important to investigate nuclear forces and structure effects. In recent years (...)

### [in2p3-01133570] Thermal properties of light nuclei from $^{12}$C+$^{12}$C fusion-evaporation reactions
2 July 2015
The $^{12}$C+$^{12}$C reaction at 95 MeV has been studied through the complete charge identification of its products by means of the GARFIELD+RCo experimental set-up at INFN Laboratori Nazionali di Legnaro (LNL). In this paper, the first of a series of two, a comparison to a dedicated (...)

### [in2p3-01089721] Equations of state and phase transitions in stellar matter
1 July 2015
Realistic description of core-collapsing supernovae evolution and structure of proto-neutron stars chiefly depends on microphysics input in terms of equations of state, chemical composition and weak interaction rates. At sub-saturation densities the main uncertainty comes from the symmetry (...)
### [in2p3-00839915] Equivalence between fractional exclusion statistics and Fermi liquid theory in interacting particle systems
1 July 2015
We explore the connections between the description of interacting particles systems in terms of fractional exclusion statistics (FES) and other many-body methods used for the same purpose. We consider a system of particles with generic, particle-particle interaction in the quasi-classical limit (...)

### [in2p3-00805497] Exotic spin, charge and pairing correlations of the two-dimensional doped Hubbard model: a symmetry entangled mean-field approach
1 July 2015
Intertwining of spin, charge and pairing correlations in the repulsive two-dimensional Hubbard model is shown through unrestricted variational calculations, with projected wavefunctions free of symmetry breaking. A crossover from incommensurate antiferromagnetism to stripe order naturally (...)

### [in2p3-00674988] Ensemble inequivalence in supernova matter within a simple model
1 July 2015
A simple, exactly solvable statistical model is presented for the description of baryonic matter in the thermodynamic conditions associated to the evolution of core-collapsing supernova. It is shown that the model presents a first-order phase transition in the grand-canonical ensemble which is (...)

### [in2p3-00606217] Reaction mechanisms and staggering in S+Ni collisions
1 July 2015
The reactions 32S + 58Ni and 32S + 64Ni are studied at 14.5 A MeV. After a selection of the collision mechanism, we show that important even-odd effects are present in the isotopic fragment distributions when the excitation energy is small. Close to the multifragmentation threshold this (...)

### [in2p3-00492917] Phase diagram of the charged lattice-gas model with two types of particles
1 July 2015
A lattice-gas model with two types of particles, a particle-dependent short-range coupling and a long-range repulsive Coulombic interaction, is introduced. The phase diagram of an isolated finite system of 129 particles is constructed using the bimodality properties of the observables' (...)

### [in2p3-00857483] Probing the statistical decay and alpha-clustering effects in 12C+12C and 14N+10B reactions
1 July 2015
An experimental campaign has been undertaken at INFN Laboratori Nazionali di Legnaro, Italy, in order to progress in our understanding of the statistical properties of light nuclei at excitation energies above particle emission threshold, by measuring exclusive data from fusion-evaporation (...)

### [in2p3-00730949] Fragmentation and clustering in star matter
1 July 2015
The specificity of the crust-core phase transition in neutron star at zero and finite temperature will be discussed. It will be shown that, as a consequence of the presence of long range Coulomb interactions, the equivalence of statistical ensembles is violated and a clusterised phase is (...)

### [in2p3-00730980] Statistical (?) decay of light hot nuclei
1 July 2015
The reaction 12C+12C at 95 MeV beam energy has been measured using the GARFIELD+RCo apparatuses at Laboratori Nazionali di Legnaro LNL - INFN, Italy, in the framework of an experimental campaign proposed by the NUCL-EX collaboration. The aim is to progress in the understanding of statistical (...)
### [in2p3-00730939] An interpretation of staggering effects by correlation observables
1 July 2015
The reactions 32S+58,64Ni are studied at 14.5 A MeV. Evidence is found for odd-even effects in isotopic observables of the decay of a projectile-like source. The influence of secondary decays on the staggering is studied with a correlation function technique, showing that odd-even effects are (...)

### [in2p3-00805495] A Constrained-Path Quantum Monte-Carlo Approach for the Nuclear Shell Model
1 July 2015
A new Quantum Monte-Carlo (QMC) approach is proposed to investigate low-lying states of nuclei within the shell model. The formalism relies on a variational symmetry-restored wave-function to guide the underlying Brownian motion. Sign/phase problems that usually plague QMC fermionic simulations (...)

### [in2p3-00782412] Neutron-rich nuclei and the equation of state of stellar matter
1 July 2015
In this contribution we will review our present understanding of the matter equation of state in the density and temperature conditions where it can be described by nucleonic degrees of freedom. At zero temperature, all the information is contained in the nuclear energy functional in its (...)

### [in2p3-00848371] A phase-free quantum Monte Carlo method for the nuclear shell model
1 July 2015
The shell model provides a powerful framework for nuclear structure calculations. The nucleons beyond an inert magic core are confined in a valence shell and interact through an effective two-body potential generally determined from the G-matrix method. However, the applicability of the shell (...)

### [in2p3-00691845] Towards an understanding of staggering effects in dissipative binary collisions
1 July 2015
The reactions 32S+58,64Ni are studied at 14.5 A MeV. Evidence is found for important odd-even effects in isotopic observables of selected peripheral collisions corresponding to the decay of a projectile-like source. The influence of secondary decays on the staggering is studied with a (...)

### [in2p3-01073837] Strongly correlated electron systems
1 July 2015
The quantum phase diagram of the two-dimensional Hubbard model is investigated through the mixing of unrestricted Hartree-Fock and BCS wave-functions with symmetry restoration before variation. The spin, charge, and superconducting orders entailed in such correlated states will be discussed as (...)

### [in2p3-01023922] Sub-saturation matter in compact stars: Nuclear modelling in the framework of the extended Thomas-Fermi theory
1 July 2015
A recently introduced analytical model for the nuclear density profile [1] is implemented in the Extended Thomas-Fermi (ETF) energy density functional. This allows to (i) shed a new light on the issue of the sign of surface symmetry energy in nuclear mass formulas, as well as to (ii) show the (...)

### [in2p3-01163166] Strangeness driven phase transitions in compressed baryonic matter and their relevance for neutron stars and core collapsing supernovae
1 July 2015
We discuss the thermodynamics of compressed baryonic matter with strangeness within non-relativistic mean-field models with effective interactions.
The phase diagram of the full baryonic octet under strangeness equilibrium is built and discussed in connection with its relevance for (...)

### [in2p3-00824193] Densities and energies of nuclei in dilute matter at zero temperature
1 July 2015
We explore the ground-state properties of nuclear clusters embedded in a gas of nucleons with the help of Skyrme-Hartree-Fock microscopic calculations. Two alternative representations of clusters are introduced, namely coordinate-space and energy-space clusters. We parameterize their density (...)

### [in2p3-00539882] Statistical description of complex nuclear phases in supernovae and proto-neutron stars
1 July 2015
We develop a phenomenological statistical model for dilute star matter at finite temperature, in which free nucleons are treated within a mean-field approximation and nuclei are considered to form a loosely interacting cluster gas. Its domain of applicability, that is baryonic densities ranging (...)

### [in2p3-00789040] Some aspects of the phase diagram of nuclear matter relevant to compact stars
1 July 2015
Dense matter as it can be found in core-collapse supernovae and neutron stars is expected to exhibit different phase transitions which impact the matter composition and the equation of state, with important consequences on the dynamics of core-collapse supernova explosion and on the structure (...)

### [in2p3-00785506] Phase transition towards strange matter
1 July 2015
The phase diagram of a system constituted of neutrons and $\Lambda$-hyperons in thermal equilibrium is evaluated in the mean-field approximation. It is shown that this simple system exhibits a complex phase diagram with first and second order phase transitions. Due to the generic presence of (...)

### [in2p3-00762455] Phase diagram of neutron-rich nuclear matter and its impact on astrophysics
1 July 2015
Dense matter as it can be found in core-collapse supernovae and neutron stars is expected to exhibit different phase transitions which impact the matter composition and equation of state, with important consequences on the dynamics of core-collapse supernova explosion and on the structure of (...)

### [hal-00461755] Advancement in the understanding of multifragmentation and phase transition for hot nuclei
1 July 2015
Recent advancement on the knowledge of multifragmentation and phase transition for hot nuclei is reported. It concerns i) the influence of radial collective energy on fragment partitions and the derivation of general properties of partitions in presence of such a collective energy, ii) a better (...)

### [in2p3-01109338] Exact ground state of strongly correlated electron systems from symmetry-entangled wave-functions
1 July 2015
The four-site Hubbard model is considered from the exact diagonalisation and variational method points of view. It is shown that the exact ground-state can be recovered by a symmetry projected Slater determinant, irrespective of the interaction strength. This is in contrast to the Gutzwiller (...)

### [in2p3-00785502] alpha-clustering effects in dissipative 12C+12C reactions at 95 MeV
1 July 2015
Dissipative 12C+12C reactions at 95 MeV are fully detected in charge with the GARFIELD and RCo apparatuses at LNL.
A comparison to a dedicated Hauser-Feshbach calculation allows to select events which correspond, to a large extent, to the statistical evaporation of highly excited 24Mg, as well (...)

### [in2p3-00825301] Transformation between statistical ensembles in the modelling of nuclear fragmentation
1 July 2015
We explore the conditions under which the particle number conservation constraint deforms the predictions of fragmentation observables as calculated in the grand canonical ensemble. We derive an analytical formula allowing to extract canonical results from a grand canonical calculation and vice (...)

### [hal-00628137] Boundary conditions for star matter and other periodic fermionic systems
1 July 2015
Bulk fermionic matter, as it can be notably found in supernova matter and neutron stars, is subject to correlations of infinite range due to the antisymmetrisation of the N-body wave function, which cannot be explicitly accounted for in a practical simulation. This problem is usually addressed (...)

### [in2p3-00623348] EOS and phase transition: Nuclei to stars
1 July 2015
In these lectures we review the present status of knowledge of the nuclear thermal as well as quantum phase transitions. Examples in nuclear physics concern in particular shape transitions, vanishing of pairing correlations at high excitation, nuclear multifragmentation as well as deconfinement (...)

### [in2p3-00730983] Staggering in S+Ni collisions
1 July 2015
Odd-even effects in fragment production have been studied since a long time and never quantitatively understood. The odd-even anomaly was reported in the literature [1,2] to be more pronounced in reactions involving Ni projectile and targets, in particular in n-poor systems. In some experiments (...)

### [in2p3-00454242] Probing the nuclear equation of state in heavy-ion collisions at Fermi energy in isospin-sensitive exclusive experiments
1 July 2015
In order to guide the study of the form of the density dependence of the symmetry energy in the nuclear equation of state, robust observables are searched within the Stochastic Mean Field model coupled to a secondary decay treatment. We propose a few selected experimental approaches to show (...)

### [hal-00974520] Non-statistical decay and $\alpha$-correlations in the $^{12}$C+$^{12}$C fusion-evaporation reaction at 95 MeV
1 July 2015
Multiple alpha coincidence and correlations are studied in the reaction $^{12}$C+$^{12}$C at 95 MeV for fusion-evaporation events completely detected in charge. Two specific channels with Carbon and Oxygen residues in coincidence with $\alpha$-particles are addressed, which are associated with (...)

### [in2p3-00857485] Clusterized nuclear matter in the (proto-)neutron star crust and the symmetry energy
1 July 2015
Though generally agreed that the symmetry energy plays a dramatic role in determining the structure of neutron stars and the evolution of core-collapsing supernovae, little is known in what concerns its value away from normal nuclear matter density and, even more important, the correct (...)

### [in2p3-00978296] In-medium nuclear cluster energies within the Extended Thomas-Fermi approach
1 July 2015
A recently introduced analytical model for the nuclear density profile [1] is implemented in the Extended Thomas-Fermi (ETF) energy density functional.
This allows to (i) shed a new light on the issue of the sign of surface symmetry energy in nuclear mass formulas, which is strongly related to (...)

### [in2p3-00776595] Strangeness-driven phase transition in star matter
1 July 2015
The phase diagram of a system constituted of neutrons, protons, $\Lambda$-hyperons and electrons is evaluated in the mean-field approximation in the complete three-dimensional space given by the baryon, lepton and strange charge. It is shown that the phase diagram at sub-saturation densities is (...)

### [in2p3-00512379] Monopole oscillations in light nuclei with a molecular dynamics approach
1 July 2015
Collective monopole vibrations are studied in the framework of antisymmetrized version of molecular dynamics as a function of the vibration amplitude. The giant monopole resonance energy in $^{40}$Ca is sensitive to the incompressibility of the effective interaction, in good agreement with (...)
# Image on opposite \part page I'm trying to place a picture on the left side of a \part section, basically what has been done here Memoir: Picture opposite part page, but with a book document class. I'm also using the epigraph package to change my \part command, which works fine. But there is a BIG problem. When I insert the picture on the left side (even page), the epigraph shows on the picture page and on the part page, it shows twice! I want it to appear only on the part page. Here is the code I'm using: \documentclass{book} \usepackage[brazil]{babel} \usepackage[utf8]{inputenc} \usepackage{amssymb, amsmath, amstext, array} \usepackage{gensymb} \usepackage{enumitem} \usepackage[table]{xcolor} \usepackage{graphicx} \graphicspath{ {Imagens/} } \usepackage[section]{placeins} \usepackage[default,scale=0.75]{opensans} \usepackage[T1]{fontenc} \usepackage{titlesec} \usepackage{anyfontsize} \usepackage{lipsum} \newcommand*\cleartoleftpage{% \clearpage \ifodd\value{page}\hbox{}\newpage\fi } \usepackage{epigraph} \titleformat{\part}[display] {\filleft\fontsize{40}{40}\selectfont\scshape} {\fontsize{90}{90}\selectfont\thepart} {20pt} {\thispagestyle{epigraph}} \setlength\epigraphwidth{.6\textwidth} \usepackage{xpatch} \makeatletter {\let\@evenfoot} {\let\@oddfoot\@empty\let\@evenfoot} {}{} \makeatother \usepackage{afterpage} \newcommand\blankpage{% \null \thispagestyle{empty}% \newpage} \begin{document} \chapter{One} \lipsum \cleardoublepage \cleartoleftpage \includegraphics[scale=1]{example-image-a} \part{Part One} \chapter{Two} \lipsum \end{document} I've already tried afterpage and a lot of other solutions on the web, but none work. Could you please give me a hand? Thanks! • Welcome! Note that if you use example-image-a, say, then other people will be able to compile your code (which they can't right now). – cfr Apr 27 '17 at 1:48 • @M. Zoubeer Just curious, why use book when memoir, which you mention, gives the same results as book but with extensions and more flexibility? – Peter Wilson Apr 27 '17 at 17:24 • @Peter Wilson because I didn't know memoir existed until very late in my writing. I fear converting now could take some time (although I could be wrong). – M. Zuoubeer Apr 27 '17 at 21:38 • Do you know of any way to get the same results when using scrbook/KOMA-script? – Patty-B Oct 9 '20 at 14:19 Reducing this to a more minimal example, I tried following the package epigraph's instructions on page 6. However, exactly the same problem occurs, even without titlesec and the custom definitions of page skipping and the fiddling with the page counter and so on. \documentclass{book} \usepackage{lipsum} \usepackage{nextpage,epigraph,graphicx} \makeatletter % manual 6 \let\@epipart\@endpart \renewcommand{\@endpart}{\thispagestyle{epigraph}\@epipart} \makeatother \setlength\epigraphwidth{.6\textwidth} \begin{document} \chapter{One} \lipsum \part{Part One} \chapter{Two} \lipsum \end{document} Clearly, the page style is being applied not once, but twice. It does work if we precede the \epigraphhead with \cleartooddpage[\thispagestyle{empty}] but the manual doesn't mention a need to do this. Nonetheless, this solution can be adapted to place an image on the preceding even page. That is, the following adaption of the manual's example does work. 
\documentclass{book} \usepackage{lipsum} \usepackage{nextpage,epigraph,graphicx} \makeatletter % manual 6 \let\@epipart\@endpart \renewcommand{\@endpart}{\thispagestyle{epigraph}\@epipart} \makeatother \setlength\epigraphwidth{.6\textwidth} \begin{document} \chapter{One} \lipsum \cleartoevenpage{\thispagestyle{empty}} \cleartooddpage[\thispagestyle{empty}\includegraphics{example-image-a}] [Personal motto: nothing is so difficult as with titlesec.]
## Differential and Integral Equations ### Scattering and blowup problems for a class of nonlinear Schrödinger equations #### Abstract We study the scattering and blowup problem for a class of nonlinear Schrödinger equations with general nonlinearities in the spirit of Kenig and Merle [17]. Our conditions on the nonlinearities allow us to treat a wider class of those than ever treated by several authors, so that we can prove the existence of a ground state (a standing-wave solution of minimal action) for any frequency $\omega > 0$. Once we get a ground state, a so-called potential-well scenario works well: for the nonlinear dynamics determined by the nonlinear Schrödinger equations, we define two invariant regions $A_{\omega, +}$ and $A_{\omega,-}$ for each $\omega > 0$ in $H^1(\mathbb{R}^d)$ such that any solution starting from $A_{\omega,+}$ behaves asymptotically free as $t\to\pm\infty$, one from $A_{\omega, -}$ blows up or grows up, and the ground state belongs to $\overline{A_{\omega, +}}\bigcap \overline{A_{\omega,-}}$. Our weaker assumptions as to the nonlinearities demand that we argue in a subtle way in proving the crucial properties of the solutions in the invariant regions. #### Article information Source Differential Integral Equations, Volume 25, Number 11/12 (2012), 1075-1118. Dates First available in Project Euclid: 20 December 2012
Finding Time with 3 unknowns 1. Sep 20, 2009 Rubix 3rd week in AP physics... can't figure this out :'( 1. The problem statement, all variables and given/known data delta X = 20m delta Y = -1.5m angle = 10 degrees basically, a car goes off a 10 degree ramp and lands 20 meters away and the delta Y is 1.5m... my assumption is that I need to find T first. 2. Relevant equations we are given 4 equations: delta X = VxoT Vy = Vyo + AyT delta Y = VyoT + (1/2)AyT^2 Vy^2 = Vyo^2 + 2Ay(delta Ymax) 3. The attempt at a solution i have no attempt. 2. Sep 20, 2009 rock.freak667 y=y0+vxt-1/2gt2 you want to find t when y=0 (y0=1.5) 3. Sep 20, 2009 Rubix I don't know what Vx is. 4. Sep 20, 2009 rock.freak667 the angle θ=10 so vx=vcosθ and vy=vsinθ Also I made a typo,the equation should be $$y=y_0 +v_y t -\frac{1}{2}gt^2$$ (not vx in it) 5. Sep 20, 2009 Anden A hint: You must plug one equation into another on this one, this will result in the removal of Vo from an equation and leaving only time. 6. Sep 20, 2009 Rubix I still need help, I know i need to plug in equations into each other but i'm not sure which ones 7. Sep 20, 2009 Rubix figured it out, the key was t = (20.33/v) then i plugged that into this eqn: delta X = VxoT
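For completeness, here is a short numerical sketch of the approach hinted at in the thread: substitute t = Δx/(v cosθ) into the vertical displacement equation and solve for v, then recover t. The numbers follow the problem statement (Δx = 20 m, Δy = -1.5 m, θ = 10°), with g = 9.81 m/s² assumed; the variable names are ours.

```python
import math

dx, dy = 20.0, -1.5          # horizontal and vertical displacement (m)
theta = math.radians(10.0)   # launch angle of the ramp
g = 9.81                     # gravitational acceleration (m/s^2), assumed

# dy = dx*tan(theta) - g*dx**2 / (2*v**2*cos(theta)**2)  ->  solve for v
v = math.sqrt(g * dx**2 / (2 * math.cos(theta)**2 * (dx * math.tan(theta) - dy)))
t = dx / (v * math.cos(theta))   # time of flight from delta_x = v*cos(theta)*t

print(f"launch speed v = {v:.1f} m/s, flight time t = {t:.2f} s")
# roughly v = 20 m/s and t = 1.0 s, consistent with t = 20.3/v from the thread
```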
# Confused on a basic convention

I'm really new to any circuitry and electrical stuff... just got started with Arduino this week. I'm reading a datasheet for a temp sensor and can't understand all of the variables. http://www.analog.com/media/en/technical-documentation/data-sheets/TMP35_36_37.pdf

This sheet on page three in the test conditions/comments section repeatedly refers to $T_A$. What is that, temperature average or something? I've scoured around but it's a pretty specific thing to search for. Does anyone know the best way to learn about these types of variables/abbreviations, or is it just expected that you're going to figure it out and know what this stuff means? I know datasheets are written for engineers by engineers but that makes a high bar for entry for those of us without the necessary tribal knowledge. There are other things in there too -- $I_l$, $V_s$, etc. While I understand the primary variable, the subscript throws me off. I imagine there is a set of basic ones that I just need to learn...

• Without looking at the datasheet I would say it is an ambient temperature... Update: After looking at it I won't change my mind. – Eugene Sh. Oct 11 '17 at 16:21
• @Eugene Sh. Would you say then that the first entry in that table is saying that at 25 deg C ambient temperature you will see typical variance of +/- 1 deg C? Seems reasonable. Also, do you just "know" that from experience or is there a way for me to learn this stuff better? – dudewad Oct 11 '17 at 16:24
• Yup, as @EugeneSh. states it's ambient temperature... or more precisely the temperature of the air surrounding and contacting the device package. – Trevor_G Oct 11 '17 at 16:24
• $T_A$ Ambient temperature (temperature of the environment). $T_C$ Temperature of case. $T_J$ Temp of semiconductor junction. You might benefit from a search for "Thermal management (electronics)". – glen_geek Oct 11 '17 at 16:26
• @Trevor +/-2 C is indeed a ton, but it's the crappy little temp sensor that came with the Arduino starter kit... I'm just using this as a foray into understanding the world of electronics. When I feel like I'm able to build circuitry without frying my components I'll start buying actual, useful sensors :P – dudewad Oct 11 '17 at 17:01
Probability of two heads given the probability of a head on a Saturday The probability that a fair-coin lands on either Heads or Tails on a certain day of the week is $1/14$. Example: (H, Monday), (H, Tuesday) $...$ (T, Monday), (T, Tuesday) $...$ Thus, $(1/2 \cdot 1/7) = 1/14$. There are $14$ such outcomes. In some arbitrary week, Tom flips two fair-coins. You don't know if they were flipped on the same day, or on different days. After this arbitrary week, Tom tells you that at least one of the flips was a Heads which he flipped on Saturday. Determine the probability that Tom flipped two heads in that week. I know that this is a conditional probability problem. The probability of getting two heads is $(1/2)^2 = 1/4$. Call this event $P$. I am trying to figure out the probability of Tom flipping at least one head on a Saturday. To get this probability, I know that we must compute the probability of there being no (H, Saturday) which is $1 - 1/14 = 13/14$. But then to get this "at least", we need to do $1 - 13/14$ which gives us $1/14$ again. Call this event $Q$. So is the probability of event $Q = 1/14$? It doesn't sound right to me. Afterwards we must do $Pr(P | Q) = \frac{P(P \cap Q)}{Pr(Q)}$. Now I'm not quite sure what $P \cap Q$ means in this context. Intuitively, I would think the result would be greater than $\frac{1}{2}$ because of that slight chance we get $2$ heads on Saturday. Let $P$ denote the event that we flip $2$ heads that week. Let $Q$ denote the event that we flip at least one head on Saturday. I find it easier to flip $P(P\mid Q)$ into $P(Q\mid P)$ We have \begin{align*} P(P\mid Q) &=\frac{P(P\cap Q)}{P(Q)}\\\\ &=\frac{P(Q\mid P)\cdot P(P)}{P(Q)}\\\\ &=\frac{\left({2 \choose 2}\left(\frac{1}{7}\right)^2+{2 \choose 1}\left(\frac{1}{7}\right)\left(\frac{6}{7}\right)\right)\left(\frac{1}{2}\right)^2}{{2 \choose 2}\frac{1}{14}^2+{2 \choose 1}\left(\frac{1}{14}\right)\left(\frac{13}{14}\right)}\\\\ &=\frac{13}{27} \end{align*} where $P(Q\mid P)$ can be thought of as we're given that we got two heads but what are the chances that at least one was from Saturday with probability $\frac{1}{7}$ for an individual coin. Note: My answer contradicts my intuition! This serves as further proof that intuition can lead you astray in probability. To see why my intuition was incorrect, see @jgon's answer. Remy has already given the correct answer, but is not confident because of a missing intuition, and I already more or less answered the question in comments on NewGuy's answer, so I'll just write it up and try to give an intuition for it. The sample space for a single coin flip is $\Omega=\newcommand{\set}[1]{\left\{#1\right\}}\set{H,T}\times \set{M,Tu,W,Th,F,Sa,S}$, and it has the uniform distribution, with each pair equally likely. We can think of this as flipping a fair coin and rolling a fair 7 sided die labeled with the days of the week together (a d7). The sample space then for two coin flips is $\Omega \times \Omega$, which again is the same as flipping 2 fair coins and rolling 2 d7s. If $M$ is the event that both coins are heads, and $N$ is the event that at least one of the coins was flipped on Saturday and was heads. Then $M\cap N$ is the event that both coins were heads and at least one was flipped on Saturday. Now we're interested in $$P(M|N) = \frac{P(M\cap N)}{P(N)}=\frac{|M\cap N|}{|N|},$$ so we just need to compute the sizes of $N$ and $M\cap N$. Let's start with $M\cap N$. Since we know both coins came up heads, we just need to work with the days of the week. 
The number of ways that at least one of the days of the week can be Saturday is $1+6+6=13$, corresponding to the possibilities $(Sa,Sa)$ or $(Sa,\text{not }Sa)$ or $(\text{not }Sa,Sa)$. Now we can do a similar thing for $N$. We get $|N|=1+13+13=27$, corresponding to the possibilities $(HSa,HSa)$ or $(HSa,\text{not }HSa)$ or $(\text{not }HSa,HSa)$. Intuition: Why does knowing that one of the coins was a head flipped on a Saturday reduce the probability that the other coin was also a head ($13/27$), compared to, say, having a bronze and a silver coin and knowing that the bronze coin was a head flipped on a Saturday (in which case the probability that the other coin was also a head is $1/2$)? The issue is essentially this: for each state $HH(day_1)(day_2)$ in $M\cap N$, if only one of those days is Saturday, say $day_1=Sa$, we get two states in $N$: $HHSa(day_2)$ and $HTSa(day_2)$; but if both days are $Sa$, we get three states in $N$: $HHSaSa$, $HTSaSa$ and $THSaSa$. I.e., in the case when both days are Saturday, we get an extra way to fail to be both heads. Or, viewed the other way, the fact that the Saturday flips are interchangeable when they both come up heads means that while $HTSaSa$ and $THSaSa$ are different, they only have one success case associated to them, namely $HHSaSa$. • Ah okay, that makes sense now! – Remy Mar 20 '18 at 0:25 • Why did you choose $N$ to be the event that exactly one flip was $HSa$ and not at least one $HSa$ like @Remy did it? If you did do that, what would $M \cap N$ mean? – udpcon Mar 20 '18 at 3:55 • Sorry, that was a typo, give me a moment – jgon Mar 20 '18 at 3:56 • $N$ should be the event that at least one was $HSa$ because that is the information we're given. – jgon Mar 20 '18 at 4:00 • Okay, I believe it clicked now. Thank you for taking the time to write this! – udpcon Mar 20 '18 at 4:17 CORRECTED FOR REASONS GIVEN BY JGON Sample space for a single throw = {MH, MT, TuH, TuT, ..., SH, ST} = 14. Sample space for two throws = $14*14 = 196$. M: 2 heads are thrown = $7*7= 49$. N: at least one head is thrown on Saturday = {(SaH,?), (?,SaH)} = $2*14$, but we have counted {(SaH,SaH)} twice, so one has to be subtracted: $2*14-1=27$. To find P(M|N) = $\frac{P(M\cap N)}{P(N)}$ = $\frac{n(M\cap N)}{n(N)}$: $M\cap N$ = one head thrown on Saturday and the other head on any day = {(SaH,?H), (?H,SaH)} = $2*7$, but double counting also takes place here: $2*7-1=13$. So P(M|N) = $\frac{13}{27}$. • You're not quite right. There are $2\cdot 14 - 1 =27$ ways to get $N$: the first coin could be $SaH$ or the second could be $SaH$ – jgon Mar 19 '18 at 23:53 • Similarly $M\cap N$ is also not quite right. It should be $2\cdot 7 -1 = 13$. – jgon Mar 19 '18 at 23:54
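Not part of the original thread: a quick Monte Carlo check of the $13/27$ answer. The sketch below (in Python, with the events named `M` and `N` as in the answers above) simulates two independent (coin, day) pairs per week and conditions on at least one of them being a head flipped on Saturday.

```python
import random

def trial():
    """One week: two independent fair flips, each tagged with a uniform random day (0..6; 6 = Saturday)."""
    flips = [(random.choice("HT"), random.randrange(7)) for _ in range(2)]
    N = any(c == "H" and d == 6 for c, d in flips)  # at least one head flipped on Saturday
    M = all(c == "H" for c, _ in flips)             # both flips are heads
    return M, N

def estimate(n=1_000_000):
    hit_N = hit_MN = 0
    for _ in range(n):
        M, N = trial()
        if N:
            hit_N += 1
            if M:
                hit_MN += 1
    return hit_MN / hit_N

print(estimate())  # ~0.4815, matching 13/27 = 0.481...
```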
ROUGHNESS Feel of a surface marked by irregularities, protuberances, or ridges. Subjective characteristic employed as an element of a continuum to define the percepts created by amplitude-modulated noises. Slow, regular amplitude variations that are perceived as changes in loudness are identified as beats. Larger fluctuation rates, above around 15 Hz, are identified as flutter, while those above around 40 Hz are identified as being rough. ROUGHNESS: "The grade of sandpaper denotes the roughness." Cite this page: N., Pam M.S., "ROUGHNESS," in PsychologyDictionary.org, April 28, 2013, https://psychologydictionary.org/roughness/ (accessed July 27, 2021).
# Function f(x) satisfies f(x) = f(x^2)

**Intern** — 03 Oct 2008, 23:11

Function $$f(x)$$ satisfies $$f(x) = f(x^2)$$ for all $$x$$. Which of the following must be true?

* $$f(4) = f(2)f(2)$$
* $$f(16) - f(-2) = 0$$
* $$f(-2) + f(4) = 0$$
* $$f(3) = 3f(3)$$
* $$f(0) = 0$$

**Manager** — 04 Oct 2008, 05:38

$$f(16) - f(-2) = 0$$: the given relation leads to $$f(16) - f(16)$$.

**Manager** — 04 Oct 2008, 05:54

Agree with (B). Since $$f(x) = f(x^2)$$, we have $$f(-2) = f(4) = f(16)$$.

**Intern** — 04 Oct 2008, 06:51

And the logic behind solving the question...

**Manager (Vienna, Austria)** — 07 Oct 2008, 05:27

I think this is a cryptic one -- can someone please analyse each individual answer choice? (I know it's a bit of work, but it would really help those who are not 100% understanding the concept of functions.) -- dom

**Manager** — 07 Oct 2008, 13:58

domleon wrote: "I think this is a cryptic one -- can someone please analyse each individual answer choice?"

Don't be fooled by big words like "functions" etc.; there is not much you really need to know here. In a function such as f(x) = 2x, when you enter some value (let's assume x = 2) you get a value out (in this case 4), so if you just ignore the notation f(x) you can write 2x = ? for x = 2. A nice thing about functions is that you can take the outcome and feed it back into the original function to get another (third) value: with f(x) = 2x, f(f(x)) = 2(2x) = 4x, and so on.

Here we are given f(x) = f(x^2), so f(2) = f(2^2) = f(4), f(3) = f(3^2) = f(9), and so on. We can't tell the value of f(2); we only know that it equals f(4).

Given f(x) = f(x^2):

First line: f(4) = f(2)*f(2)? From the given relation, f(-2) = f(4) and f(4) = f(16), and we have seen that f(2) = f(4), but we cannot say that f(4) = f(2)*f(2). This may happen to be true, but it does not follow from f(x) = f(x^2).

Second line: f(16) - f(-2) = 0? We can say that f(16) = f(-2) from the given data, since f(-2) = f(4) and f(4) = f(16). So this must be true!

Third line: f(-2) + f(4) = 0? We can't say whether this is true, since we don't know the value of f(-2); we can only tell that f(-2) = f(4), which is of no use here.
Fourth line: f(3) = 3f(3)? Since f(3) = f(9), I don't see how we can say this is true without knowing the value of f(3).

Fifth line: f(0) = 0? We only know that f(0) = f(0^2) = f(0); we know nothing about its actual value.

Hope this helps.

**Manager (Vienna, Austria)** — 09 Oct 2008, 01:22

greenberg -- many thanks for the good explanation!
## Duke Mathematical Journal

### Coherent sheaves and categorical $\mathfrak{sl}_2$ actions

#### Abstract

We introduce the concept of a geometric categorical $\mathfrak{sl}_2$ action and relate it to that of a strong categorical $\mathfrak{sl}_2$ action. The latter is a special kind of $2$-representation in the sense of Lauda and Rouquier. The main result is that a geometric categorical $\mathfrak{sl}_2$ action induces a strong categorical $\mathfrak{sl}_2$ action. This allows one to apply the theory of strong $\mathfrak{sl}_2$ actions to various geometric situations. Our main example is the construction of a geometric categorical $\mathfrak{sl}_2$ action on the derived category of coherent sheaves on cotangent bundles of Grassmannians.

#### Article information

Source: Duke Math. J. Volume 154, Number 1 (2010), 135-179.
Dates: First available in Project Euclid: 14 July 2010.
Permanent link to this document: http://projecteuclid.org/euclid.dmj/1279140507
Digital Object Identifier: doi:10.1215/00127094-2010-035
Mathematical Reviews number (MathSciNet): MR2668555
Zentralblatt MATH identifier: 1228.14011

#### Citation

Cautis, Sabin; Kamnitzer, Joel; Licata, Anthony. Coherent sheaves and categorical $\mathfrak{sl}_2$ actions. Duke Math. J. 154 (2010), no. 1, 135--179. doi:10.1215/00127094-2010-035. http://projecteuclid.org/euclid.dmj/1279140507.

#### References

• A. Beilinson, V. Ginzburg, and W. Soergel, Koszul duality patterns in representation theory, J. Amer. Math. Soc. 9 (1996), 473--527.
• J. Bernstein, I. Frenkel, and M. Khovanov, A categorification of the Temperley-Lieb algebra and Schur quotients of $U(\mathfrak{sl}_2)$ via projective and Zuckerman functors, Selecta Math. (N.S.) 5 (1999), 199--241.
• S. Cautis and J. Kamnitzer, Knot homology via derived categories of coherent sheaves, I: The $\mathfrak{sl}_2$-case, Duke Math. J. 142 (2008), 511--588.
• —, Knot homology via derived categories of coherent sheaves, II: The $\mathfrak{sl}_m$ case, Invent. Math. 174 (2008), 165--232.
• S. Cautis, J. Kamnitzer, and A. Licata, Categorical geometric skew Howe duality, Invent. Math. 180 (2010), 111--159.
• —, Derived equivalences for cotangent bundles of Grassmannians via categorical $\mathfrak{sl}_2$ actions, preprint.
• —, Coherent sheaves on quiver varieties and categorification, in preparation.
• J. Chuang and R. Rouquier, Derived equivalences for symmetric groups and $\mathfrak{sl}_2$-categorification, Ann. of Math. (2) 167 (2008), 245--298.
• D. Huybrechts and R. Thomas, $\mathbb{P}$-objects and autoequivalences of derived categories, Math. Res. Lett. 13 (2006), 87--98.
• Y. Kawamata, "Derived equivalence for stratified Mukai flop on $\mathbf{G}(2,4)$" in Mirror Symmetry, V, AMS/IP Stud. Adv. Math. 38, Amer. Math. Soc., Providence, 2006, 285--294.
• M. Khovanov and A. D. Lauda, A diagrammatic approach to categorification of quantum groups I, Represent. Theory 13 (2009), 309--347; II, preprint, math.QA/0804.2080v1 [math.QA]; III, Quantum Topology 1 (2010), 1--92.
• A. D. Lauda, A categorification of quantum $\mathfrak{sl}_2$, preprint.
• I. Mirković and M. Vybornov, Quiver varieties and Beilinson-Drinfeld Grassmannians of type A, preprint.
• Y. Namikawa, "Mukai flops and derived categories, II" in Algebraic Structures and Moduli Spaces, CRM Proc. Lecture Notes 38, Amer. Math. Soc., Providence, 2004, 149--175.
• B. C. Ngo, Faisceaux pervers, homomorphisme de changement de base et lemme fondamental de Jacquet et Ye, preprint.
• C. M. Ringel, Tame Algebras and Integral Quadratic Forms, Lecture Notes in Math. 1099, Springer, Berlin, 1984.
• R. Rouquier, 2-Kac-Moody algebras, preprint.
# Banked Curves 1. Homework Statement A circular highway curve with a radius of 200m is banked at an angle such that a car traveling 45km/h can just make it around the curve if the highway surface is frictionless. a. what is the angle between the highway surface and the horizontal? b. if a car travels at 40km/h around the curve, what is the centripetal force acting on the car? c. What is the minimum value of the coefficient of friction between the tires and the highway surface necessary to prevent the car in (b) from skidding? d. If the angle of the highway curve is as in (a), but the radius of the curve is increased to 300m, what is the speed a car must be going in order to negotiate a curve without skidding? 2. Homework Equations tan(theta) = v^2/gr 3. The Attempt at a Solution 45km/h would be 12.5m/s a. tan(theta) = (12.5m/s)^2 / 9.8*200 theta = 4.6degrees b. I know that F_n sin(theta) = mv^2/r but I'm confused how to solve this one. c. I'll leave this blank since I haven't solved (b) d. sqrt{tan(4.6)(9.8)(300)} = v = 15.4m/s correct? Last edited:
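Not part of the original thread: a small numerical check of parts (a), (b) and (d), assuming SI units throughout. Since the problem gives no mass, the centripetal force in (b) is reported per kilogram of car mass.

```python
import math

g = 9.8      # m/s^2
r = 200.0    # m

# (a) design speed 45 km/h, frictionless bank: tan(theta) = v^2 / (g r)
v_design = 45 / 3.6                        # 12.5 m/s
theta = math.atan(v_design**2 / (g * r))
print(math.degrees(theta))                 # ~4.6 degrees

# (b) at 40 km/h the centripetal force is m v^2 / r; per unit mass:
v_b = 40 / 3.6
print(v_b**2 / r)                          # ~0.62 N per kg of car mass

# (d) same bank angle, r = 300 m, frictionless: v = sqrt(g r tan(theta))
r_d = 300.0
print(math.sqrt(g * r_d * math.tan(theta)))  # ~15.3 m/s (15.4 m/s if theta is rounded to 4.6 degrees)
```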
# Reflection On A Plane Mirror

## What is Reflection of Light?

When light rays coming from an object strike a surface and bounce back, the returning rays are known as reflected rays, and this phenomenon is known as reflection of light.

## What are Laws of Reflection?

There are two laws of reflection:

• The angle of incidence is equal to the angle of reflection.
• The incident ray, the reflected ray, and the normal at the point of incidence all lie in the same plane.

## What is Reflection on a Plane Mirror?

When light rays strike a flat mirror, they are reflected back. According to the laws of reflection, the angle of reflection is equal to the angle of incidence. The image is formed behind the plane of the mirror. This process of obtaining a virtual, erect mirror image is known as reflection on a plane mirror.

## Characteristics of Image formed by Plane Mirror

Following are the characteristics of the image formed by a plane mirror:

• The image obtained in a plane mirror is always erect and virtual.
• The image and the object are equal in size.
• The image appears as far behind the mirror as the object is placed in front of it.
• The image is laterally inverted.

### Types of Reflection

Following are the three types of reflection of light:

• Mirror reflection
• Specular reflection
• Diffuse reflection

### Image Formed by the Plane Mirror

Consider the light rays 1, 2 and 3, shown by solid lines. The wavefronts, which are perpendicular to these light rays, are shown by the thin lines. The secondary wavefronts generated are the circular fronts described. At point a, a wavefront is generated due to the secondary source on ray 2. At the same time, other wavefronts are generated at points c and b. Since the wavefronts at points a and b are generated at the same time, ac = cb. Thus the triangle acb is isosceles and the angles θ1 = θ2. Note that θ1 is the angle of incidence and θ2 is the angle of reflection. Thus,

Angle of incidence = Angle of reflection

### Huygens Principle and Law of Refraction

Consider the light rays 1, 2 and 3, shown by solid lines, refracted to rays 1', 2' and 3' respectively. The wavefronts, which are perpendicular to these light rays, are shown by the thin lines. Consider the wavefronts to be one wavelength apart in their respective media, where the refractive indices satisfy n1 < n2. The incident angle is θ1 and the refracted angle is θ2. Consider the wavefront at c: the front is bent in the new medium because the speed of light there is slower. However, since the frequency of the waves is constant, the wavelengths change across media to accommodate the change in speed, i.e.

$\nu_1 = \nu_2 \quad\Rightarrow\quad \frac{v_1}{\lambda_1} = \frac{v_2}{\lambda_2}$

Also, from the figure, the side ac is common to the triangles abc and adc, with $\sin\theta_1 = \frac{bc}{ac}$ and $\sin\theta_2 = \frac{ad}{ac}$, so

$\frac{bc}{\sin\theta_1} = \frac{ad}{\sin\theta_2}$

But the line segments bc and ad represent the wavelengths in their respective media, so

$\frac{\lambda_1}{\sin\theta_1} = \frac{\lambda_2}{\sin\theta_2}$, and hence $\frac{v_1}{\sin\theta_1} = \frac{v_2}{\sin\theta_2}$,

which gives

$\frac{\sin\theta_1}{\sin\theta_2} = \frac{v_1}{v_2} = \frac{c/n_1}{c/n_2} = \frac{n_2}{n_1}$

where c is the speed of light in vacuum. This is Snell's law, so Huygens' principle can be used to prove the law of refraction. A similar exercise can be conducted for n1 > n2.
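As a quick numerical companion to the refraction result above (not part of the original article), the sketch below applies $n_1 \sin\theta_1 = n_2 \sin\theta_2$; the indices 1.0 and 1.5 used in the example are illustrative values for an air-to-glass interface.

```python
import math

def refraction_angle(theta1_deg, n1, n2):
    """Return the refraction angle in degrees from Snell's law, or None for total internal reflection."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1:
        return None  # total internal reflection (only possible when n1 > n2)
    return math.degrees(math.asin(s))

# Air (n1 ~ 1.0) into glass (n2 ~ 1.5): the ray bends toward the normal.
print(refraction_angle(30.0, 1.0, 1.5))  # ~19.5 degrees
```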
There is a strong correlation between builder sentiment, as measured each month by the NAHB/Wells Fargo Housing Market Index, and housing starts (see chart). So, when the Index fell from 46 to 44 this month, the hand-wringing started. But let's put the 2-point drop in perspective. Even at 44, the Index is about 20 points higher than it was a year ago. Two other important year-over-year metrics are also encouraging. Compared to a year ago, housing starts were up almost 30% in January and February, and housing permits were up almost 35%. That, I think, trumps a small decline in the Index. And take a closer look at the chart. You'll see that in the past, when the Index was at 44, the annual rate of single-family housing starts was usually around 900,000 units. Single-family starts are only running at an annual rate of about 600,000 currently, suggesting that housing has considerable long-term upside.
TENSILE TEST AND FEA CORRELATION OF ABS PLASTIC for tensile test of Plastics [14]. This means my input is matching output. 1 INTRODUCTION The ANSYS …Testing Services. Monsalve, Characterization of the mechanical behaviour of materials in the tensile test: experiments and simulation, Modelling and Simulation in Materials Science and Engineering, 12(4) (2004) 425–444. used tensile test to study the deformation behav-ior of steel within the mushy zone. com accurately simulate your specific material? The uniaxial tensile test is performed to collect data from actual steel samples, This simulation has been designed to support the teaching of Tensile Testing at A-Level. 1. Tensile test has been conducted in an UTM operated in displacement control mode. **Stress/Strain** **Young's modulus** --Department of Engineering-- University of Liverpool-- Read more changes or variation in mechanical-test results include several factors involving materials, namely, methodology, human factors, equipment, and ambient conditions. Tensile, Flex, Izod Impact, Multi-Axial impact How to Measure Tensile Strength Using a Tensile Testing Machine. So that I can get the values of stress triaxiality at failure strain. asee. 2 / 2010 45 OVERMOLDING INJECTION MOLDING SIMULATION OF TENSILE TEST SPECIMEN Catalin Fetecau1, Daniel Valentin Dobrea1 & Ion Postolache1 1 University "Dunarea de Jos" of Galati-Romania, Department of Machine manufacturing technology, Domneasca Anotace; The tensile test in transition metal disilicides with C11$_b$ structure is simulated by {\it ab initio} electronic structure calculations using full potential linearized augmented plane …– The tensile test is one of the fundamental experiments used to evaluate material properties. The test process is simulated on the computer by running ANSYS simulation program. The stress is calculated from the applied force, F, and the cross-sectional area of the test piece, A, as follows: The strain is calculated from the change in length in the test piece divided by the original length of the test piece, as follows: But problem is that I am simulating tensile test simulation to get the value of stress triaxiality for different of specimen under different failure mode. Strength 2. By doing this, tensile tests determine how strong a material is and how much it can elongate. I've carried out the experiment but I need to do a computer simulation to compare experimental and numerical results. This simulation has been designed to support the teaching of Tensile Testing at A-Level. Four different materials were tested, including 6061-T6 Aluminum Alloy, A-36 hot rolled steel, polymethylmethacrylate (PMMA, cast acrylic), and polycarbonate. Rubber Tensile Test Machine Rubber Tensile Testing Machine can do tensile test, tear test, peel test, bond test with different clamps for rubber, plastic, film, tape and other kinds of materials. In this tutorial, you will analyze this part using SimulationXpress in solidworks. how to provide strain rate in tensile test in abaqus explicit This post has NOT been accepted by the mailing list yet. The RADIOSS material laws 2, 27 and 36 are used to reproduce the experimental data of …Hardness Test Simulation Code: SIM-HT Use the secondary steelmaking simulation to take a ladle of molten steel and refine the composition to deliver to the caster within required time. Results for fem simulation of tie rod tensile test tmt 2014 A Masters Thesis Project in Cooperation between - DiVA. Distribution Simulation Testing. 4-0. 
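The fragment above states that stress is calculated from the applied force F and the cross-sectional area A of the test piece, and strain from the change in length divided by the original length, but the formulas themselves ("as follows:") were lost in extraction. As a minimal sketch (not taken from any of the quoted sources), the Python below applies those definitions to a load-extension table and estimates Young's modulus from the initial linear region; the specimen dimensions and readings are made-up illustrative numbers, not measured data.

```python
# Engineering stress/strain from a load-extension table (illustrative values only).
A0 = 20.0e-6   # original cross-sectional area, m^2 (20 mm^2)
L0 = 50.0e-3   # original gauge length, m (50 mm)

load_N      = [0.0, 1000.0, 2000.0, 3000.0, 4000.0]           # applied force F
extension_m = [0.0, 0.012e-3, 0.024e-3, 0.036e-3, 0.048e-3]   # change in length

stress = [F / A0 for F in load_N]          # engineering stress = F / A0, in Pa
strain = [dL / L0 for dL in extension_m]   # engineering strain = dL / L0, dimensionless

# Young's modulus from the slope of the initial (elastic) portion of the curve:
E = (stress[1] - stress[0]) / (strain[1] - strain[0])
print(f"E ~ {E/1e9:.0f} GPa")              # ~208 GPa for these made-up numbers
```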
Celentano, DJ, Cabezas, EE, García, CM & Monsalve, AE 2004, ' Characterization of the mechanical behaviour of materials in the tensile test: Experiments and simulation ', Modelling and Simulation in Materials Science and Engineering, vol. 3 Biaxial Flow Stress Determination – Dome Test Work is also in progress in developing an analysis to expand the application of the well- Simulation Simulation of the Tensile Test Th e fi rst validation case of the composite model is the simulation of the tensile test without any holes according to DIN EN ISO 527-4. Simulation: Bottles Learn about bottle production in this virtual manufacturing plant. Tensile Test Introduction At this stage you are able to insert the test piece into the grips of the tensile testing machine and can carry out a load strain experiment. As for the behaviour prior to fracture, there are many other factors to consider such as material properties, strain rate etc. This interactive animation schematically presents upsetting test, and the test results can be useful for the simulation of cold forging processes. Steven Wendel, Sinclair Community College. The purpose of the experiment was to investigate the properties of mild steel in detail by performing tensile tests on a specimen rod of constant cross section. We specialize in adapting standard torsion test and tension test specifications to simulate actual component use. Supported through NSF-DUE, this TUES Type 1 project is 1) developing an open source, virtual, online tensile testing laboratory simulation; 2) conducting research to compare the costs and learning outcomes for using on-site, hands-on tensile testing equipment versus an online simulation; 3) creating close industry ties through blended 1 Example 11 - Tensile Test Summary The material characterization of ductile aluminum alloy is studied. There are 2 parts, one is fixed and the other one is moving. The main advantages of this test are to check yield strength,tensile strength and ductile property of material. In this work effects of carbon content, strain rate and sampling orientation on hot ductility were investigated. Click Part, OK. A tensile test applies tensile (pulling) force to a material and measures the specimen's response to the stress. Tensile Properties / Tensile Testing (Up to 10,000 lb, -40 °C to 150 °C) Fatigue Testing and Fracture; Leak Testing (Helium, Pressure Decay, and Flow) Whatever your environmental testing needs, we work with you to provide the best climatic simulation testing to meet your product development requirements. APPLICATIONS. Commonly-employed experimental techniques are the in situ tensile test using inelastic X-ray diffraction (Diddens et al. C. Plasticity Simulation Tips & Tricks August 14, 2015 By: Peter Barrett Share: Using an elastic modulus and Poisson's ratio for the initial simulation is a good practice for all analyses , since it solves in a single iteration and provides benchmark stress results. simulation of tensile conditions of wood joints and to reveal the principles governing the tensile forces required for breakage. As the rapid development of metal material science, various kinds of metal materials have emerged. In this study, a simulation of a tensile test, which is a representative material test, is performed using a computer program (Abaqus CAE). After the material break, the final length and cross sectional area of the specimen is used to calculate the percentage of elongation and percentage of reduction. ppt), PDF File (. Reddy. 
casting of steels by tensile testing. The breaking load is the load at which the specimen breaks. 2. Cabezas, C. This simulation has been designed to support the teaching of Tensile Testing at A-Level. Tensile Test Simulation of CFRP Test Specimen Using - ScienceDirect. Description. rubber) This simulation has been designed to support the teaching of Tensile Testing at A-Level. The simulation results were in good agreement with the actual tensile test results. pdfVirtual Online Tensile Strength Testing Simulation. Material parameters for Johnson-Cook strength and fracture models for armour steel PROTAC 500 were previously determined based on combination of experimental test and numerical analyses. The uniaxial test is I have to simulate a tensile test of a steel-sample and also be able to show the fracture in ANSYS static structural. The testing is performed by applying the above said loads for different samples. Digital image correlation(DIC) method has been applied to capture the deformation and strain distribution in the triangle region. Simulation of tensile testing with stress-strain A tensile test is a reliable way to get data about how different processes may affect the performance of your final product – processes such as sterilization, extended aging periods, and exposure to various temperature and humidity conditions. We will look at a very easy experiment that provides lots of information about the strength or the mechanical behavior of a material, called the tensile test. BRAZILIAN TEST According to Martin (1993), the mean Brazilian strength of LdB granite is 8. Properties that are directly measured via a tensile test are ultimate tensile strength, breaking strength, maximum elongation and reduction in area. G. Conclusions Steel foam is emerging as a new structural material with intriguing Tensile Test Presentation - Download as Powerpoint Presentation (. To simulate the tensile behaviour of a dogbone specimen, including damage and failure, you have to use a damage model. These analyses serve as a basis for verification of composite material properties and isotropic material properties as well, moreover, failure procedure of dimensions of tensile and compression test samples are shown in Fig. Via tensile testing of various materials I have been supplied with Load-Extension tables. Tensile tests are typically conducted on electromechanical or universal testing instruments , are simple to perform, and are fully standardized. Optimization(1) 12th International LS-DYNA ® Users Conference 8 Figure 9. Supported through NSF-DUE, this TUES Type 1 project is 1) developing an open source, virtual, online 17 Apr 2012 This simulation has been designed to support the teaching of Tensile Testing at A-Level. 00. I usually consider only Von …04/12/2018 · Hello all, I was recently messing with the tutorials in the non linear analysis chapter and this post references to the TUTORIAL: RD-3500. astm. the tensile test results. ABAQUS SIMULATION 34,880 views. That requires a uniform distribution of axial stress throughout the composite test, or gage, section between the tabs. material. , 040 01 Kosice, Slovakia Abstract Tensile stress and strain are calculated from the result. I usually consider only Von …A static tensile test is simulated using shell elements. The Modulus of elasticity is defined as E Tensile mechanical behaviors of two kinds composite laminates G803/5224 and G827/5224 specimen after high-speed impact were investigated separately. 
Simulating a tensile test can be a replacement of experiments to …Virtual Online Tensile Strength Testing Simulation. If you still need help with COMSOL and have an on-subscription license, please visit our Support Center for help. Table 1. Simulation: Games - Search for materials, save the patient, beat the deck Related Searches for tensile test simulation: horse riding simulator f1 simulator flight simulator custom car paint simulator laser shooting simulator ship simulator racing simulator car paint simulator roller coaster simulator hair color simulator shooting simulator 7d cinema simulator game simulator well control simulator train simulator MoreTensile testing, also known as tension testing, is a fundamental materials science and engineering test in which a sample is subjected to a controlled tension until failure. The testing procedure is further explained in this article: The testing procedure is further explained in this article:Re: Verifying Tensile Test on Sheet Steel Jared Conway Nov 18, 2013 8:14 PM ( in response to Ben Morrissey ) take a step back and walk us through your calculation, the assumptions it makes and then the setup. for the study named "study 2" just run your model! in the middle of the solving a message appears click "No" on the message. cfg. G. For the study named "Tensile test" just add a "fixed geometry" (fixtures adviser >Fixed geometry) on the opposite side of the force. King, click on the button below to learn more or schedule a test today. Author: Surya Pratap SinghViews: 3KTensile Testing Simulation by matsci | Teaching Resourceshttps://www. Put a test piece of steel in the tensile testing machine to carry out a load strain experiment. The simulation results showed that the elongation A broken fibre sample after test Figure 3 Tensile testing samples 2014 Simulation of tensile tests of hemp TENSILE TEST ANALYSIS OF NATURAL FIBER REINFORCED COMPOSITE Here the simulation was carried out on specimen under Tensile Test Analysis Of Natural Fiber performed tensile test. Mr. 9 4. Contarctor has received 3 plates ( SA 516 GR 70 N - 30 mm Thk) which are having one heat number and 3 different Plate No. Simulation of Vickers hardness test on a substrate with coating. "Optimization and simulation research of tensile properties of wood lap joint," BioRes. simulation. different shear strength values. 3 Impact Test 281 7. Table 3: Comparison of the Aluminum 6061-T6 Results to the Input Material Properties for the Johnson-Cook Plasticity Parameters. 3. S. Vitek z Institute of Ph ysics of Materials, Academ y of Sciences of the Czec h Republic,15/04/2012 · I would like to simulate a high velcity (500 mm/s) Tensile Test of a polymer (Polyurethane). 2008), Raman (Rusli and Eichhorn 2008; Sturcová et al. This is based on Von Mises criteria. 2 Compression Test 278 7. Introduction: This specimen is nickel paint coating made up of three layers each of three different grain sizes. 8/5(4)Brand: TESExample 11 - Tensile Test - Altair Universityhttps://altairuniversity. Advantage server motor, high quality warm gear gearbox, and precise roll bearing screw are adopted to ensure tensile tester machine the accurate control and results. Innovative and unique solutions for current and future tasks increase productivity and ensure efficient test operation. 
As you can see in the attached picture, despite the Engineer data, Geometry, Model, Mesh and Analysis settings, including applied loads, have been defined as is denoted by the green check mark, in Solution step the We use cookies to offer you a better experience, personalize content, tailor advertising, provide social media features, and better understand the use of our services. Self test. feel free to post your model and the calcs using the advanced editor. FEM analysis was operated stretching process only. The simulation data is compared with the quasi-static tensile test data of a new kind of aluminum and the verification of accuracy is based on numerical fitting Research on Design and Simulation of Biaxial Tensile-Bending Complex Mechanical Performance Test Apparatus Tensile Properties of Aluminum using Lloyds Testing Machine Strain hardening prior to test Tensile Strength or Ultimate Tensile Strength: Tensile test simulation of high-carbon steel by discrete element method Simulating a tensile test can be a replacement of experiments to determine mechanical parameters of a continuous material. Weiss Technik’s A2LA Accredited test laboratory provides environmental simulation testing to meet your testing needs from product qualification testing, overflow testing and /or third party product validation. , near Philadelphia, PA in the USA, performs the tensile test in accordance with industry standards and specifications, including ASTM tensile test methods. Sarfarazi b , A. [4]GLoWA cki et al. plastic strain data atleast upto 95% of the true strain And one more important point is real materials contain imperfections and voids etc which grows due to deformation and lead to failure. Tensile test experiments with flat specimens for SPCC and JAC340H are presented and simulated in this paper. Simulation of tensile tests of hemp fibre using discrete element method Abstract: Tensile strength is an important property of hemp fibre, because it determines the mechanical strength of fibre-based products such as biocomposites. . Universal / Tensile Testing Machine Shimadzu offers a range of first-class tensile testing instruments to meet R&D requirements in the development of safer and higher quality materials and products. Properties that are directly measured via a tensile test are ultimate tensile strength , breaking strength , maximum elongation and …International Journal of Mechanical Sciences43(2001)2237–2260 Molecular dynamics (MD) simulation of uniaxial tension of some single-crystal cubic metals at nanolevelHello, I have set up a transient structural finite element analysis for an elastomer sample trying to simulate a tensile test. StampingSimulation has been speaking for years about the importance of material testing via the uniaxial tensile test. Afterwards, comparison of results and conclusions end this article. pdf), Text File (. Measure the change in length while adding weight until the part begins to stretch and finally breaks. Borse 2 ,J. I'm a Final year student of Mechanical Engineering in Obafemi Awolowo University. 149-152. Simulating a tensile test can be a replacement of experiments to How does StampingSimulation. 13 Mar 20178 Nov 2016The tensile test is one of the fundamental experiments used to evaluate material properties. this simulation box is fully periodic in x, y, z directions. Plate condition - Plate Hot Rolled. 
With Tensile Testers like the AG-X plus Series, AGS-X Series and the Table-top EZ Test Series, Capillary Flow and Endurance Testers, and a variety In the simulation, the same flow stress which was obtained by the iterative approach to identifying material property using tensile test of cylindrical specimen [7]. 328 N. **Stress/Strain** **Young's modulus** --Department of Virtual Online Tensile Strength Testing Simulation. We would like to show you a description here but the site won’t allow us. Consequently, the simulation results should be more accurate when using flow stress data from the bulge test compared to the tensile test. García and A. Tensile test using ABAQUS- part 3 XFEM - Duration: 6:04. [email protected] So punch test can be a proper test for determination of tensile strength of concrete in absence of direct test. Tensile Test Presentation - Download as Powerpoint Presentation (. The sample has a shoulder at each end and a gauge section in between. For torsion tests, I have taken equivalent tensile stress = shear stress*sqrt(3) and equivalent tensile strain as = shear strain / sqrt (3). For this comparison, upsetting and tensile test were selected as compressive and tensile forming mode. Goldberg National Aeronautics and Space Administration Glenn Research Center Cleveland, Ohio 44135 Summary A research program is underway to develop strain rate dependent deformation and To test their reliability, lithium batteries are subjected to various tests in the field of environmental simulation. Adhesive Tape Tensile Test Machine Adhesive Tape Tensile Test Machine is a simple type machine, simple structure, convenient operation, It can be tested on the operated table, Using the electronic control systems, load sensor isrising and falling totest the tension or compression through the motor rotation, transmission machinery and T-screws. Tensile Testing of Structural Metals Measure the length of the test section between fillets. I have found out an inverse way for evaluating the post necking stress strain curve through simulation. txt) or view presentation slides online. SuperForge, forming simulation programme Introduction Results for fem simulation of tie rod tensile test tmt 2014 A Masters Thesis Project in Cooperation between - DiVA. For better understanding, here is a quick resume: It is about a non linear simulation of a tensile test plate with anisentropic material property. All templates and calculations comply with ASTM E8/E8M standards. project is an experimental research on the effect of varied die casting process parameters on the quality of …15/04/2012 · Hello everyone, I would like to simulate a high velcity (500 mm/s) Tensile Test of a polymer (Polyurethane). These include: Tensile specimens and test machines Stress-strain curves, including discussionsof elastic versus plastic deformation, yield points, and ductility True stress and strain Test methodology and data analysis It should be noted that subsequent chapters con-tain more detailed information on these topics. . Moreover, the way I'm looking at is, my input data into model (derived from tensile test) is matching the simulation output data when I simply use 1-Element. Hedayat c , and A. Materials) is the specification typically required for non-ferrous products. 2 to 20 inches per minute and will influence the results. compared with the simulation results. 
2009) Physical simulation Fusion welding Conventional welding Thick plate Beam welding Thick plate Conventional welding Thin plate: Tensile test specimen - initial state Initial state Tensile test specimen after simulation study the flexural-tensile performance of reinforced concrete. [33] M. pdf - CAE v 6. This is a quick summary to decide if this test is right for you, and to point out what equipment you need to perform the test. A proof test is designed to observe the material under a specified torque load over a set period of time. simulation and experimental test it was D. Food containers and plastic film tensile strength. Tensile test is a destructive testing ,where sample is made in Standard size. Analysis of the Bridgman Procedure to Characterize the Mechanical Behavior of Materials in the Tensile Test: Experiments and Simulation. com ABSTRACT The characteristic of plastic which are easy made and shaped, make plastic become famous in industry. RESULTS AND ANALYSIS Results and Analysis for Destructive Tensile Test After the destructive tensile testing, three forms of specimen failure can be As for the accuracy of this system, comparison between experiments and FEM simulation both of this test machine and other high-velocity-tensile-test machines have clarified the feature of one bar method and the metallurgical features of high velocity deformation. We need to test the material strength by doing virtual tensile testing. – The tensile test is one of the fundamental experiments used to evaluate material properties. The influences of open-hole on the specimen’s net tensile strength were analyzed by comparing with standard specimen’s experimental results. The samples were cylindrical in cross section, with a reduced gage The intersection of this parallel line with the flow stress curve gives the value of Y. 9 PFC2D GBM 202 60 6. For the development and testing of the interaction between the occupant, the seat and the head rest, appropriate test Tensile Test Numerical Simulation with Finite Element Method The paper compares calculus made by Cosmos MDesign Star software with classical tensile calculus, statistical values and with experimental tensile test for standard tension test specimen. The most common complication after a rear impact and a feared cause of chronic disorders is the whiplash injury. 9-1 software has been used to establish a 3D model for simulation of the tensile test [7] Helius: MCT Enhanced composite simulation, Tutorial 1. The shoulders are wider than the gauge section which causes a stress concentration to occur in the middle when the sample is loaded with a tensile force. Purpose – The tensile test is one of the fundamental experiments used to evaluate material properties. 1 Specimen for standard tensile test. For more information about tensile and compressive force tests with J. You will perform calculations using the graph to better understand the graph and important data. Click New. 8(1), 1409-1419. The paper aims to discuss these issues. For tensile test in Ansys, you will have to go with explicit dynamics analysis, because tensile test in a non-linear case and using explicit dynamics will give you the best results. 
Case Study: Diamante Pull-off Test, Bow Pull-off Test, Button Pull-off Test, Press-Stud Popper Pull-Off Test Extruded medical tubing tensile strength and elongation Plastic extruded medical tubing is an essential health care product for the delivery of blood, nutrients and gases to the patient, and for the execution of minimally invasive surgical procedures. For the T-shaped hooking structure, we can find that only the T-shaped structure was destroyed but the groove structure was not damaged in bending/tensile test or simulation. Bounce the Ball Choose balls and compare their properties in a 'bounce test'. 2. Simulation 2. One can do a very simplified test at home. Park, D. Do you also have the full stress-strain curve from a tensile test sample? If so you can create the Multilinear Plasticity material model by followingWith our test system, property values on test specimens in form of mechanical stress are determined by a swift, static, swelling or alternating course by means of an electromechanical drive. Please how can I use COMSOL to carry out tensile test on 8 A6063 aluminum alloy specimens and obtain results like the ones gotten experimentally. The mechanical properties of the material studied are shown in Table 1. Every FE simulation starts with the input of material data (elastic and plastic behaviour for a mechanical problem) ↓ For the metalforming industry the input of correct material data is vital to the success of the FE simulation ↓ From a basic tensile test one obtains the true plastic behaviour before necking ↓ additional experiment and its simulation revealed the necessity for further consideration of the parameters. At this stage you are able to insert the test piece into the grips of the tensile testing machine and can carry out a load strain experiment. Scribd is the world's largest social reading and publishing site. Code: SIM-TT. TY - CPAPER AB - Virtual Online Tensile Strength Testing Simulation Supported through NSF-DUE, this TUES Type 1 project is 1) developing an open source, virtual, online tensile testing laboratory simulation; 2) conducting research to compare the costs and learning outcomes for using on-site, hands-on tensile testing equipment versus an online AFFDL-TR-78- 169 SIMULATION OF THE DYNAMIC TENSILE CHARACTERISTICS OF NYLON PARACHUTE MATERIALS QRobert E. Environmental simulation test chambers – Exterior, interior – Optional equipment – Advantages More. Which Simulations - carry out your own materials science related tests and experiments with these interactive simulations. im trying to verify a tensile test of a rubber like material. tensile test simulationTensile Test Simulation. International Journal of Modern Manufacturing Technologies ISSN 2067–3604, Vol. Hello everyone, I would like to simulate a high velcity (500 mm/s) Tensile Test of a polymer (Polyurethane). Instron is the market leader in crash simulation sled systems with over 80 facilities installed worldwide. (which essentially means the 1-element is following the Stress-strain curve defined through MAT_24 card). 
You will perform calculations Virtual Online Tensile Strength Testing Simulation Abstract Supported through NSF-DUE, this TUES Type 1 project is 1) developing an open source, virtual, online tensile testing laboratory simulation; 2) conducting research to compare the costs and learning outcomes for using on-site, hands-on tensile testing equipment versus an online Simulation of tensile test results Carbon cloth type Number of plies Resin σ m [MPa] ε m [%] Orientation of plies [°] Carbon plain 160 3 Epoxy 160 709 1. – The tensile test is one of the fundamental experiments used to evaluate material properties. This quiz will test what you have learnt about the tensile test. Type initial data into the tensile test program (figure 1. #tensile test using #Abaqus - part 1:Diagram #stress-strain #ABAQUS #ABAQUS_Simulation #Simulation #CAE#CAD Virtual Online Tensile Strength Testing Simulation. 4 Flexure Test 285 . Full relaxation of both external and internal parameters is performed. If you are not confident in your knowledge of a tensile curve, please go back to the menu and work through the simulation and calculations again. M. Tensile Test DX52 Thinning. Tensile Testing of Metals is a destructive test process that provides information about the tensile …Now the simulation did stop at 2. The observations during the test are recorded in the Max test software attached with the UTM. 7. 2 / 2010 45 OVERMOLDING INJECTION MOLDING SIMULATION OF TENSILE TESTPractical 7 : Universal Tensile Test on Mild Steel and Brass Specimens Introduction: The universal tensile test is to determine the strength of materials. The rate at which a sample is pulled apart in the test can range from 0. Studied by Finite Element Simulation Xiaolong Dong 1, Hongwei Zhao1,+, Lin Zhang1,2, Hongbing Cheng and Jing Gao1 Fig. ask. 2) Learn how to simulate uniaxial tensile test using LAMMPS. The properties it determines The properties it determines (elastic modulus, yield strength, elongation at fracture, to name a few) are very useful to the design engineer. Whether you are carrying out temperature, climate, vibration, corrosion, emissions, altitude, pressure or combined stress testing, we have the right solution and can supply systems in all sizes. With our test system, property values on test specimens in form of mechanical stress are determined by a swift, static, swelling or alternating course by means of an electromechanical drive. Celentano, DJ, Cabezas, EE & García, CM 2005, ' Analysis of the Bridgman procedure to characterize the mechanical behavior of materials in the tensile test: Experiments and simulation ' Journal of Applied Mechanics, Transactions ASME, vol. 3 Plasticity Simulation Tips & Tricks This is achieved by converting nominal stress and engineering strain data typically output from a tensile test into true with tensile testing. Dear all: I am trying to simulate bulk copper under tensile loading, following is my input file ===== # 3d metal tensilr simulation units metal boundary s s p atom_style atomic lattice fcc 3. 
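One of the tensile-test fragments in this section notes that plasticity input for finite-element simulation is obtained by converting the nominal (engineering) stress and engineering strain output of a tensile test into true stress and strain. Here is a minimal sketch of that conversion, valid only up to the onset of necking (uniform elongation), where the usual constant-volume assumption holds; the example numbers are illustrative.

```python
import math

def true_stress_strain(eng_stress, eng_strain):
    """Convert engineering stress/strain to true stress/strain.
    Only valid before necking, i.e. up to uniform elongation."""
    true_strain = math.log(1.0 + eng_strain)
    true_stress = eng_stress * (1.0 + eng_strain)
    return true_stress, true_strain

# Example: 400 MPa engineering stress at 10% engineering strain (illustrative numbers)
print(true_stress_strain(400e6, 0.10))  # ~(440 MPa, 0.095)
```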
Material parameters for Johnson-Cook strength and fracture models for armour steel PROTAC 500 were previously determined based on combination of experimental test …Tensile testing is described, covering test specimen form, determination of the engineering stress/strain curve, and derivation of test results: ultimate tensile strength, yield point, elongation, reduction in area, Young's modulus of elasticity and proof stress17/04/2018 · Hi, I am running Linear static FEA in Solidworks simulation for an assembly in which all the parts are made up of steel. org/virtual-online-tensile-strength-testing-simulation. With the help of this application, users can With the help of this application, users can easily realize the yield strength, ultimate strength and fracture strength on stress strain diagram. Finite-element mesh before tensile. plate manufacturer has not done Simulation heat treatment of test coupon. The test result can verify the validity of the simulation result. Supported through NSF-DUE, this TUES Type 1 project is 1) developing an open source, virtual, online tensile testing laboratory simulation; 2) conducting research to compare the costs and learning outcomes for using on-site, hands-on tensile testing equipment versus an online simulation; 3 If your sheet metal forming project requires stretching or deep drawing it is important to accurately determine the tensile strength and failure point of the material using the uniaxial tensile test. 12. Visualization on static tensile test for unidirectional CFRP tensile test, fracture, splitting and numerical simulation [3], [4]. Low Speed Rear Impact Simulation. Re: [lammps-users] simple 2D tensile test simulation FIX settings Re: [lammps-users] simple 2D tensile test simulation FIX settings Simulation of tensile test by node separation method The condition whereby a material is divided into two materials upon tensile test, which is a representative material test, has been simulated by the program. Simulating a tensile test can be a replacement of experiments to …With our test system, property values on test specimens in form of mechanical stress are determined by a swift, static, swelling or alternating course by means of an electromechanical drive. Shown below is a graph of a tensile test for a common steel threaded rod, providing a good example of a general metal tensile test. Labthink Tensile Strength Testers are featured by easy-to-use, high accuracy, customized for various test items. 1 Tensile Test 275 7. 4. One end of the specimen is constrained, while concentrated nodal loads are applied at the other end. The strain and stress characteristics of materials are prerequisites of its application in the actual production. Skip navigation Sign in. the nite-element mesh of tensile test simulation (10000 elements, tensile velocity 1 mm/sec), where before loading, during loading, and after fracture are shown. Tensile testing, also known as tension testing, is a fundamental materials science and engineering test in which a sample is subjected to a controlled tension until failure. 8 MPa, which is slightly greater than its direct-tension Drop Test Simulation and Verification of a This type is an elastic-plastic material where it is possible consider different tensile and Dog bone tensile test samples are primarily used in tensile tests. 276 CHAPTER 7 SIMULATIONS OF EXPERIMENTAL WORK PERFORMED ON COMPOSITE SPECIMENS BY USING ANSYS 7. II, No. A special test rig was set-up to allow the gluing of the tension specimens at a variable thickness. 
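The Johnson-Cook strength model referred to above is mentioned without being written out. A hedged sketch of its usual flow-stress form, sigma = (A + B*eps^n)(1 + C*ln(eps_rate*)) (1 - T*^m), is given below; the parameter values in the example are placeholders for illustration, not the PROTAC 500 calibration the text refers to.

```python
import math

def johnson_cook_flow_stress(eps_p, eps_rate, T,
                             A, B, n, C, m,
                             eps_rate_ref=1.0, T_room=293.0, T_melt=1800.0):
    """Johnson-Cook flow stress: strain hardening x strain-rate hardening x thermal softening.
    A, B, n, C, m are material constants fitted from tensile/torsion test data."""
    T_star = (T - T_room) / (T_melt - T_room)            # homologous temperature
    hardening = A + B * eps_p**n                          # quasi-static strain hardening
    rate_term = 1.0 + C * math.log(max(eps_rate / eps_rate_ref, 1e-12))
    thermal = 1.0 - max(T_star, 0.0)**m                   # softening above room temperature
    return hardening * rate_term * thermal

# Placeholder constants (illustrative only, not a calibrated data set):
print(johnson_cook_flow_stress(eps_p=0.05, eps_rate=100.0, T=293.0,
                               A=900e6, B=500e6, n=0.26, C=0.014, m=1.0))
```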
Keywords: High speed tensile test, material characterisation, finite element simulation 1 INTRODUCTION & MOTIVATION The enhanced formability of aluminum at high strain rates makes high speed forming processes of this While a nanoscale tensile test simulation has been used in arequired sophomore level materials laboratory course for a number of semesters, the integrationwith the traditional tensile test lab has not realized an optimal impact on students’ learning. This system allows our team of engineers to analyze the tension and compression flexural properties, density, specific gravity, impact resistance and notch toughness. used to design web-based virtual tensile test laboratory application. **Stress/Strain** **Young's modulus** --Department of Engineering-- University of Liverpool-- Read more Tensile Test. Tensile mechanical behaviors of two kinds composite laminates G803/5224 and G827/5224 specimen after high-speed impact were investigated separately. 2, which also show one of the programme windows where the user can select material sample to test. This is an important concept in engineering, especially in the fields of material science, mechanical engineering and structural engineering. Th e simulation shows excellent agreement with the experiments re-garding tensile modulus, breaking force, and elonga-tion at break (Figure 5a). 5 0 – 45 - 0 As the Table 2 shows, the values tensile strength obtained by simulation are higher than the values from real To test, you need to save a CFG file as well, such as cnt8x3. ( as heat number is same for each 3 plates) The tensile test in transition metal disilicides with C11b structure is simulated by ab initio electronic structure calculations using full potential linearized augmented plane wave method (FLAPW). The units of engineering stress are ksi , which stands for a thousand pounds per square inch. Simulation of tensile testing with stress-strain curve input from measurement I would have to test it, but I don't have access to a machine right now. Want to Learn More? Interested in learning more about what is possible with advanced forming simulation software? Tensile & Mechanical Materials Strength Testing Request Information. It is done under constant strain rate and constant temperature. Celentano, E. In the next few videos, simulation of fracture prediction in tensile test is presented. I must The simulation was designed to emphasize the equations used as part of tensile testing rather than solely on the use of the tensile testing equipment. Comparison of shear simulations with experiments 5. But the tensile fracture Introduction: Tensile testing machine is suitable for testing the relationship between material force and deformation, and can conduct tension, bending, peeling, compression, sagging, adhesion, tear and other mechanical tests and analysis to metal and nonmetal raw material and workpiece. Comparison of the ultimate failure between tensile test and simulation. When fabric reinforcement is I've carried out the experiment but I need to do a computer simulation to compare experimental and numerical results. g. Tensile Test Equipment, Tensile Test Machine, Tensile Strength Testing Machine manufacturer / supplier in China, offering Vertical Universal Paper Tensile Strength Tester, ISO Two Layers Simulation Climatic Chamber Testing Equipment, High Low Temperature & Humidity Environmental Test Chamber and so on. 
06/02/2018 · To simulate the Vickers test, you need to know the yield strength of the material sample, but the Vickers test is often used to estimate the yield strength of a material. This study focuses on exploring the mechanical properties and nonlinear stress-strain behaviors of monoclinic Ni 3 Sn 4 single crystals under uniaxial tensile test and also their size, temperature, and strain-rate dependence through constant temperature molecular dynamics (MD) simulation using Berendsen thermostat. Figures 6 and 7 display the e ective strain di-agram, revealing that the maximum e ective strain Figure 3. The tensile strength obtained in ANSYS is 26. It is calculated by dividing the maximum load applied during the tensile test by the original cross sectional area of the sample. com/wp-content/uploads/2012/08/Example_11. CSAadvanced The CSA Advanced catapult is a complete solution for high volume testing and advanced applications, including pitching simulation. Alibaba. In the next few videos, simulation of fracture prediction in tensile test is presented. Takla / Materials and Design 54 (2014) 323–330 is defined as material damage criterion. It can be used to demonstrate how tensile testing experiments are carried out to find out the effect of carbon content on the mechanical properties of steel. Tensile Versus Compressive Moduli of Asphalt Concrete exact laboratory simulation of in situ conditions may be pro­ for the triaxial tensile test and TENSILE TEST MACHINE SIMULATION FOR IMPROVING METACOGNITIVE SKILLS An Image/Link below is provided (as is) to download presentation. Tensile test is a destructive testing ,where sample is made in Standard size. My U. com offers 99 tensile testing simulation products. Innovative Testing Solutions is an automotive test consulting company. Tensile Strength - Capability Statements. This video demonstrates the tutorial for the simulation tensile test of a specimen using the element deletion option in Abaqus/explicit 6. tensile test and comparing the response to the test data. Many engineering applications may require you to find mechanical properties such as stress, strain, or elastic modulus of a material. Laboratory Testing Inc. 615 region box block 0 20 0 20 0 20 create_box 3 box create_atoms 1 pair_style eam pair_coeff * * cuu3 neighbor 0. 14. What can we learn from the tensile test results and what are the usual outputs from a tensile test? Most commonly discussed and understood are these test results, which are provided at the conclusion of any uniaxial tensile test: Yield Strength (YS) MPa or ksi This simulation has been designed to support the teaching of Tensile Testing at A-Level. A “good” tensile test is one that measures the fullest extent of the composite material’s tensile strength during the test. 0 0 – 45 - 0 Carbon twill 220 3 Epoxy 160 756 1. Play. Patil 3 1ME Student, Department of Mechanical Engineering, GulabraoDeokar College of Engineering North Maharashtra University ,Jalgaon ,Maharashtra state, India Tensile Test Correlation ď&#x201A;§ A simple CAE model of the test sample (dog-bone) is created using shell elements. €0. 3 bin neigh_modify delay 5 region left block INF 2 INF INF INF INF region right block 18 Environmental Simulation Any product designed for the outdoor environment will be subjected to a variety of stresses over its lifetime. 12, no. Ultimate tensile strength (UTS) is the maximum engineering stress in a tensile test and signifies the end of uniform elongation and the start of localized necking. 
Simulating a tensile test can be a replacement of experiments to determine mechanical parameters of a continuous material. Numerical Simulation of an Indirect Tensile Test for Asphalt Mixtures Using Discrete Element Method Software Hello Djouhaira Boulo Your Discussion has gone 30 days without a reply. Haeri a * , V. Deformation under defined load: Test If a simple tensile test is conducted on a ductile material, the stress strain curve may look like this. Fig. Simulation and evaluation of carbon/epoxy composite systems using FEM and tensile test Branislav Duleba a *, Ľudmila Dulebová a, Emil Spišák a a Technical University of Kosice, Faculty of Mechanical Engineering, 74 Masiarska St. McCarty Recovery and Crew Station Branch rNastran In-CAD does not have element deletion or fracture capability at the moment. A universal testing machine (tensile testing machine) is needed to perform this test. Effect of Tensile Strength of Rock on Tensile Fracture Toughness Using Experimental Test and PFC2D Simulation 1 H. Force displacement response for uniaxial tensile test simulation. Tensile strength resulted from punch test was close to direct test results. Keywords: testing • tensile test • Erichsen cupping test • strain • cylindrical test; Date added: 21 December 2009; Upsetting Test. (b) Simulation of tension test. Abstract. Static tensile tests The tensile test is conducted on WDW-50KN capacity Electromechanical Universal testing machine. Research on Design and Simulation of Biaxial Tensile-Bending Complex Mechanical Performance Test Apparatus Research on Design and Simulation of Biaxial Tensile-Bending Complex Mechanical Performance Test Apparatus a variety of experimental and simulation methods. Because the tensile test data have been successfully used for many sheet metal FE simulations, the current stamping simulation software, LS-DYNA ® , AutoForm ® and PAMSTAMP ® , all developed material models to directly use uniaxial tensile test data. 08/11/2016 · Simulation of tensile test is performed on ABAQUS software. Tensile testing. Prescribed motion is defined at one end of the sample and fixed at another. You will be asked to drag labels to various parts of a curve and to calculate values. 72 sec from 5 sec with negative Jacobian determinant. Virtual Online Tensile Strength Testing Simulation. DESCRIPTION OF TIE ROD FE-MODEL In order to conduct the tensile test of tie bar from housing of tie rod assembly (tensile force, F=30000 N) designed for assembling wheel transmission system of passenger vehicles, finite elementTensile random simulation Compressive random simulation Compressive test #1 Compressive test #2 Compressive test #3 Tensile test #1 Tensile test #2 Tensile test #3 Tensile test #4. Simulation: Whack a Stick A simulation showing the first three mode shapes of "longitudinal vibration" that happen when you hit one end of a wooden batten with a hammer. A torsion test for failure requires that the test sample be twisted until it breaks and is designed to measure the strength of the sample. Steve Wendel serves as Director of the National Center for This simulation has been designed to support the teaching of Tensile Testing at A-Level. During the test, the specimen elongation, reduction and applied load are measured. Tensile strength is the ability of a material to withstand a pulling (tensile) force. 
Reported applications include the simulation of the dynamic tensile characteristics of nylon parachute materials, using a tensile impact test fixture and a drop sled with weight plates, and work on the tensile properties of two-dimensional woven reinforced composite laminates after high-velocity impact. For rock, discrete-particle models are calibrated against laboratory data: a PFC2D grain-based model of Lac du Bonnet granite was compared with laboratory values for unconfined compressive strength (205 MPa), Young's modulus (60 GPa) and direct-tension strength, the Brazilian (indirect tension) test has been modelled with DEM, and the effect of rock tensile strength on tensile fracture toughness has been studied with both experiments and PFC2D. A tensile-test-based material identification program, AFDEX/MAT, has been applied to two new pre-heat-treated steels and a conventional Cr-Mo steel (9th Asia-Pacific Conference on Engineering Plasticity and Its Applications, 2008).

Static tensile tests in one study were run on a 50 kN electromechanical universal testing machine (WDW-50), with the test conditions set according to the relevant standard. For composites, fibre orientation dominates: the tensile strength of specimens with the reinforcement oriented at 0° is governed by the tensile strength of the fibres, while 90° specimens fail by crack propagation through the matrix and/or the fibre/matrix interface. For nanoparticle-filled systems, studying the interaction between particles and matrix under tensile load requires a nanoscale model even though the experimental data come from macroscale specimens. Punch tests give tensile strengths close to direct tensile test results while needing smaller samples and less material. On the software side, Nastran In-CAD currently has no element deletion or fracture capability, whereas Abaqus provides built-in damage models such as the Gurson model and the ductile damage model; how to vary the strain rate in an Abaqus/Explicit tensile simulation is a recurring user-forum question.
The NSF-supported virtual tensile testing simulator mentioned above lets students work through the full procedure; its mild-steel tension test exercise, for example, asks them to report the three different yield strengths that are evident in the data. Standard quantities and symbols recur throughout these materials: stress is denoted σ and strain ε, the ultimate tensile strength (or tensile strength at break) is the force per unit area (MPa or psi) required to break the material, and ductility is judged from the elongation and reduction of area at fracture. A typical student report on tensile tests of mild (low-carbon) steel, duralumin and copper tabulates the maximum load, breaking load, percentage elongation and minimum diameter at fracture for each specimen, obtained by loading each specimen to fracture; before relying on such numbers one should check that the testing machine can measure the tensile strength to sufficient accuracy, and verifying that simulation results are authentic requires comparison with experimental results, ideally the full stress-strain curve from a tensile test sample.

On the research side, tensile tests and simulations of woven composite laminates after high-velocity impact have been reported, as has a finite element simulation of a tensile test on rib cortical bone, motivated by the fact that thoracic trauma is the principal causative factor in about 30% of road traffic deaths. Metallic specimens are tested to ASTM E8/E8M (Standard Test Methods for Tension Testing of Metallic Materials), numerical and experimental studies of laser butt-welded DP980 steels compare the measured tensile behaviour against finite element simulations of the welded samples, and commercial packages such as the tensile testing module for the MTS TestSuite platform automate acquisition and reporting.
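The "three yield strengths" exercise above usually comes down to reading an upper yield point, a lower yield point and an offset yield strength off the curve (that interpretation is an assumption here, not stated by the simulator). As one concrete illustration, the rough Python sketch below implements the common 0.2% offset construction; the data points and the choice of how many points are treated as elastic are hypothetical.

```python
import numpy as np

def offset_yield(strain, stress, offset=0.002):
    """Estimate the 0.2% offset yield strength from a stress-strain curve.
    A rough sketch: fit E to the initial (assumed elastic) points, then find
    where the curve crosses the offset line stress = E * (strain - offset)."""
    E = np.polyfit(strain[:4], stress[:4], 1)[0]   # slope of elastic portion
    diff = stress - E * (strain - offset)
    idx = np.where(np.diff(np.sign(diff)) < 0)[0]  # first downward crossing
    if len(idx) == 0:
        return None
    i = idx[0]
    # Linear interpolation between the two bracketing points.
    t = diff[i] / (diff[i] - diff[i + 1])
    return stress[i] + t * (stress[i + 1] - stress[i])

strain = np.array([0.0, 0.0005, 0.001, 0.0015, 0.003, 0.006, 0.01, 0.02])
stress = np.array([0.0, 100.0, 200.0, 300.0, 340.0, 360.0, 375.0, 400.0])
print(f"0.2% offset yield ~ {offset_yield(strain, stress):.0f} MPa")
```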
Tutorial modules often animate what happens during the test: one module of a tensile test tutorial shows a simulation of the atomic interactions occurring in the specimen together with the corresponding stress-strain curve, and molecular dynamics simulations of uniaxial tension have reproduced experimental tensile and shear moduli to within roughly 2-6% and 3-15% respectively. At the machine level, a screw-driven universal test machine uses a motor to move the crosshead and grips, while servo-hydraulic crash simulators reproduce standardized and user-defined crash tests, including side impact and vehicle pitch simulation, with active pitch-control units and quick-clamping systems for test-setup pallets. For concrete, the splitting tensile strength test on a cylinder (ASTM C496, similar to codes such as IS 5816:1999) is an indirect way to determine tensile strength. Atomistic scripts follow the same pattern as the copper example above: one LAMMPS script first creates a small cubic simulation box of fcc nickel atoms with <100> orientations along all three axes and then applies uniaxial strain.

For ductile fracture in Abaqus, a user who wants to simulate a tensile test with the "damage for ductile metals" / ductile damage option must supply three inputs in addition to the elastic and plastic behaviour: the equivalent fracture strain at damage initiation, the stress triaxiality, and the strain rate. Choosing a sensible first set of these parameters is a frequent stumbling block, and the Abaqus manuals include a worked tensile test example. On the test-report side, the breaking or rupture point is located on the test graph and the corresponding stress is computed as σ = P/A. Tensile test simulations have also been run for 4140 and 4340 steels with similar results, hand calculations have been compared against SimulationXpress and SolidWorks Simulation (with reference-geometry restraints) for a steel cylinder in tension, and rigid-plastic finite element analysis of a cylindrical tensile specimen is a classical benchmark.
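Of the three ductile-damage inputs just listed, stress triaxiality is the one most often computed from simulation output rather than measured. The small, self-contained Python sketch below uses the usual definition as the ratio of mean stress to von Mises equivalent stress; the example stress tensor is hypothetical, and a smooth round tensile bar gives the reference value of 1/3.

```python
import numpy as np

def triaxiality(sigma):
    """Stress triaxiality eta = sigma_mean / sigma_vonMises for a 3x3 Cauchy
    stress tensor.  For a smooth round bar in uniaxial tension eta = 1/3,
    the usual reference value when tabulating ductile-damage data."""
    sigma = np.asarray(sigma, dtype=float)
    mean = np.trace(sigma) / 3.0
    dev = sigma - mean * np.eye(3)                 # deviatoric part
    von_mises = np.sqrt(1.5 * np.tensordot(dev, dev))
    return mean / von_mises

# Uniaxial tension of 400 MPa: expect eta = 1/3.
uniaxial = np.diag([400.0, 0.0, 0.0])
print(f"uniaxial tension: eta = {triaxiality(uniaxial):.3f}")
```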
In finite element work more broadly, artificially high stress concentrations are often caused by incorrect boundary conditions (loads and restraints), and a common question is why the specimen in an ANSYS Workbench tensile simulation never necks or breaks; in a standard test coupon, necking should ideally occur near the centre of the gauge length. Numerical biaxial tensile tests based on the homogenized crystal-plasticity finite element method have been used to characterize aluminium alloy sheets for forming simulation, and the high-strain-rate tensile response of polymer matrix composites has been studied computationally. Experimentally, tensile laboratories typically run an Instron load frame with BlueHill data acquisition software; single-column, double-column and horizontal machines cover different load ranges; one study tested the static tensile strength of glued aluminium specimens to DIN 50125, another formed blanks of commercially pure titanium, and a stated teaching objective is to perform tensile tests on mild steel and brass and compare the results. Environmental simulation complements this mechanical testing: any product designed for the outdoor environment is subjected to a variety of stresses over its lifetime, and various test standards address simulating that environment in a controlled fashion using temperature-change cabinets, climatic chambers, vibration, mechanical shock and water spray.

Regarding flow-stress input for forming simulation, the bulge test cannot provide data at low strains close to the yield point, so bulge data are normally combined with, or compared against, uniaxial tensile data; such comparisons also help establish which test provides the more accurate flow-stress curve and therefore the more accurate simulation results. The tensile test, for its part, only provides flow stress up to the onset of necking, which is why hardening extrapolation matters in stamping work.
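When the tensile flow-stress data stop at necking, a common workaround (a generic approach, not specific to any package named above) is to fit a hardening law and extrapolate it. The sketch below fits the Hollomon law, sigma = K * eps^n, by linear regression in log-log space; the strain/stress pairs are hypothetical.

```python
import numpy as np

# True plastic strain / true stress pairs (hypothetical, up to necking).
eps_p = np.array([0.01, 0.02, 0.05, 0.08, 0.12])
sigma = np.array([320.0, 355.0, 420.0, 460.0, 505.0])   # MPa

# Hollomon hardening law sigma = K * eps^n becomes linear in log-log space,
# so a least-squares line gives n (slope) and K (exp of the intercept).
n, logK = np.polyfit(np.log(eps_p), np.log(sigma), 1)
K = np.exp(logK)
print(f"K ~ {K:.0f} MPa, n ~ {n:.3f}")

# The fitted law can then be evaluated beyond the last measured point,
# e.g. to extend the hardening curve for a forming simulation.
print("sigma(0.3) ~", round(K * 0.3 ** n, 1), "MPa")
```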
After the material breaks, the final gauge length and final cross-sectional area of the specimen are measured and used, together with the original dimensions, to compute the percentage elongation and the reduction of area. Modern hardware and software support this workflow: pneumatic wedge-action grips for ElectroPuls instruments handle tension, compression, torsion and reverse-stress testing on a wide range of specimens and materials; test software displays load, deformation, displacement, speed, tensile strength, breaking strength and elongation in real time together with the stress-strain curve, and automatically saves, analyses and reports the results; and displacement-controlled tensile templates (for example in MTS TestSuite) package the calculations and report formats. Formability studies such as the DX52 tensile test combine the measured curves with digital image correlation (DIC), which measures the deformation field during the test and whose accuracy can itself be assessed this way. Young's modulus is the slope of the line formed by the stress-strain coordinate pairs in the elastic region.

Further simulation studies in this area include the split tensile test, the crushing of a steel wire rope, which simulation traced to defects in the jaw curve of the clamping mechanism, the behaviour of a polypropylene-fabric model with varying bond strength, reported to depend on the cube root of the element volume, and the joining process itself, where comparing simulated and real joints showed that the models correspond to the real joining and evaluation conditions. For bolted connections, SolidWorks Simulation can calculate the tensile stress area, the minimum area of the threaded section of the bolt, as A_t = [(d_3 + d_2)/2]^2 * π/4 when the "calculated tensile stress area" option is selected in the bolt connector.
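The bolt formula quoted above is easy to sanity-check in a few lines. The M10 pitch and minor diameters used below are standard thread-geometry values, not taken from this document, and serve only to show that the formula reproduces the tabulated stress area of roughly 58 mm^2.

```python
import math

def tensile_stress_area(d2_mm, d3_mm):
    """Tensile stress area of a threaded fastener, A_t = pi/4 * ((d2 + d3)/2)^2,
    using the pitch diameter d2 and the minor diameter d3 (as quoted above)."""
    return math.pi / 4.0 * ((d2_mm + d3_mm) / 2.0) ** 2

# Rough M10 x 1.5 example (d2 ~ 9.026 mm, d3 ~ 8.160 mm from the standard
# thread geometry); the tabulated value is about 58 mm^2.
print(f"A_t ~ {tensile_stress_area(9.026, 8.160):.1f} mm^2")
```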
Conceptually the test is simple: if you can hang one end of a piece of material from a fixed point, you can hang weights on the other end and record how much it stretches, keeping the sample under load until it breaks. Practical modelling follows the same idea. Abaqus tutorials walk through building the stress-strain diagram from a tensile test model, including two-dimensional nonlinear cases and the element-deletion approach mentioned earlier; flat specimens have been characterized by combining tensile tests with numerical simulation; and a dog-bone model based on an ASTM D638 Type IV specimen has been used in Autodesk Simulation to reproduce engineering stress-strain data measured on the same geometry (the conversion to true stress sketched earlier is the usual next step). A finite element model of a tie rod has been used to simulate a tensile test on the tie bar from the housing of a tie-rod assembly for a passenger-vehicle wheel transmission, with an applied tensile force of F = 30000 N. On the atomistic side, questions about FIX settings for a simple 2D tensile test circulate on the lammps-users list. The Materials Classroom that hosts the A-level tensile simulation also offers other interactive materials-science games, such as a "Whack a Stick" demonstration of the first three longitudinal vibration mode shapes of a wooden batten struck on one end with a hammer. At the end of the test, the elongation at fracture and the reduction of area are reported alongside the strength values.
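The elongation and reduction-of-area figures mentioned above are simple ratios of the before and after dimensions. A two-function Python sketch with hypothetical specimen dimensions:

```python
def percent_elongation(L0_mm, Lf_mm):
    """Percentage elongation from original and final (fractured) gauge length."""
    return (Lf_mm - L0_mm) / L0_mm * 100.0

def reduction_of_area(A0_mm2, Af_mm2):
    """Percentage reduction of area from original and final cross-section."""
    return (A0_mm2 - Af_mm2) / A0_mm2 * 100.0

# Hypothetical mild-steel specimen: 50 mm gauge stretched to 66 mm,
# 78.5 mm^2 section necked down to 47 mm^2 at the fracture surface.
print(f"elongation ~ {percent_elongation(50.0, 66.0):.0f} %")
print(f"reduction of area ~ {reduction_of_area(78.5, 47.0):.0f} %")
```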
Other reported studies: the performance of veneer lap joints, which is known to affect the quality of laminated veneer lumber (LVL), has been examined with both tensile experiments and simulation; the RADIOSS material laws 2, 27 and 36 have been used to reproduce the traction-test response of a ductile aluminium alloy (the "Example 11 - Tensile Test" material characterization case); and a transient structural model of an elastomer tensile test has been set up for a material that handles strains up to 300%, the last data point of its measured curve. Tearing strength of fabrics has its own standard, ASTM D2261 (tongue, single-rip procedure on a constant-rate-of-extension machine). Questions also arise around qualification testing, for example whether one sample from each plate must undergo the simulated heat-treatment cycle and tension test, or one sample out of every three plates; and the standard forum advice when a sheet-steel tensile simulation disagrees with hand calculations is to step back and re-derive the calculation, its assumptions, and the model setup. Material calibration itself is iterative: a load or hardening curve is assumed, the tensile test is simulated and compared with the test data, and the curve is modified after each iteration until simulation and experiment agree acceptably; a fit within the scatter between samples of the same batch can usually be achieved without difficulty.
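That modify-and-rerun loop can be automated with a least-squares fit when the material model can be evaluated cheaply. The sketch below (a generic illustration, not the workflow of any cited study) calibrates a Swift-type hardening law to hypothetical test data with SciPy; in a real workflow the model evaluation would be a finite element run rather than a closed-form function.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical measured true stress-strain points from a tensile test.
strain = np.array([0.02, 0.05, 0.08, 0.12, 0.16, 0.20])
stress = np.array([310.0, 365.0, 400.0, 435.0, 465.0, 488.0])   # MPa

# Assumed hardening model (Swift-type law); here it stands in for the
# "simulate the tensile test" step of the manual calibration loop.
def swift(eps, K, eps0, n):
    return K * (eps0 + eps) ** n

# The optimizer adjusts the parameters until model and test agree,
# replacing the manual modify-and-rerun iterations.
popt, _ = curve_fit(swift, strain, stress, p0=(700.0, 0.01, 0.2))
K, eps0, n = popt
print(f"K ~ {K:.0f} MPa, eps0 ~ {eps0:.4f}, n ~ {n:.3f}")
print("max residual:", np.max(np.abs(swift(strain, *popt) - stress)), "MPa")
```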
Validation studies report good agreement between physical tests and models: tensile tests performed with the MR 96 control device came close to the FEM simulation results, FEM made it possible to predict the whole tensile test of the tie-rod assembly, and experimental testing plus FEA validation has been reported for high-temperature tensile tests of aluminium alloy A413 used as a piston material. Industry keynotes on plastics and simulation stress the same point: nothing has changed, and reliable, accurate simulation still starts with molded test specimens and measured material data. At the opposite end of the length scale, the tensile test in transition-metal disilicides with the C11b structure (MoSi2 and WSi2) has been simulated by ab initio electronic-structure calculations using the full-potential linearized augmented plane wave (FLAPW) method. For concrete, the tensile strength measured in a direct test is lower than the values from the other (indirect) tests, which motivates new approaches for measuring concrete tensile strength. A few practical cautions recur: read the entire ASTM specification before performing a test; one forum comment argues that percentage elongation is not a material property for ductile or brittle materials and is usually specified for nonlinear materials such as elastomers; and never leave a model completely unconstrained, because physically an unconstrained object is "floating in outer space", where an infinitesimal additional force would make it accelerate away to infinity, which is not representative of the tensile test being simulated.
To capture the mechanical properties of a material correctly and accurately for use in simulation, a tensile test is performed; the maximum load is the greatest load the specimen can withstand without breaking, and because running many physical experiments costs manpower, material, time and equipment, validated simulation is attractive. Reported parameter studies include forming-limit prediction for high-tensile-strength steel, including the effect of blank vibrations caused by blank holding, molecular dynamics tensile tests on nanocrystalline nickel specimens with grain sizes of 21 nm, 42 nm and 195 nm plus a fourth case bonding the three materials together, and axisymmetric and three-dimensional models of a tensile test on a titanium aerospace fastener. In the A-level classroom procedure, the only equipment is a computer with access to the virtual tensometer (Adobe Flash) and a materials-testing formula sheet: the student uses the tensile test simulation to observe how the test is performed and how the force-displacement graph is created, and because the stress/strain graphs are built from real experimental data, values can be read accurately from them and used to determine Young's modulus.
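Reading Young's modulus off such a graph amounts to estimating the slope of the initial straight portion. A least-squares fit over several points (hypothetical values below) is slightly more robust than dividing a single stress value by a single strain value.

```python
import numpy as np

# Points read off the initial, linear part of a stress-strain graph
# (hypothetical values; strain is dimensionless, stress in MPa).
strain = np.array([0.0002, 0.0004, 0.0006, 0.0008, 0.0010])
stress = np.array([42.0, 83.0, 126.0, 168.0, 209.0])

# Young's modulus is the slope of the elastic line.
E, intercept = np.polyfit(strain, stress, 1)
print(f"E ~ {E / 1000.0:.0f} GPa")
```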
# How to prove this? #### arijit2005 If $$\displaystyle x = log_a(bc); y = log_b(ca); z = log_c(ab)$$ Prove $$\displaystyle x + y + z + 2 = xyz$$ Last edited by a moderator: #### undefined MHF Hall of Honor If $$\displaystyle x = log_a(bc); y = log_b(ca); z = log_c(ab)$$ Prove $$\displaystyle x + y + z + 2 = xyz$$ I can prove it but my solution is a bit ugly. Maybe someone else can find a prettier way. Use change of base formula for each one to get a common base. $$\displaystyle x = \frac{ln(bc)}{ln(a)} = \frac{ln(b)+ln(c)}{ln(a)}$$ $$\displaystyle y = \frac{ln(ca)}{ln(b)} = \frac{ln(c)+ln(a)}{ln(b)}$$ $$\displaystyle z = \frac{ln(ab)}{ln(c)} = \frac{ln(a)+ln(b)}{ln(c)}$$ Let $$\displaystyle p = ln(a), q = ln(b), r = ln(c)$$. Then $$\displaystyle x = \frac{q+r}{p}$$ $$\displaystyle y = \frac{r+p}{q}$$ $$\displaystyle z = \frac{p+q}{r}$$ Now we can write $$\displaystyle xyz=\frac{(q+r)(r+p)(p+q)}{pqr}$$ and $$\displaystyle x+y+z+2=\frac{(q+r)(qr)+(r+p)(rp)+(p+q)(pq)+2pqr}{pqr}$$ So now all we have to do is show that $$\displaystyle (q+r)(r+p)(p+q) = (q+r)(qr)+(r+p)(rp)+(p+q)(pq)+2pqr$$ which we can do by expanding both sides. #### simplependulum MHF Hall of Honor We have $$\displaystyle a^x = bc$$ $$\displaystyle b^y = ca$$ $$\displaystyle c^z = ab$$ Consider $$\displaystyle a^{xyz} = (a^x)^{yz} = (bc)^{yz}$$ $$\displaystyle = (b^y)^z (c^z)^y$$ $$\displaystyle = (ca)^z (ab)^y$$ $$\displaystyle = a^{y+z} c^z b^y$$ $$\displaystyle =a^{y+z} a^2 bc$$ $$\displaystyle = a^{y+z} a^2 a^x$$ $$\displaystyle = a^{x+y+z+2}$$ Hence we have $$\displaystyle a^{xyz} = a^{x+y+z+2}$$ $$\displaystyle xyz = x+y+z+2$$ Last edited: #### arijit2005 We have $$\displaystyle a^x = bc$$ $$\displaystyle b^y = ca$$ $$\displaystyle c^z = ab$$ Consider $$\displaystyle a^{xyz} = (a^x)^{yz} = (bc)^{yz}$$ $$\displaystyle = (b^y)^z (c^z)^y$$ $$\displaystyle = (ca)^z (ab)^y$$ $$\displaystyle = a^{y+z} c^z b^y$$ $$\displaystyle =a^{y+z} a^2 bc$$ $$\displaystyle = a^{y+z} a^2 a^x$$ $$\displaystyle = a^{x+y+z+2}$$ Hence we have $$\displaystyle a^{xyz} = a^{x+y+z+2}$$ $$\displaystyle xyz = x+y+z+2$$ WOW... Thanks a lot
Skewness as an objective function for vibration analysis of rolling element bearings

Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering; Rubico Vibration Analysis AB. ORCID iD: 0000-0001-6687-7794. Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems.

2014 (English). In: 2013 8th International Symposium on Image and Signal Processing and Analysis (ISPA 2013), Trieste, Italy, 4-6 Sept. 2013. Piscataway, NJ: IEEE Communications Society, 2014, p. 462-466. Conference paper, published, refereed.

Abstract [en]: The scale-invariant third-order moment, skewness, is analysed as an objective function for an adaptive gradient-ascent algorithm. The purpose is to achieve a spectrum at the filter output that enables identification of possible bearing defect signatures, which are impulsive and periodic. Harmonically related sinusoids are used to represent such signatures and to build a signal model allowing characterization of the objective surface of skewness, providing insight into its convergent behaviour. The results are supported by an experiment from an industrial setting. Robustness of the proposed algorithm is demonstrated by examining the frequency spectrum resulting from the signal model.

Place, publisher, year, pages: Piscataway, NJ: IEEE Communications Society, 2014, p. 462-466. National category: Signal Processing. Research subject: Signal Processing. Identifiers: Local ID 631fe16d-ee34-4da2-aa13-b4b4387e315d; ISBN 9781479931255 (electronic); OAI: oai:DiVA.org:ltu-31882; DiVA id: diva2:1005116. Conference: International Symposium on Image and Signal Processing and Analysis, 04/09/2013 - 06/09/2013. Note: Approved; 2014; 20140113 (kubova). Available from 2016-09-30, last updated 2017-12-13, bibliographically approved.

In thesis: Blind Adaptive Extraction of Impulsive Signatures from Sound and Vibration Signals, 2017 (English), doctoral thesis, comprehensive summary (other academic).

Abstract [en]: The two questions in science, "why" and "how", are hereby answered in the context of statistical signal processing applied to vibration analysis and ultrasonic testing for fault detection and characterization in critical materials such as rolling bearings and thin layered media. Both materials are of interest in industrial processes, so assuring the best operating conditions of rolling bearings and product quality in thin layered materials is important. The methods defended in this thesis are for retrieval of the impulsive signals arising from such equipment and materials, representing either faults or responses to an excitation. As the measurements collected via sensors usually consist of signals masked by some unknown systems and noise, retrieving the information-rich portion is often challenging. By exploiting the statistical characteristics due to their natural structure, a linear system is designed to recover the signals of interest in different scenarios.
Suppressing the undesired components while enhancing the impulsive events by iteratively adapting a filter is the primary approach here. Signal recovery is accomplished by optimizing objectives (skewness and the $\ell_1$-norm) that quantify the presumed characteristics, raising the question of objective-surface topology and the probability of ill convergence. To attack these, mathematical proofs, experimental evidence and comprehensive discussions are presented in the contributions, each aiming to answer a specific question. The aim of the theoretical study is to fill a gap in signal processing by providing analytical and numerical results, especially on the characteristics of the skewness surface for a signal model (periodic impulses) built on harmonically related sinusoids. With an understanding of the inner workings and of the conditions to be satisfied, the same approach is applied to a different class of signals in ultrasonic testing, such as aperiodic finite-energy signals (the material impulse response) and a very short impulse used as excitation. A similar optimization approach aiming to enhance another attribute, sparseness, is studied numerically on the aforementioned signals as a case study. To summarize, two different objectives, each quantifying a certain characteristic, are optimized to recover signals carrying valuable information buried in noisy vibration and ultrasonic measurements. Considering that a piece of research is qualified as successful if it creates more questions than it answers and lets ideas flourish, creating scientific value, the presented work aims to achieve this in statistical signal processing. Analytical derivations assisted by experiments form the basis for observations, discussions and further questions to be studied and directed at similar phenomena arising from different sources in nature.

Place, publisher, year: Luleå tekniska universitet, 2017. Series: Doctoral thesis / Luleå University of Technology, ISSN 1402-1544. National category: Electrical Engineering, Electronic Engineering, Information Engineering. Research subject: Signal Processing. Identifiers: urn:nbn:se:ltu:diva-64982 (URN); 978-91-7583-933-2 (ISBN); 978-91-7583-934-9 (ISBN). Public defence: 2017-10-18, A109, 10:15 (English). Available from 2017-08-11, last updated 2017-11-24, bibliographically approved.

Authors: Ovacikli, Aziz Kubilay; Pääjärvi, Patrik; Leblanc, James (Department of Computer Science, Electrical and Space Engineering, Signals and Systems). Subject: Signal Processing.
# Extrema of a multivariable function with constraint

This is not meant as a 'how-to' question, but rather a 'why' question. I came across the following problem:

Let $f: \mathbb{R}^2 \rightarrow \mathbb{R}$, $f(x,y)=x^3+y^3$. Find the extrema of $f$ subject to $x^2+y^2=1$.

The solution for the minimum works the same way as the one for the maximum, so I'll only consider the maximum when explaining my approach. I took the Lagrange function, set its partial derivatives equal to $0$, and found $\lambda$ so that $g(x,y)=x^2+y^2-1=0$, which produced some candidate solutions. What I find strange is that the given solution lists $\left(\frac{\sqrt{2}}{2},\frac{\sqrt{2}}{2}\right)$ as a maximum point, whereas $(1,0)$ (and, of course, $(0,1)$) clearly yields a greater value of $f$ while also satisfying $g(x,y)=0$ and all the first-order conditions, with $\lambda=-\frac{3}{2}$. Still, $\left(\frac{\sqrt{2}}{2},\frac{\sqrt{2}}{2}\right)$ passes the Hessian test while $(1,0)$ doesn't, but that only leaves the Hessian inconclusive at $(1,0)$, which shouldn't be a problem. So which is the maximum of $f$ on $x^2+y^2=1$:
$$(1,0)\ \text{and}\ (0,1),\ \text{or}\ \left(\frac{\sqrt{2}}{2},\frac{\sqrt{2}}{2}\right)?$$

With $$y=\pm\sqrt{1-x^2}$$ you get $$f\left(x,\pm\sqrt{1-x^2}\right)=x^3+\left(\pm\sqrt{1-x^2}\right)^3$$ and your problem is reduced to a problem in only one variable.

• I understand. So how come there is no mention of $(1,0)$ in my textbook? With your method it clearly is a maximum point, and since $f(1,0) > f\left(\frac{\sqrt{2}}{2},\frac{\sqrt{2}}{2}\right)$, $(1,0)$ should be the actual global maximum. The second derivative test also indicates nothing in the case of $(1,0)$, but why would this "disqualify" $(1,0)$ as a maximum point? Or does it? – Escu Esculescu Jan 18 '18 at 19:07
• $1$ is clearly the maximum, attained at $x=0,y=1$ – Dr. Sonnhard Graubner Jan 18 '18 at 19:36
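For completeness (a short worked check added here, not part of the original thread), the Lagrange conditions and the values of $f$ at every stationary point are:

$$3x^2 = 2\lambda x,\qquad 3y^2 = 2\lambda y,\qquad x^2+y^2=1.$$

Either one coordinate vanishes, giving $(\pm 1,0)$ and $(0,\pm 1)$, or $x=y=\pm\frac{\sqrt{2}}{2}$. Evaluating $f$:

$$f(1,0)=f(0,1)=1,\qquad f(-1,0)=f(0,-1)=-1,\qquad f\!\left(\tfrac{\sqrt{2}}{2},\tfrac{\sqrt{2}}{2}\right)=\tfrac{\sqrt{2}}{2}\approx 0.707,\qquad f\!\left(-\tfrac{\sqrt{2}}{2},-\tfrac{\sqrt{2}}{2}\right)=-\tfrac{\sqrt{2}}{2}.$$

So the global maximum on the circle is $1$, attained at $(1,0)$ and $(0,1)$. In fact, parametrizing the constraint as $(\cos\theta,\sin\theta)$ gives $f'(\theta)=3\sin\theta\cos\theta\,(\sin\theta-\cos\theta)$, which changes sign from negative to positive at $\theta=\pi/4$, so $\left(\tfrac{\sqrt{2}}{2},\tfrac{\sqrt{2}}{2}\right)$ is only a local minimum of $f$ along the circle, not a maximum.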
# Symplectic Structure

## Intuitive

Our everyday world is ruled by Euclidean geometry (and by its extension, Riemannian geometry); we can measure distances in it, and velocities. Far away from our daily experience, and much more subtle, is the mechanical phase space world, in which live all the phenomena related to the simultaneous consideration of position and variation of position; a deep understanding of this world requires recourse to a somewhat counter-intuitive geometry, the symplectic geometry of Hamiltonian mechanics. Symplectic geometry is highly counter-intuitive; the notion of length does not make sense there, while the notion of area does. This "areal" nature of symplectic geometry, which was not realized until very recently, has led to unexpected mathematical developments, starting in the mid 1980's with Gromov's discovery of a "non-squeezing" phenomenon which is reminiscent of the quantum uncertainty principle—but in a totally classical setting!

Symplectic Capacities and the Geometry of Uncertainty by Maurice de Gosson et al.

## Concrete

A simple way of putting it is that a two-form is a way of measuring area in multivariable calculus. I believe the significance for physics boils down to the following: it turns out that a two-form is precisely what is required to translate an energy functional on phase space (a Hamiltonian) into a flow (a vector field). [See Wikipedia for how the translation goes, or read Arnold's book Mathematical Methods of Classical Mechanics, or a similar reference.] The flow describes the time evolution of the system; the equations which define it are Hamilton's equations. One property these flows have is that they preserve the symplectic form; this is just a formal consequence of the recipe for going from Hamiltonian to flow using the form. So, having contemplated momentum, here we find ourselves able to describe how systems evolve using the phase space T*M, where not only is there an extremely natural extra structure (the canonical symplectic form), but also that structure happens to be preserved by the physical evolution of the system. That's pretty nice! Even better, this is a good way of expressing conservation laws. When physical evolution preserves something, that's a conservation law. So in some sense, "conservation of symplectic form" is the second most basic conservation law. (The most basic is conservation of energy, which is essentially the definition of the Hamiltonian flow.) You can use conservation of symplectic form to prove the existence of other conserved quantities when your system is invariant under symmetries (this is Noether's theorem, which can also be proved in other ways, I think, but they probably boil down to the same argument ultimately). http://qr.ae/TUTIn9

## Abstract

"The symplectic geometry arises from the understanding of the fact that the transformations of the phase flows of the dynamical systems of classical mechanics and of variational calculus (and hence also of optimal control theory) belong to a narrower class of diffeomorphisms of the phase space than the incompressible ones. Namely, they preserve the so-called symplectic structure of the phase space—a closed nondegenerate differential two-form. This form can be integrated along two-dimensional surfaces in the phase space. The integral, which is called the Poincaré integral invariant, is preserved by the phase flows of Hamiltonian dynamical systems.
The diffeomorphisms preserving the symplectic structure—they are called symplectomorphisms—form a group and have peculiar geometrical and topological properties. For instance, they preserve the natural volume element of the phase space (the exterior power of the symplectic structure 2-form) and hence cannot have attractors."

Symplectic geometry and topology by V. I. Arnold

What is the relation to the symplectic groups?

## Why is it interesting?

As each skylark must display its comb, so every branch of mathematics must finally display symplectization. In mathematics there exist operations on different levels: functions acting on numbers, operators acting on functions, functors acting on operators, and so on. Symplectization belongs to the small set of highest level operations, acting not on details (functions, operators, functors), but on all of mathematics at once.

Catastrophe Theory, by V. Arnold

The word symplectic was coined by Hermann Weyl in his famous treatise The Classical Groups […] Weyl devoted very little space to the symplectic group; it was then a rather baffling oddity which presumably existed for some purpose, though it was not clear what. Now we know: the purpose is dynamics. In ordinary Euclidean geometry the central concept is distance. To capture the notion of distance algebraically we use the inner (or scalar) product $x.y$ of two vectors $x$ and $y$. […] All the basic concepts of Euclidean geometry can be obtained from the inner product. […] The inner product is a bilinear form - the terms look like $x_i y_j$. Replacing it with other bilinear forms creates new kinds of geometry. Symplectic geometry corresponds to the form $x_1 y_2 - x_2 y_1$, which is the area of the parallelogram formed by the vectors $x$ and $y$. […] The symplectic form provides the plane with a new kind of geometry, in which every vector has length zero and is at right angles to itself. […] Can such bizarre geometries be of practical relevance? Indeed they can: they are the geometries of classical mechanics. In Hamilton's formalism, mechanical systems are described by the position coordinates $q_1,\ldots,q_n$, momentum coordinates $p_1,\ldots,p_n$, and a function $H$ of these coordinates (nowadays called the Hamiltonian) which can be thought of as the total energy. Newton's equations of motion take the elegant form $dq/dt=\partial H/\partial p$, $dp/dt= -\partial H/\partial q$. When solving Hamilton's equations it is often useful to change coordinates. But if the position coordinates are transformed in some way, then the corresponding momenta must be transformed consistently. Pursuing this idea, it turns out that such transformations have to be the symplectic analogues of rigid Euclidean motions. The natural coordinate changes in dynamics are symplectic. This is a consequence of the asymmetry in Hamilton's equations, whereby $dq/dt$ is plus $\partial H/\partial p$, but $dp/dt$ is minus $\partial H/\partial q$ - that minus sign again.

I've tried to show you that the symplectic structure on the phase spaces of classical mechanics, and the lesser-known but utterly analogous one on the phase spaces of thermodynamics, is a natural outgrowth of utterly trivial reflections on the process of minimizing or maximizing a function S on a manifold Q. The first derivative test tells us to look for points with $$d S = 0$$ while the commutativity of partial derivatives says that $$d^2 S = 0$$ everywhere—and this gives Hamilton's equations and the Maxwell relations.
Hamilton's equations push us toward the viewpoint where $p$ and $q$ have equal status as coordinates on the phase space $X$. Soon, we'll drop the requirement that $X\subseteq T^\ast Q$ where $Q$ is a configuration space. $X$ will just be a manifold equipped with enough structure to write down Hamilton's equations starting from any $H \colon X\rightarrow\mathbb{R}$. The coordinate-free description of this structure is the major 20th century contribution to mechanics: a symplectic structure. This is important. You might have some particles moving on a manifold like $S^3$, which is not symplectic. So the Hamiltonian mechanics point of view says that the abstract manifold that you are really interested in is something different: it must be a symplectic manifold. That's the phase space $X$. Lectures on Classical Mechanics by J. Baez

The mathematical structure underlying both classical and quantum dynamical behaviour arises from symplectic geometry. It turns out that, in the quantum case, the symplectic geometry is non-commutative, while in the classical case, it is commutative. (https://arxiv.org/pdf/1602.06071.pdf)
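The quotations above say that Hamiltonian phase flows preserve the symplectic form, and hence phase-space volume. A minimal numerical illustration (an added sketch, not taken from any of the quoted sources; the Hamiltonian H = (p² + q²)/2, the step size and the initial square are arbitrary choices): for this H the symplectic Euler update is a linear map with unit Jacobian determinant, so the area of a small patch of initial conditions in the (q, p) plane stays constant under iteration.

```python
import numpy as np

def symplectic_euler(q, p, h):
    """One step of the symplectic Euler method for H = (p**2 + q**2) / 2,
    i.e. dq/dt = p, dp/dt = -q.  The update map has unit Jacobian
    determinant, so it preserves the area form dq ^ dp exactly."""
    p_new = p - h * q
    q_new = q + h * p_new
    return q_new, p_new

def polygon_area(qs, ps):
    """Shoelace formula for the phase-space area enclosed by a polygon."""
    return 0.5 * abs(np.dot(qs, np.roll(ps, -1)) - np.dot(ps, np.roll(qs, -1)))

# A small square of initial conditions in the (q, p) plane.
qs = np.array([1.00, 1.01, 1.01, 1.00])
ps = np.array([0.00, 0.00, 0.01, 0.01])

h = 0.1
for step in range(2001):
    if step % 500 == 0:
        print(f"step {step:4d}: phase-space area = {polygon_area(qs, ps):.6e}")
    qs, ps = symplectic_euler(qs, ps, h)
```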
Give examples of two variables with a strong positive and negative linear correlation

(a) Give examples of two variables that have a strong positive linear correlation and two variables that have a strong negative linear correlation. (b) Explain in your own words why the linear correlation coefficient should not be used when its absolute value is too low or close to zero. Give an example. (c) In the passage below identify the explanatory variable and the response variable. Explain why. A nutritionist wants to determine if the amounts of water consumed each day by persons of the same weight and on the same diet can be used to predict individual weight loss.

(a) A strong positive linear correlation is observed when one variable increases roughly linearly as the other variable increases. The rate of smoking and alcohol use show a strong positive linear correlation: as alcohol use increases, the rate of smoking also increases. A strong negative linear correlation is observed when one variable decreases roughly linearly as the other variable increases. The amount of time spent playing games each week and the GPA of students are strongly negatively correlated, because the more time students spend playing games each week, the lower their GPAs tend to be.

(b) When the absolute value of the linear correlation coefficient is too low or close to zero, it should not be used. Such a value indicates that there is no linear relationship, interdependence or connection between the variables; they have nothing to do with each other, so using one to predict the other makes no sense. For example, the amount of tea drunk and level of intelligence show a correlation close to zero.

(c) Explanatory (independent) variable: the amount of water consumed each day, because it does not depend on the other variable. Response (dependent) variable: weight loss, because it is the quantity the nutritionist wants to predict from the amount of water consumed each day.
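A quick numerical illustration of the three cases in (a) and (b), added here as a sketch with synthetic data (the variables and noise levels are arbitrary, not the examples from the answer):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=500)

strong_pos = 2.0 * x + rng.normal(scale=0.3, size=500)   # rises with x
strong_neg = -1.5 * x + rng.normal(scale=0.3, size=500)  # falls as x rises
unrelated = rng.normal(size=500)                          # independent of x

for name, y in [("strong positive", strong_pos),
                ("strong negative", strong_neg),
                ("near zero", unrelated)]:
    r = np.corrcoef(x, y)[0, 1]   # Pearson linear correlation coefficient
    print(f"{name:16s} r = {r:+.2f}")
```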
Like Yume Nikki and all its fangames, there are many events found throughout Yume 2kki. Here is a list of all the current events that can be found, along with their individual locations and what they may unlock when activated. This article doesn't list all the events in this game, only the most remarkable ones from the wiki members' perspective. Feel free to expand this article by adding more events if necessary. Note that although these events are listed in alphabetical order, nearly all of their names are NOT official.

### Amusement Park Clown Hell

Location: Underwater Amusement Park

Description: Found in the deepest level of the clown dungeon in the Underwater Amusement Park, this event begins once Urotsuki steps through either of two doorways at the last room in the dungeon, just past the jail cells with the skeletons. Urotsuki is transported to the center of a row of pillars, with NPC children standing on the pillars to the left and to the right. A large piston descends and crushes the boy on the left pillar, leaving an imprint on the base of the pillar he was standing on. The same thing then happens to the girl standing on the pillar to the right. Finally, the piston comes down over Urotsuki, crushing her and leaving an imprint of her on the pillar as with the others. The scene then cuts to the pillars of the boy, girl, and Urotsuki having been placed amongst the countless other pillars surrounding the path leading to the entrance to the clown dungeon in the Underwater Amusement Park. Urotsuki then startles awake. This event unlocks wallpapers #114 and #129.

### Big Red

Location: Abandoned Factory

Description: This event is accessed by reaching the deepest area of the Abandoned Factory and using the ladder to the northeast, then going down the only feasible path that eventually leads to a sewer with a blob-like creature wading in the water. Upon walking past this screen, Urotsuki will suddenly find herself face-to-face with a large red monster (not unlike Yume Nikki's Big Red) and then be swallowed whole. Afterwards, she will be transported to a world very similar to Yume Nikki's Windmill World.

### Blood Sacrifice

Location: Blood World

Description: This event is accessed by stepping into the largest pool of blood in Blood World and chainsawing the white rooted creature that can be found in front of the blood sacrifice table. Doing so will cause the room to become dark and the music to turn scary. Ghosts will swarm Urotsuki and inhibit her motion while Scary Faces pursue her. If they catch her, the screen fades to black and Urotsuki wakes up. Viewing this event will unlock wallpaper #101.

### Buddha Rave

Location: Marijuana Goddess World

Description: Probably the easiest to find of all the events, this one is located almost directly east from the entrance to Marijuana Goddess World and is similar to the Aztec Rave Monkey from Yume Nikki. Stepping on the flashing red tile on the idol's forehead will activate a full-screen event where a moving image of the figure will zoom in and out while upbeat music plays. Activating this event also changes Marijuana Goddess World's background to images of the deity and unlocks Wallpaper #5.

### Chasers in Urotsuki's Room

Location: Urotsuki's Dream Apartments

Description: Sometimes when you turn the TV on in Urotsuki's room, instead of just the normal Shadow Woman appearing in front of the curtains and the TV at channel 5, the TV will turn to channel 0 and show many different chasers in the room with her.
The background music will become much more ominous and the chasers will become real. If the chasers catch you, you'll wake up. As of v0.96, if you manage to escape the room without being caught (note that the clown can still move around while the Invisible effect is equipped, so watch out for him), there will be flat Shadow Women standing in front of two doors (they will not move) and a fox-masked man in front of a third door. If Urotsuki tries to go downstairs, the man will run up to her and cause her to wake up. If you make it outside the room and get caught by the fox-masked man, Kura Puzzle #42 will be unlocked.

### The Cloning Room

Location: The Red Brick Maze in Toy World

Description: This event is found by going down a flight of stairs in the Red Brick Maze. Inside the room are some giant books, a poster, and a clone of Urotsuki sitting on a purple pedestal with red wires hooked into her, connected to a large machine. Interacting with the control panel next to her will make her jump off the pedestal and rapidly create clones of herself, filling up the room quite quickly. Unless you have the Chainsaw, it is recommended that you move towards the exit after turning the switch on, otherwise you may find yourself trapped. Exiting and re-entering the room will cause the clones to disappear, but the machine will still be turned on and will soon begin to fill up the room with clones again. The clones respond to certain effects: equipping the Eyeball Bomb will make them do the same, while equipping Invisible and disappearing will make the clones disappear. Interestingly, the clones will also react similarly to Wolf and Penguin, but only in debug mode. Also, before turning on the panel the screen above it will read "RUN" (turning it on will change it to "OUN" temporarily and then to "OK"). Activating this event also unlocks Wallpaper #69. (Note: It is possible to softlock the game here. If you try to kill the clone that spawns more clones, you'll get caught in an infinite loop of killing it over and over, unable to do anything but close the game.)

### Giant Cloning Room

Location: The Red Brick Maze in Toy World

Description: This event is found by going down the same flight of stairs in the Red Brick Maze that will take you to The Cloning Room. After viewing Ending #4 you have a 1/10 chance of being taken to a large, spacious room, sparsely decorated with giant books. At the top of the room, a clone of Urotsuki is sitting on a purple pedestal with red wires hooked into her, connected to a large machine. Interacting with the control panel next to her will make her jump off the pedestal and rapidly create clones of herself, slowly filling the gigantic room. Unless you have the Chainsaw, it is recommended that you move towards the exit after turning the switch on, otherwise you may find yourself trapped. Exiting and re-entering the room will cause the clones to disappear, but the machine will still be turned on and will soon begin to fill up the room with clones again. The clones respond to certain effects: equipping the Eyeball Bomb will make them do the same, while equipping Invisible and disappearing will make the clones disappear. Interestingly, the clones will also react similarly to Wolf and Penguin, but only in debug mode. Additionally, when no effect is equipped, the clones will scatter and move away from the real Urotsuki. Before turning on the panel, the screen above it will read "RUN" (turning it on will change it to "OUN" temporarily and then to "OK").
After a while, the cloning machine will stop working; however, the cloning sound effect won't stop playing.

### Clowns at the Circus

Location: The Circus

Description: Getting near the albino girl in the circus tent causes the screen to go black for a few seconds while a loud sound of a door opening (or closing) plays twice. Suddenly, a clown will appear in the tent with you. The Invisible effect doesn't work on the clown, and if it catches you, you'll be transported to a sectioned-off area of the Shinto Shrine. If you manage to evade it and make it back outside, the area will be swarming with clowns moving at twice their normal speed. Getting caught by these will also send you to the sectioned-off area of the Shinto Shrine. If you manage to escape the clowns and make it outside, Wallpaper #109 will be unlocked.

### The Colossus of Rhomb

Location: Square-Square World

Description: If the player has visited either the Cog Maze, the secret room required for ED06 in Flying Fish World, or the moonlit balcony accessed by interacting with the Helmet Girl, and has had at least four drinks, they can head left from the Theatre World entrance to Square-Square World through the doorway and keep going left. If one of these conditions is met, the screen will fade out as you travel, and you'll come to a narrow pathway with the view zoomed out. Immediately start walking to the left, without stopping, as you have a very limited amount of time to reach the end. Stopping for even an instant will end the event prematurely. If successful, the camera should pan to the left once you reach the edge of the pathway, revealing a gigantic creature partially submerged in mud. Urotsuki then wakes up.

### Creatures of the Power Plant

Location: Power Plant

Description: From the entrance from Sign World, keep moving towards the right and you'll come to a hallway where lots of different monsters can be seen in tanks. As you continue moving down this path, it will gradually get darker, and eventually you'll come to a room with a large red circle with what looks like a tiled floor on the inside of it. Stepping onto the circle causes the screen to shake and make loud thudding noises, which quickly become more intense; suddenly a hole will appear in the floor and you'll fall through. The screen will cut to black, and when it comes back on, Urotsuki will be trapped in the empty tank you passed by. Pressing the interact key breaks her out of it. Although the tank will be destroyed, you can still go back to the room and trigger this event as many times as you want. Activating this event unlocks Wallpaper #132.

### The Crying Eyes

Location: Cloudy World

Description: Within Cloudy World, there's a cave with droplets trickling down the wall. Interacting with the wall will pan the screen upward, revealing that the droplets are tears from a pair of eyes. Repeatedly panning upward will cause the eyes to open. They react to various effects.

### Danger Panic Zone

Location: Broken Faces Area

Description: Technically, this is more of a separate area than an actual event. Walking through the mouth of the white face with bleeding eyes and white hands beside it will transport you to the Danger Panic Zone, a small looping area where one-eyed black and white creatures resembling fetuses move around frantically while a variation of Monochrome Feudal Japan's background music plays, startling any unsuspecting players.
Using the Invisible effect here will cause the creatures to become transparent, crowd around Urotsuki, and follow her. Walking through the face's mouth again will take you back to the Broken Faces Area.

### Decapitation

Location: Realistic Beach

Description: At the very end of the cave area at the Realistic Beach, after having gone upside down, the player is met with a small area featuring an inverted telephone pole in front of a bright sky. There is only one NPC here, a black sheep near the end bouncing up and down. Chainsawing it causes it to bleat and makes the screen turn dark. More chainsaw noises are heard, and then the screen shows a slightly graphic picture of a decapitated Urotsuki before she wakes up on the floor. While this does not unlock a wallpaper, the 34th one features the sheep character.

### Drowning

Location: Depths

Description: This event is triggered by interacting with the brown teleport node. Initially, Urotsuki will be sinking deep into the ocean as a view of the cliffside and the flora floats by. After a while, her face will darken and she will slump to the side as if no longer conscious, and the screen fades to black. After a few seconds, however, there will be three bright flashes, and Urotsuki will light back up again as she sinks to the very bottom of the ocean. When she wakes up, she will be in an area with many fish lying on the ground, and what appears to be a massive pile of wreckage in the foreground. This event unlocks a wallpaper, as well as granting access to one of the ten Eggs.

### Duet with Elvis Masada

Description: This event is triggered by interacting with Elvis Masada himself with the Trombone effect equipped. Doing so will treat you to a brief scene where Elvis Masada and Urotsuki play together in a smooth jazz duet.

### The Fisherman

Location: The underwater area between the School and the Dream Beach

Description: A somewhat lesser-known event. Normally, to reach the Dream Beach, you have to interact with the hook in the middle of the underwater area, but if you have the Penguin effect on and interact with it, you'll be taken to another small area that also looks like it's underwater. It's completely blue, and the only other things in the map are a sad-looking fish that seems to 'deflate' if you try to interact with it (you can actually walk on top of it when it deflates), and a valve to the right. Interacting with the valve causes it to open, and it'll be revealed that you're actually inside the fisherman's bucket! A large picture of the fisherman next to the bucket will show briefly, and a small Urotsuki will fall out of the valve and back into the water.

### Flight

Location: Urotsuki's Dream Apartments

Description: When going into the Apartment Amoeba's room, there is a 1 in 6 chance of a tree being present on the wall near the kitchen. Going inside it leads the player to a small room with a painting of a balcony. Interacting with it leads the player to the rooftop overlooking a city with a plane. Getting inside it leads to a small scene that's a reference to the Witch's Flight event from the original game. Pressing ESC leads the player back.

### Flower Lady

Location: Apartments

Description: The Flower Lady can be found in the far left room on the ground floor of the Apartments. If you chainsaw her OR her pet caterpillar and walk out onto the balcony, it will have holes in it and be covered in purple vines. Many ghostly copies of the Flower Lady will follow you around.
Interestingly, the other balconies to the left and the right are also covered in purple vines and have holes in them. Walking back into the apartment room will cause the Flower Lady to respawn, and the balcony will return to normal.

### Gallery of Me

Location: N/A

Description: Upon waking up, there is a roughly 1/818 chance of instead being transported to the Gallery of Me, a strange white corridor filled with clear window-like frames containing various versions of Urotsuki. [See Gallery of Me.]

### Giant Cat

Location: School

Description: Like the Danger Panic Zone, this is more of a separate area than an actual event. There's a black cat found in the fourth room of the School; interacting with its north side transports you to a small area with an enormous cat. You can pet the cat and it'll purr loudly. You can also sit on it or climb up onto its back. Moving between its tail and leg causes it to meow loudly at you. There are two black and white cat-like NPCs moving around in the area; interacting with either takes you back to the room in the School. Sometimes the cat is found on top of a desk; in that case you'll need to equip the Chainsaw to scare it off.

### The Graveyard's Sculpture

Location: Graveyard World

Description: This event is found in the small gray building in Graveyard World. Entering it will take you to a small room with a large, strange green and turquoise sculpture with splatters of the same color on the floor. This room also houses nine purple monsters, the same kind found outside the room, except these ones can't be killed. To activate the event, you must chainsaw the sculpture. The music will become more high-pitched and ominous, and the purple monsters will come alive and chase you. If they catch you, the screen will turn red, and after a transition both the sculpture and the monsters will be gone and you won't be able to exit the room. If you manage to escape the room without being caught, outside there will be more purple monsters forming a square around the building, trapping you, while more of them inside the square (the same ones found right outside the building before you entered it) will chase you, trapping you in the same room. The Invisible effect doesn't work on any of the monsters.

It is, however, possible to escape without getting caught by using the Bat effect. After chainsawing the sculpture, equip the Bike and quickly move towards the door. Before exiting, equip the Bat effect, and while outside immediately hold down the interact key to mark the spot Urotsuki is standing in, then hold it down a second time to fly up into the air. There's not quite enough time to do both things before getting caught (by this time the monsters should have gathered around you and the screen will have turned red), but Urotsuki will still fly into the air, and when she flies back down, the screen will still be red; the purple monsters forming the square will then fade away, the screen will turn back to normal, and the monsters that would normally chase you will become docile again. By doing this, it's possible to activate the event more than once.

### Hallucination

Location: Laboratory

Description: If you kill the head scientist in his office, the other scientists will become chasers. When they touch you, they will take you to be a test subject for a hallucinogenic substance.
Urotsuki will be in a small room behind a glass wall being monitored by the scientists, all whilst having a trippy hallucination with Shadow Women, red hourglasses, yellow blobs, forest creatures, and purple graveyard monsters dancing around her. Multicolored neon patterns will be moving around the room and wonky music will be playing. If you go up to the glass where the scientists are (Urotsuki will also move slower than usual) and press the action button, she will bang her head against the glass. After doing this several times, the hallucination will go away, but she will still continue to do it until blood can be seen on the glass. Two scientists then come in to try and restrain her, Urotsuki finally falls unconscious, and you will wake up. Something of note is that using any effects is impossible during the entire event, both during the hallucination and while smashing your head on the glass.

### Haniwa Dance

Location: Haniwa Temple

Description: In the temple, use the Haniwa effect to move the haniwa statue blocking the pathway, then go up the pathway and interact with the statue at the end to view the Haniwa Dance event. Haniwa statues will dance on screen while a tune plays.

### High Priestess Event

Location: Forest Carnival

Description: Interacting with the radio in the area of the Forest Carnival with the ladders in the sky will start the event. The screen will darken and Urotsuki will be in a new area with ominous music and a giant, faceless Virgin Mary statue. Interacting with the statue will make the screen pan upward to show her face (or lack of one, rather). To the left and right of the statue are tents with a sleeping person and a normal-sized Virgin Mary statue inside. Chainsawing the person in the room with the statue matching the giant statue's face and then leaving the tent will take you to a different area with six white dressers. After applause, the dressers will open, releasing fast, small blob-like chasers. Urotsuki must then navigate the area and chainsaw the dressers while evading the shadow blobs. If she is caught a certain number of times (seemingly a random number around 1-4), she will be teleported to an inescapable area. If she succeeds, the screen will fade to black and then reveal six carnies, implying that Urotsuki killed them.

### The Hospital Chainsaw Massacre

Location: Hospital

Description: There are a series of consecutive hallways in the Hospital, each containing three doors. The middle door in the last hallway leads to a room containing a bed with something moving underneath the bedsheets. Interacting with the bed will make it start oozing a green fluid, staining the bedsheets and eventually teleporting Urotsuki to an inescapable room covered in a sickly green wash, with no windows. This room is home to Aojiru ("Cripple-tan"), a patient in the hospital hooked up to an IV. Although Aojiru does nothing but stand there like a lemon, interacting with them will automatically make Urotsuki cut them down with the chainsaw, even if she does not currently possess the Chainsaw effect. She will then be trapped in the room, with no way out except for the Eyeball Bomb effect or pinching herself awake.

### Japanese Turntables Event

Location: Urban Street Area

Description: Normally the Urban Street Area is a fairly empty, quiet area, but using the Fairy effect here causes a purple creature to fall and hit the ground behind you with a splat. A pair of turntables will appear while shadows of people walking around also appear at random moments.
Interacting with the turntables causes a hip-hop-like beat to play, and pressing the arrow and interact keys lets you add other beats on top of it. This also causes the screen to change colors and pictures of maps to flash up on screen.

### The Lady and the Sun

Location: Pastel Blue House

Description: This event is found on the second floor of the Pastel Blue House. Going through the last door on this floor leads to a small path where a lady in white can be seen under a large, red sun with one eye. Trying to equip the Chainsaw here will cause the screen to take on a red hue, the sun's eye will glow, and your Chainsaw will be de-equipped. Activating this event unlocks Wallpaper #77.

### The Lamppost-Girl

Location: Apartments

Description: Thought of as the 2kki equivalent of Monoe and found in a path near the Apartments. Trying to interact with her causes a detailed picture of her (drawn in a similar style to Monoe) to appear on screen before disappearing along with the girl. You can activate this event once per dream.

### Lonely Urotsuki

Location: Underwater Amusement Park

Description: After viewing most of the attractions at the Underwater Amusement Park and returning to the entrance, interact with the LED-matrix display when it says "さようなら" (Goodbye) to activate the Lonely Urotsuki event. You'll be transported to a scene where a transparent, child Urotsuki will be standing all alone in the middle of a sectioned-off road surrounded by fences, while shadow people walk past with their children. Walking back through the entrance takes you back to the Underwater Amusement Park and the board's text will change, so you'll have to view most of the attractions again if you want to witness this event more than once.

If you're using the Child effect before interacting with the electric board, the event will change slightly. When this happens, you'll be the one in the middle of the road. The shadow people won't interact with you, and if you try to bring up the menu, they'll all smile menacingly. There's a break in the top part of the fence where a normal-size Urotsuki watches the event in the darkness. Walking through the break in the fence takes you back to the Underwater Amusement Park. If you also use the Glasses effect during this event and press the interact key, the shadow people will smile in the same way they do when you're using the Child effect.

This event unlocks quite a lot of wallpapers. Activating it unlocks Wallpapers #67 and #134, and if you're using the Child effect, Wallpaper #94 will be unlocked. Using the Glasses effect during the event unlocks #74 (although many people have also unlocked #74 by using the Stretch effect and crying during the event).

### Madotsuki's Room

Location: Never-Ending Hallway

Description: Again, this event is more of a separate area than an event. After passing by the smiling man in the hallway, you may come across a window that's completely dark. Interacting with it takes you to a room that closely resembles Madotsuki's room from Yume Nikki. While inside, the screen often cuts to static for brief moments. There doesn't appear to be anything you can interact with in the room, and if you stay in it for too long, static will take up the entire screen and you'll be sent back out into the Never-Ending Hallway.
### Maiden Outlook Event

Location: The Maiden Outlook, reachable from the Library or Japan Town

Description: Entering from either area will place you in front of a bus on a cliff overlooking a suburban town; moving towards the left side of the cliff reveals an enormous geisha sitting off in the distance. The menu is inaccessible in this area. The time of day in this area changes randomly with each visit. If it's daytime or the evening when you arrive, moving all the way to the edge of the cliff causes the geisha's head to rocket off her body and fall next to you. After this event plays, you'll be taken back to wherever you entered the area from. If it's midnight when you arrive, the geisha will already be headless, and her head will be flying over the town, moving in and out of the foreground and doing spins. After witnessing this event, entering Marijuana Goddess World during the same dream session may instead take you to the Maiden Outlook a second time, although very rarely.

• To reach the outlook from Japan Town you have to sit next to the man reading a newspaper, known as News Man. Sitting at the bus stop with him and waiting a few seconds makes a bus arrive and pick both of you up, taking you to the Maiden Outlook.

### March of Progress

Location: Chess World

Description: Inside Chess World, there are two chess pieces with signs beside them. Going between them brings you to a hallway similar to those in the Monkey Mansion. In the room at the end of the hall is a throne Urotsuki can sit on. Doing so puts you in control of a brief event parodying the March of Progress.

### Mask Shop Event

Location: Mask Shop

Description: While inside the Mask Shop, ringing the bell several times without buying a mask will slowly darken the shopkeeper's expression until her eyes aren't visible. Trying to leave afterward will spawn a ghostly clown chaser at the doorway, which walks slowly towards you. Doubling back up towards the shop will spawn an army of ghostly clowns, effectively trapping Urotsuki. When the clown catches her, any effect or mask will be unequipped and the screen will shake and fade to black. When it fades back in, it will show a monochrome view of the mask shop, with Urotsuki's decapitated head turned into a mask and placed on the shelf with the others.

### Mechanical Heart Act

Location: Amphitheater

Description: Taking a seat in the Amphitheater's audience seating will show a stage performance in which two masked performers dance on stage as piano music plays, after a mechanical heart is shown above the stage.

### Mirror Urotsuki

Location: Day & Night Towers

Description: There is a room in the Day & Night Towers that appears to be some sort of mirror, where a copy of you (your 'reflection') will copy your movements. If you chainsaw this Urotsuki, though, you'll hear two high-pitched screams and both of you will fade, causing you to wake up. Using the Glasses effect will cause the reflection's face to vanish and make an eye appear on its chest. Using different effects in this room will cause the reflection to react to them visually in different ways:

• Wolf - makes your reflection use the Red Riding Hood effect, and vice versa.
• Tall - makes your reflection use the Child effect, and vice versa.
• Biker Wolf - makes your reflection copy you, but with a snazzy pair of goggles on her head.
• Haniwa - doesn't make your reflection copy you, but it will make her blur as she tries to walk around with you.
• Glasses - your reflection's face will be replaced with a hole and her shirt will have an eye on it.

Chainsawing your reflection unlocks Wallpaper #70.

### Monochrome Eye

Location: The monochrome room in Flying Fish World

Description: A full-screen event that happens when you interact with the large eye on the wall. Interacting with it causes a gray kaleidoscope-like image to appear and spin around on the screen for a few seconds before the view zooms in on it and it fades away.

### Odorika's Dance

Location: Red Streetlight World

Description: Odorika is found standing next to a single streetlight inside a larger circle of streetlights in Red Streetlight World. Interacting with her while bowing with the Maiko effect causes a full-screen event showing an animation of Odorika and her rabbits dancing. Interestingly, the animation of Odorika dancing seems to have been made with Flipnote Studio (with a few minor edits). Activating this event unlocks Wallpaper #20 and Kura Puzzle #18, although you don't need the Maiko effect to unlock the Kura Puzzle; you just need to interact with her.

### The Paying Customer

Location: The Hotel in Japan Town

Description: Starting from the entrance of the Hotel, go through the left doorway and then go down the stairs to the bottom floor. There should be three doorways here, and you'll want to go through the doorway on the left. Going through this doorway leads you behind the front desk, where the smiling geisha sometimes is. While behind the desk, if you wait a bit, a shadow blob NPC will walk up to the desk and pay you some money (note that you have to be all the way to the left of the desk for it to pay you), and then walk into the left doorway. You can only activate this event once per game.

### Penguin GB Game

Location: Urotsuki's Dream Apartments

Description: This event is required to get the Penguin effect. In Urotsuki's room inside the dream apartments, interacting with the game console will transport you into a game viewed through heavy scanlines, featuring a series of grassy ledges connected by ladders that are watched over by two NPCs with strangely shaped heads. When you first start, you'll be wearing the Penguin effect, although you can open the menu and equip other effects. One of these NPCs patrols the topmost ledge and must be passed using one of the ladders to progress, but it does not pose a threat to Urotsuki and will simply turn around if it collides with her. If you equip the Glasses effect near the platform the ladder leads down to, it will reveal a flashing red orb on the platform behind the second, stationary NPC that is seemingly unreachable. However, there is a 60/255 (approximately 23.5%) chance, determined when you go to sleep, that it will be floating close enough to the ledge above that you can interact with it, which will take you to a glitched-out level in the game with non-traversable geometry and no way to leave, forcing you to use the Eyeball Bomb or wake up.

Once past the patrolling NPC, if you keep going to the right you'll probably notice that you're stuck on a looping path. To get past this, you have to use the Penguin effect to slide past the barrier; then you can continue going to the right. Once beyond the looping path, continuing to the right leads to the next area, a cliff with a small ledge sticking out of it. Pressing the down arrow key near the ledge causes Urotsuki to jump off it, but a shadowy figure appears behind her and laughs, hinting that she was pushed off instead of jumping off.
Doing this 'completes' the game and you'll wake up. Note that you don't need to finish the game to keep the Penguin effect; you can wake up or Eyeball Bomb yourself out of the game and you'll still get the effect. Getting the Penguin effect unlocks Wallpaper #86 and Kura Puzzle #29, and if you complete the Penguin GB Game you'll also unlock Wallpaper #105.

### Red Bug Maze

Location: Bug Maze

Description: This event only happens when you use the Hand Hub entrance, and may no longer be possible as of 0.106g. Upon entering, there's a random chance you'll enter a creepier version of the maze. The entire maze turns red and the music changes to a low, disturbing gurgling noise. The maze becomes filled with Evil Bugs that block any portals to other worlds. Getting caught by one transports you to an isolated area with a Venus flytrap, the same place you get transported to if you get caught by the Bug Girl at the Scenic Outlook.

### Shadow Woman Forest

Location: Forest World

Description: If you chainsaw the sane Shadow Woman in Forest World, she'll become insane and start to chase you, like a normal Shadow Woman. But if you make it to the Underground area and walk back out, you'll find yourself in the Shadow Woman Forest, an essentially darker, eerier version of the Forest World. The entire world becomes black and red and infested with Shadow Women. The portal back to the Nexus turns into a giant red blob-like figure with one eye, and the portals to other areas disappear. The only way to escape besides waking up or using the Eyeball Bomb effect is to walk back into the Underground area and then walk back out, which makes Forest World return to normal. Activating this event unlocks Wallpaper #40 and Kura Puzzle #30.

### Seishonen's Glitched Room

Location: Urotsuki's Dream Apartments

Description: Thought of as Yume 2kki's equivalent of Poniko. From Urotsuki's room, go down the stairs; to the left should be another door, next to another set of stairs. Inside is Seishonen. Normally his room is bright green, but there is a 1 in 31 chance that upon entering it will become 'glitched'. When it becomes glitched, the room's colors become black and dark red and the entire room becomes distorted. Seishonen's sprite becomes what appears to be a mass of eyeballs while bits of him and the room randomly flash around on screen. There is also a hidden exit to the Magnet Room in the top right corner. However, the Magnet Room's background music will not be present if entered from here. Activating this event unlocks Kura Puzzle #41.

### Starry Night Event

Location: Planetarium

Description: Located on the second floor of the Planetarium is a theatre filled with comfy seats. Urotsuki can take a seat in one of them to view the starry night sky above.

### The Spider's Web

Location: Scenic Outlook

Description: From the area of the Scenic Outlook where the Bug Girl guarding the pole is, go down the stairs to the left to enter a small forest area. Walk up the ladder and use either the Chainsaw or Bug effect to get rid of the spider blocking the doorway, then go through it. You'll be taken to an area of the Bug Maze. Move forward to enter another area of the Bug Maze where there are lots of NPCs moving around and the music changes. If you move to the end of this area, you'll see a huge spider above a spider web. It will then slowly wrap you in its web, eventually covering up the entire screen.
After this event finishes, you'll be taken back to the entrance of the Bug Maze, where you first entered from the forest area of the Scenic Outlook.

### Urotsuki Sodding Dies

Location: Innocent Library

Description: In the Innocent Library, one of the bookshelves Urotsuki can interact with is stained with blood. Interacting with this shelf tints the screen red and takes Urotsuki to an isolated room, with one of the creatures from Chaos World. Urotsuki's soul will then leave her body. There's not much she can do in this room until she gets caught by the creature, which teleports her to the couch in the Fantasy Library.

### The White Drooling Creature

Location: Apartments

Description: A short full-screen event. The White Drooling Creature is found in the door on the first floor, second room from the left, the same room you're transported to from the Library. Interacting with it causes numbers to wave about on screen for a few seconds while a loud sound plays, startling any unsuspecting players. After viewing this event, the creature will then begin to run around the room quickly.

### Woman and the Mirror

Location: Ocean Storehouse

Description: In the cross-shaped sections of the Ocean Storehouse, there is a 21% chance that one of these areas will have a ghostly shadow on the double doors. Interacting with the shadow in any of these rooms will trigger a static flash, which starts a full-screen event showing a red-walled room with a bizarre background, where a woman sits in front of a mirror filled with static, twitching as unsettling music and chewing noises play.

### Zalgo

Location: Magnet Room

Description: The Magnet Room houses the Zalgo event, which can be accessed by finding the fenced-off section of wall with a crevice in it and heading directly south from the crevice's position in the wall. Continue south and run into the north face of the blue cube. It should push you over the barrier and into the enclosed area with the crevice. It is unclear in which version the location of the teleporter in this room changed; in any case, you'll want to immediately begin traveling northwest, not southeast, as there is a limited time before Urotsuki is transported out of the area and dropped off at the Intestines Maze. Note that the Motorcycle effect cannot be equipped here, and that the background changes here as well, hence the event's name. Interacting with the dark gray teleporter (located in a bloody area a good distance northwest of the entry point) will bring you into a black room with a large NPC and rain. Interacting with the NPC unlocks wallpaper #62.

## Variable 44 events

The variable #44 contains a randomized value between 0 and 255, which changes every time Urotsuki goes to sleep in Urotsuki's Room when awake, or while dreaming in the bed located in Urotsuki's Dream Apartments. This allows many events to be obtainable only on certain dream sessions. Not every random event in the game uses this value, but knowing which events use this variable allows us to predict whether other, rarer events are currently active. The following sections list some events related to this variable.

### Nexus Background

Location: The Nexus

The Nexus changes its color and design from gray, if the value of variable #44 is less than 128, to blue, if it is more than 128. The value also changes the position of the Nexus portals. As this is the easiest event to check among those that use variable #44, it's highly recommended to use it as a first indicator of the variable's current value.
### Eyeball NPC

Location: Grass World

Descending the staircase from the Gray Road, you'll come to a small room with a one-eyed creature sitting on a couch. Whenever you enter this room, it may be in a different pose depending on the current value of variable #44. If the value is less than 20, it will be lying on the couch. If the value is between 20 and 179, it will be sitting on the couch, awake. If the value is between 180 and 229, it will be sitting on the couch, but sleeping. If the value is between 230 and 255, it will be sitting on the couch, sleeping, but with its head even lower. As this event is easy to access, it's recommended to check it to identify the possible values of variable #44.

### The Creature in the Well

Location: Abandoned Apartments

This creature is found in a well located in the first area of the Abandoned Apartments. It will only appear if the value of variable #44 is between 170 and 255 (33%). If the variable has a value between 250 and 255, the creature will be red instead. Seeing this creature unlocks wallpaper #245.

### Snowy Path

Location: Riverside Waste Facility

If variable #44 has a value between 230 and 255 (9.8%), the path found before reaching the Undersea Temple will be covered with snow, the rain will be replaced with snow, and the wooden arch will have a Christmas hat. Getting this event unlocks wallpaper #264.

### Butcher's Wrath

Location: Mutant Pig Farm

If you have visited the area at least 15 times, there is a chance that, as you explore, the area will take on a red tint, with the background's movement and music accelerated. Additionally, the area will shake periodically, with static flashes occurring. This event is more likely to happen when the value of variable #44 is higher.

Every step you take in this area triggers a check, which generates two random numbers: one between 1 and 999,999 (X), and the other between 600 and 800 (Y). The check then takes the current value of variable #44 (V), subtracts 257 from it, and multiplies the result by -1. It then multiplies that by the random value Y. Finally, the random value X is divided by this product. If the remainder of this division equals 0, the event is triggered. To sum up, the event triggers when

$X \bmod \big( -(V - 257) \cdot Y \big) = 0$

Due to how this works, if variable #44 has a higher value, the product of the multiplication with the random value Y will be a lower number, increasing the chances of this number being a divisor of the random value X. Additionally, the event would always trigger if the value of variable #44 were 257, but it's impossible to get this value without manually changing the variable. Having Debug Mode activated completely disables the event.
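To make the arithmetic above concrete, here is a minimal Python sketch of the per-step check. The in-game event is implemented in RPG Maker, so this is only a re-expression of the description, with illustrative names, and the empirical loop at the end is just a rough way to see how the trigger rate scales with variable #44.

```python
import random

def butchers_wrath_triggers(v44: int) -> bool:
    """One per-step check, following the description above.

    v44 is the current value of variable #44 (0-255 in normal play).
    Returns True if the red-tint event would trigger on this step.
    """
    x = random.randint(1, 999_999)   # random value X
    y = random.randint(600, 800)     # random value Y
    divisor = -(v44 - 257) * y       # i.e. (257 - v44) * Y
    return x % divisor == 0          # event fires when the remainder is 0

# Rough empirical trigger counts for a low and a high variable #44 value.
trials = 200_000
for v in (10, 250):
    hits = sum(butchers_wrath_triggers(v) for _ in range(trials))
    print(f"variable #44 = {v:3d}: {hits} triggers in {trials:,} simulated steps")
```

As the closed-form expression suggests, a higher variable #44 shrinks the divisor, so far more of the simulated steps trigger the event.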
# statsmodels.robust.robust_linear_model.RLMResults.t_test_pairwise

RLMResults.t_test_pairwise(term_name, method='hs', alpha=0.05, factor_labels=None)

Perform pairwise t_test with multiple testing corrected p-values.

This uses the formula design_info encoding contrast matrix and should work for all encodings of a main effect.

Parameters

term_name : str
    The name of the term for which pairwise comparisons are computed. Term names for categorical effects are created by patsy and correspond to the main part of the exog names.
method
    The multiple testing p-value correction to apply. The default is 'hs'. See stats.multipletesting.
alpha : float
    The significance level for multiple testing reject decision.
factor_labels
    Labels for the factor levels used for pairwise labels. If not provided, then the labels from the formula design_info are used.

Returns

MultiCompResult
    The results are stored as attributes; the main attributes are the following two. Other attributes are added for debugging purposes or as background information.

    - result_frame : pandas DataFrame with t_test results and multiple testing corrected p-values.
    - contrasts : matrix of constraints of the null hypothesis in the t_test.

Notes

Status: experimental. Currently only checked for treatment coding with and without specified reference level. Currently there are no multiple testing corrected confidence intervals available.

Examples

>>> res = ols("np.log(Days+1) ~ C(Weight) + C(Duration)", data).fit()
>>> pw = res.t_test_pairwise("C(Weight)")
>>> pw.result_frame
         coef   std err         t         P>|t|  Conf. Int. Low  Conf. Int. Upp.  pvalue-hs  reject-hs
2-1  0.632315  0.230003  2.749157  8.028083e-03        0.171563         1.093067   0.010212       True
3-1  1.302555  0.230003  5.663201  5.331513e-07        0.841803         1.763307   0.000002       True
3-2  0.670240  0.230003  2.914044  5.119126e-03        0.209488         1.130992   0.010212       True
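For readers who want a self-contained version of the doctest above, here is a minimal sketch. The synthetic `data` frame is an assumption (the docstring's original dataset is not shown); any DataFrame with a multi-level categorical factor works, and the same `t_test_pairwise` method is available on OLS results as in the example as well as on RLM results.

```python
# Self-contained sketch of the doctest above; the synthetic data is an assumption.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
data = pd.DataFrame({
    "Days": rng.poisson(5, size=60),
    "Weight": np.repeat([1, 2, 3], 20),   # three-level factor, treatment-coded via C()
    "Duration": np.tile([1, 2], 30),
})

res = smf.ols("np.log(Days + 1) ~ C(Weight) + C(Duration)", data=data).fit()
pw = res.t_test_pairwise("C(Weight)", method="hs", alpha=0.05)

print(pw.result_frame)                # pairwise estimates with corrected p-values
print(pw.result_frame["reject-hs"])   # reject decisions after the 'hs' correction
```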
Nuclear Techniques ›› 2015, Vol. 38 ›› Issue (6): 60502-060502.

• NUCLEAR PHYSICS, INTERDISCIPLINARY RESEARCH •

### Calculation of desired X-ray collection angle on XRF analyzer designed by Monte Carlo method

LIU Hefan, GE Liangquan, XIE Xicheng, ZHAO Jiankun, LUO Yaoyao

(Chengdu University of Technology, Key Laboratory of Applied Nuclear Techniques in Geosciences, Chengdu 610059, China)

Received: 2014-09-22; Revised: 2014-11-20; Online: 2015-06-10; Published: 2015-06-05

Abstract: Background: The geometric layout of an X-ray fluorescence (XRF) analyzer needs careful consideration, including the detector-to-specimen distance, the detector-to-source distance, and the source-to-specimen distance. The desired X-ray collection angle is one of the important factors determining detection performance. However, geometric layouts based on experience alone can no longer meet the needs of every XRF analyzer design, because the performance of excitation sources and detectors keeps improving, sample processing technology is much more advanced, and so on. Purpose: The aim is to study the impact of the desired X-ray collection angle on XRF analyzer design, and to provide technical guidance on methodologies for XRF analyzer design. Methods: In this paper, we build XRF analyzer models by the Monte Carlo method and analyze the impact of the desired X-ray collection angle on XRF analyzer design. Results: Several quantities are analyzed as functions of the desired X-ray collection angle, such as Cu's characteristic X-ray fluorescence peak counts, the detector-axis-to-specimen distance, and Cu's peak-to-source ratio. Conclusions: With increasing distance between the detector and the specimen, the detector's pulse counts follow an exponential decay law. As the desired X-ray collection angle increases, Cu's characteristic X-ray fluorescence peak counts increase linearly. As the desired X-ray collection angle increases, the peak-to-source ratio decays exponentially, but the peak-to-total ratio remains the same.
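As an illustration of the kind of geometric Monte Carlo estimate involved (not the authors' actual model, which is a full radiation-transport simulation), the sketch below estimates the fraction of isotropically emitted fluorescence photons that fall inside the collection cone of a circular detector aperture, showing how the purely geometric part of the collected counts shrinks as the detector-to-specimen distance grows. All parameter values are assumptions chosen for demonstration.

```python
# Illustrative toy model only: isotropic point emitter on the specimen surface,
# circular detector aperture of radius `det_radius` centered on the axis at
# distance `d`. Geometric acceptance only; no attenuation or detector response.
import math
import random

def collected_fraction(d: float, det_radius: float, n: int = 200_000) -> float:
    half_angle = math.atan(det_radius / d)   # desired collection half-angle
    cos_min = math.cos(half_angle)
    hits = 0
    for _ in range(n):
        # Isotropic emission over the upward hemisphere: cos(theta) is uniform on [0, 1).
        cos_theta = random.random()
        if cos_theta >= cos_min:             # direction lies within the collection cone
            hits += 1
    return hits / n

for d in (5.0, 10.0, 20.0, 40.0):            # assumed detector-to-specimen distances (mm)
    frac = collected_fraction(d, det_radius=5.0)
    analytic = 1.0 - math.cos(math.atan(5.0 / d))   # hemisphere solid-angle fraction
    print(f"d = {d:5.1f} mm  MC fraction = {frac:.4f}  analytic = {analytic:.4f}")
```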
# What is the status of the Gauss Circle Problem?

For $r > 0$, let $L(r) = \# \{ (x,y) \in \mathbb{Z}^2 \ | \ x^2 + y^2 \leq r^2\}$ be the number of lattice points lying on or inside the standard circle of radius $r$. It is easy to see that $L(r) \sim \pi r^2$ as $r \rightarrow \infty$. The Gauss circle problem is to give the best possible error bounds: put $E(r) = |L(r) - \pi r^2|$.

Gauss himself gave the elementary bound $E(r) = O(r)$. In 1916 Hardy and Landau showed that it is not the case that $E(r) = O(r^{\frac{1}{2}})$. It is now believed that this is "almost" true, i.e.:

Gauss Circle Conjecture: For every $\epsilon > 0$, $E(r) = O_{\epsilon}(r^{\frac{1}{2}+\epsilon})$.

So far as I know the best published result is a 1993 theorem of Huxley, who shows one may take $\epsilon > \frac{19}{146}$.

In early 2007 I was teaching an elementary number theory class when I noticed that Cappell and Shaneson had uploaded a preprint to the arXiv claiming to prove the Gauss Circle Conjecture: http://arxiv.org/abs/math/0702613 Two more versions were uploaded, the last in July of 2007. It is now a little more than three years later, and so far as I know the paper has neither been published nor retracted. This seems like a strange state of affairs for an important classical problem.

Can someone say what the status of the Gauss Circle Problem is today? Is the argument of Cappell and Shaneson correct? Or is there a known flaw?

Comments:

- Cappell's Wikipedia page says that the paper "is still being vetted by experts." This was originally mentioned on 29 April 2008, but it has not been changed since. – Steve Huntsman Mar 23 '10 at 2:31
- @SH: Right, that was almost two years ago. I'm asking for an update from an expert. "Still vetting" is a possible answer, although in that case I'd be interested to know which part is taking so long to check. – Pete L. Clark Mar 23 '10 at 2:40
- I've sent an email to Cappell. I took a few courses from him in the nineties and I think he'll remember me. – Steve Huntsman Mar 23 '10 at 2:58
- I'll talk to Shaneson about it and forward the link. – Justin Curry Mar 23 '10 at 15:52
- Bruce Berndt gave a talk last week at Gainesville that gave history and current status, perhaps not from exactly the same viewpoint as yours of course. See math.ufl.edu/~fgarvan/antc-program/2009-10/mar-focused-week/… – Will Jagy Apr 1 '10 at 4:50
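Since $L(r)$ and $E(r)$ are defined by straightforward lattice-point counting, a brute-force check is easy to run. The short Python sketch below (not part of the original question, and limited to integer radii) computes both quantities and shows that the error stays far below $r$ itself:

```python
import math

def lattice_count(r: int) -> int:
    # Number of integer points (x, y) with x^2 + y^2 <= r^2.
    # For each x, y ranges over -floor(sqrt(r^2 - x^2)) .. +floor(...), i.e. 2*floor + 1 values.
    return sum(2 * math.isqrt(r * r - x * x) + 1 for x in range(-r, r + 1))

for r in (10, 100, 1000):
    L = lattice_count(r)
    E = abs(L - math.pi * r * r)
    print(f"r = {r:5d}  L(r) = {L:9d}  E(r) = {E:10.2f}  E(r)/sqrt(r) = {E / math.sqrt(r):.2f}")
```

For example, `lattice_count(10)` returns 317, against $\pi \cdot 100 \approx 314.16$, so $E(10) \approx 2.84$.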
# Vertical space around title

I'm trying to draw horizontal lines above and below the title and add some extra space between them and the text. I've been trying this code:

```latex
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage[margin=2cm]{geometry}

\title{
  \rule{\textwidth}{2pt}
  \vspace{1cm}
  \textbf{Notas en Computación Cuántica}
  \vspace{1cm}
  \rule{\textwidth}{2pt}
}
\author{John Doe}
\date{April 2019}

\begin{document}
\maketitle
\section{Introduction}
\end{document}
```

However, the resulting document looks like both \vspace commands are adding the space only below the title, even the one that is placed right above. Can anyone help me?

---

It's simpler to do it with the tools of the titling package:

```latex
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage[margin=2cm]{geometry}
\usepackage{titling}

\pretitle{\noindent\rule{\textwidth}{2pt}\bigskip\begin{center}\LARGE\bfseries}
\posttitle{\end{center}\bigskip\noindent\rule{\textwidth}{2pt}}
\preauthor{\vspace*{10ex}\begin{center}\Large}
\postauthor{\end{center}}

\title{Notas en Computación Cuántica}
\author{John Doe}
\date{April 2019}

\begin{document}
\maketitle
\section{Introduction}
\end{document}
```
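A note on why the original attempt misbehaves: \vspace issued in the middle of a paragraph (horizontal mode) only takes effect after the current output line ends, which is why both spaces end up below the line that was being built. One package-free alternative, sketched below under the assumption of the same preamble as in the question (spacing values are illustrative), is to end each line explicitly and attach the extra space to the line break itself:

```latex
% Alternative sketch (same preamble as in the question):
% end each line with \\ and attach the extra space to the break.
\title{%
  \rule{\textwidth}{2pt}\\[1cm]
  \textbf{Notas en Computación Cuántica}\\[1cm]
  \rule{\textwidth}{2pt}}
```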
# Q. Events $E_1$ and $E_2$ form a partition of the sample space $S$. $A$ is any event such that $P(E_1) = P(E_2) = \frac{1}{2}$, $P(E_2/A) = \frac{1}{2}$ and $P(A/E_2) = \frac{2}{3}$, then $P(E_1/A)$ is

KCET 2020

Solution: Since $E_1$ and $E_2$ partition $S$, we have $P(E_1/A) + P(E_2/A) = 1$, so $P(E_1/A) = 1 - \frac{1}{2} = \frac{1}{2}$.

Related questions:

## 4. The locus represented by $xy + yz = 0$ is

KCET 2018, Three Dimensional Geometry

## 10. The number of terms in the expansion of $(x^2 + y^2)^{25} - (x^2 - y^2)^{25}$ after simplification is

KCET 2019, Binomial Theorem