text
stringlengths
104
605k
# 3.05 Interpreting linear relations Lesson To interpret information from a linear graph or equation, we can look at pairs of coordinates. Coordinates tell us how one variable relates to the other. Each pair has an $x$x-value and a $y$y-value in the form $\left(x,y\right)$(x,y). • The $x$x-value tells us the value of the variable on the horizontal axis. • The $y$y-value tells us the value of the variable on the vertical axis. It doesn't matter what labels we give our axes, this order is always the same. ### Intercepts Two very important points on a graph are the $x$x and $y$y-intercepts. These are the points on the graph where the line crosses the $x$x and $y$y-axes respectively. These points usually have some significance in real life contexts. The $y$y-intercept, represented by the constant term $c$c in a linear equation of the form $y=mx+c$y=mx+c, represents things such as a fixed cost, the starting distance from a fixed point, or the amount of liquid in a vessel at time zero. Another key feature is the gradient, represented by $m$m in a linear equation of the form $y=mx+c$y=mx+c. This is a measure of the slope, or steepness, of a line. The gradient is most commonly associated with the concept of rates. It can represent things like the speed of a vehicle, the rate of flow of a shower, or the hourly cost of a tradesperson. Remember! For all linear equations of the form $y=mx+c$y=mx+c • The gradient is represented by $m$m • The $y$y-intercept is represented by $c$c We can use our knowledge of linear relations to get a better understanding of what is actually being represented. Let's look at some examples and see this in action. #### Worked example ##### Question 1 The number of eggs farmer Joe's chickens produce each day are shown in the graph. What does the point $\left(6,3\right)$(6,3) represent on the graph? Think:  The first coordinate corresponds to the values on the $x$x-axis (which in this problem would represent the number of days) and the second coordinate corresponds to values on the $y$y-axis ( which in this problem would represent the number of eggs). Do: Using the given information in context, we can interpret this point to mean that in $6$6 days the chickens will produce $3$3 eggs. Reflect: How many days does it take for the chickens to lay $1$1 egg? If it takes $6$6 days to produce three eggs, we can find the time taken to lay one egg by evaluating $6\div3=2$6÷​3=2 Notice that this is not the gradient of the line graph, as the graph represents the number off eggs laid in terms of days. The gradient in this case is $3\div6=\frac{1}{2}$3÷​6=12. #### Practice questions ##### question 1 The graph shows the temperature of a room after the heater has been turned on. 1. What is the gradient of the line? 2. What is the $y$y-intercept? 3. Write an equation in the form $y=mx+c$y=mx+c to represent the temperature of the room, $y$y, in terms of the time, $x$x. 4. What does the $y$y-intercept tell you? The temperature of the room before the heater has been turned on. A The time at which the temperature is $0^\circ$0°. B The temperature of the room $4$4 minutes after the heater has been turned on. C The amount by which the temperature of the room increases in the first minute. D The temperature of the room before the heater has been turned on. A The time at which the temperature is $0^\circ$0°. B The temperature of the room $4$4 minutes after the heater has been turned on. C The amount by which the temperature of the room increases in the first minute. D 5. Find the temperature of the room after the heater has been turned on for $44$44 minutes. ##### question 2 Petrol costs a certain amount per litre. The table shows the cost of various amounts of petrol. Number of litres ($x$x) Cost of petrol ($y$y) $0$0 $10$10 $20$20 $30$30 $40$40 $0$0 $14.70$14.70 $29.40$29.40 $44.10$44.10 $58.80$58.80 1. Write an equation relating the number of litres of petrol pumped, $x$x, and the cost of the petrol, $y$y. 2. How much does petrol cost per litre? 3. How much would $14$14 litres of petrol cost at this unit price? 4. In the equation, $y=1.47x$y=1.47x, what does $1.47$1.47 represent? The total cost of petrol pumped. A The number of litres of petrol pumped. B The unit rate of cost of petrol per litre. C The total cost of petrol pumped. A The number of litres of petrol pumped. B The unit rate of cost of petrol per litre. C ##### question 3 The number of fish in a river is approximated over a five year period. The results are shown in the following table. Time in years ($t$t) Number of fish ($F$F) $0$0 $1$1 $2$2 $3$3 $4$4 $5$5 $6600$6600 $6300$6300 $6000$6000 $5700$5700 $5400$5400 $5100$5100 1. Choose the graph that corresponds to this relationship. A B C D A B C D 2. Write down the gradient of the line. 3. What does the gradient represent in this context? The decrease in the fish population over the five year period. A The average number of fish in the river at a particular time. B The rate of change of the fish population over the five year period. C The rate of change of the fish population each year. D The decrease in the fish population over the five year period. A The average number of fish in the river at a particular time. B The rate of change of the fish population over the five year period. C The rate of change of the fish population each year. D 4. What is the value of $F$F when the line crosses the vertical axis? 5. Write down an equation for the line, using the given values. 6. Hence determine the number of fish remaining in the river after $12$12 years. 7. We want to determine the number of years until $2700$2700 fish remain in the river. Substitute $F=2700$F=2700 into the equation and solve for $t$t. ### Outcomes #### VCMNA340 Solve linear equations involving simple algebraic fractions.
Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp01jq085p04m DC FieldValueLanguage dc.contributor.authorSchwartz, Jacob A dc.contributor.otherAstrophysical Sciences—Plasma Physics Program Department dc.date.accessioned2020-11-20T05:58:50Z- dc.date.available2020-11-20T05:58:50Z- dc.date.issued2020 dc.identifier.urihttp://arks.princeton.edu/ark:/88435/dsp01jq085p04m- dc.description.abstractThe lithium vapor box is a concept for a divertor that can handle the extreme heat fluxes in future fusion reactors. Within a slot lined with capillary-porous material, Li vapor induces plasma detachment by cooling the plasma until it volumetrically recombines, lowering the heat flux on the divertor surfaces. The vapor is localized within the slot by condensation on walls near its entrance and by friction with the plasma. The concept is early in its development. Two experiments are described—one completed, one proposed—in support of the lithium vapor-box divertor. They use a linear geometry for reduced size and complication, rather than being integrated into a tokamak. They each take the form of three connected cylindrical stainless steel boxes, one of which is heated to 900 K to evaporate Li. The first studies the evaporation and flow of vapor without plasma, in order to demonstrate that it is possible to create a cloud of lithium vapor of specified density within an open box. We measure the temperature of the heated box, initially loaded with 1 g of Li, during a heating cycle, and measure the vapor mass flowing out during the cycle by weighing the box before and after. We compare the measured mass with a value calculated from simulations using the code SPARTA incorporating the measured temperatures in the boundary conditions. In runs with good experimental conditions, the two agree to within 15%. The technology developed for this first experiment is to be used in the second. The second, not yet built, studies the interaction of a 4e20/m³, 1.5 eV, 1 cm radius, magnetized plasma beam, supplied by the Magnum-PSI device, with a 16 cm long Li vapor cloud. Its goal is to demonstrate that Li vapor can induce recombination within the box and cause the beam's power to flow to the box walls rather than to the target. In simulations with the code B2.5-Eunomia, a 12 Pa vapor cloud reduces the plasma pressure at the target by 93%, largely via ion-neutral collisions, and the heat flux there is reduced from 3.7 MW/m² to 0.13 MW/m² by plasma recombination and cooling via Li excitation. dc.language.isoen dc.publisherPrinceton, NJ : Princeton University dc.relation.isformatofThe Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: <a href=http://catalog.princeton.edu> catalog.princeton.edu </a> dc.subjectdetached dc.subjectdivertor dc.subjectlithium vapor dc.subjectlithium vapor box dc.subjecttokamak dc.subject.classificationPlasma physics dc.titleExperimental and modeling studies for the development of the lithium vapor-box divertor
# compactMatrixForm -- global flag for compact printing ## Synopsis • Usage: compactMatrixForm = x • Consequences: • changes the display of matrices ## Description compactMatrixForm is a global flag that specifies whether to display matrices in compact form. The default value is true. The compact form is the form used by Macaulay, in which the multiplication and exponentiation operators are suppressed from the notation. i1 : R = ZZ[x,y]; i2 : f = random(R^{2},R^2) o2 = {-2} | 8x2+xy+3y2 7x2+8xy+3y2 | 1 2 o2 : Matrix R <--- R i3 : compactMatrixForm = false; i4 : f | 2 2 2 2 | o4 = {-2} | 8x + x*y + 3y 7x + 8x*y + 3y | 1 2 o4 : Matrix R <--- R ## For the programmer The object compactMatrixForm is .
# A fast-food item contains 544 nutritional calories. How do you convert this energy to joules? ##### 1 Answer Jun 11, 2016 By using the conversion factor of 4.184 Joules per calorie. #### Explanation: It is important to realize that when the term 'nutritional calorie' is used, they mean kilocalorie (= $1000$ calories). It is known that 1 calorie is approximately $4.184$ Joules. This means that 1 kilocalorie = $1000 \cdot 4.184$ Joules = $4184$ Joules Your fast-food item contains 544 kilocalories which compares to: $554 \cdot 4184$ = $2317936$ Joule = $2.32 \cdot {10}^{6}$ Joule Note that the exact conversion factor for calories to Joules depends on temperature and atmospheric pressure
Image Details Choose export citation format: Light Curves and Rotational Properties of the Pristine Cold Classical Kuiper Belt Objects • Authors: Audrey Thirouin, and Scott S. Sheppard 2019 The Astronomical Journal 157 228. • Provider: AAS Journals Caption: Figure 3. Study of 2004 VC131. The Lomb periodogram (plot a)) has one main peak, suggesting a rotational period of 2 × 7.85 hr (plot b)). The rotational phase of the light curve is between 0 and 2, and so two rotations are plotted. With plots (c) and (d), we calculated the potential mass ratios, size, and shape of the components, density, and separation (D) for a contact binary configuration.
Capacitive Sensing, the Hard Way Part 1 - Measuring Capacity This entry is the winner of the Grand Prize in the 7400 Contest. Thank you. A touchscreen is an interesting interface. I've been interested in interaction by means of touch for a long time. While pondering my next project, I wanted to make my own touch interface. Sure, you can buy a resistive screen and go from there, but there is a challenge in making your own screen. Also, you do not want to use force to activate. So, capacitive sensing is the way to go (yes, you can go IR too, next project, maybe). An extra benefit of capacitive sensing is the possibility of detecting proximity, and with proximity you open up an extra dimension of interaction. Now, you can do capacitive sensing using either PIC or AVR microprocessors. Both Microchip and Atmel have descriptions on how to do it and software that goes with it. But then the 7400 Contest came into existence and the fourth dimension was opened. Retrograde capacitive sensing; that would be neat. This is a long story, so I'll provide an index. If you are impatient and just want to see the results, then you may skip to the diagrams, the building part or the video. #### Capacitive sensing principles There are basically two ways to do capacitive sensing: Method 1: Step-response timing You apply a voltage to a RC circuit with known R and variable C and measure the time for the capacitor C to be at a certain percentage of the applied voltage. It is known that the voltage-time curve is described by: $UΔ = Ustep . ( 1 - e − t R ⁢ C )$ Method 2: Oscillator counter A RC circuit is configured in a loop-back setup, where the applied voltage changes polarity every time a certain threshold is reached. This causes the system to oscillate at a frequency dependent on known R and unknown C. The frequency will decrease when C increases. You then count the number of oscillations within a period of time. You could also use a LC circuit as the oscillator, but that is beyond our scope in this context. The advantage of method 2 is that it measures in constant time. You know when your sample will be ready. Method 1 lacks this stable timing. However, method 1 allows for a better measurement resolution. It is easier to measure time-intervals than frequencies. What has this to do with the 7400 Contest? Well, I want the best of both worlds. In the microprocessor case, you have to decide for either method 1 or method 2. You cannot use both at the same time. PIC and AVR microcontrollers have a limited measurement capability. Their master-clock is often too slow to do anything too fancy. And, what is the point in using 100% of your CPU cycles for your input detection? You compromise. I do not want to compromise and want a measurement that is both constant in time and high resolution. Furthermore, over-engineering is a matter of principle. Why do something the easy way when you can use a few extra chips and make your brain work for a change. BTW, AVR uses (a derivative of) method 1 and PIC uses method 2. #### Designing, the old-fashioned way The spec of the measurement system is as follows: • Constant time measurement • High resolution over a wide measurement area • Measure multiple capacitive channels, 16 is a nice number • Fast cycle time to detect motion; about 50 measurements per second per channel (allows for noise reduction) • Useful machine-readable output, RS-232 can be easily understood • Only 74xx and 40xx chips ##### Counting pulses The constant time measurement is done measuring the frequency of an RC oscillator. Strictly spoken, we could, of course, measure the exponential curve for constant time measuring, but that would a) require an accurate A/D converter, b) a very fast A/D converter, c) several measuring points (three or more) and d) lots of processing power. We have none of these and therefore we go with measuring frequency. A RC oscillator is easily built with an inverting gate with a Schmitt trigger input. All we have to do is count the number of pulses that are generated within a defined period of time. The frequency is then defined by count/tperiod. The accuracy is +0/-1 counts. ##### Counting fractions To improve on the accuracy, we can divide the pulses from the generator into small slots and see how many slots can be counted (measuring fractions). This can be done using a high-frequency master-clock which pulses are counted for each and every oscillator period. For example, if the master-clock is 30 times faster than the oscillator frequency, then we can improve the measurement by ~4.9 bits (${log}_{2}30$). There will still be loss of accuracy at high oscillator frequencies, but 2..3 bits improvement are still worth the effort. ##### Channel hopping Going through 16 channels can be accomplished by using analog switches. With a 50Hz scan-rate over all channels, we have 1/(50*16) seconds time for each channel. This also sets the absolute lower bound of the RC oscillator. We cannot measure any frequencies that are so low that we do not see (at least) several periods within one channel measurement's time span. A practical limit would suggest that we need at least several orders of magnitude between the RC oscillator and the channel-hopping frequency. ##### Serial protocol Having a lot of data is one thing, doing something with it is another. The data must be transmitted in a fashion that facilitates easy use. Information generated by the measurement: • Pulses counted in time period • Faction of pulse at end of time period • Channel information This is a lot of data and it is spit out at, at least, 16*50 times per second. There are two options here; use a SPI-like data stream or go for a more traditional approach and use RS232. Anyway, the counter and channel data must be converted to a serial data stream at a certain bitrate. The spec says: use RS232, and that means we have to implement start- and stop-bits. RS232 has the advantage of being an asynchronous serial stream and that eliminates any clock signaling or data ready signaling. Assuming that the data fits in 3 bytes and uses format 8n1, then the lowest bit rate is 16*50*3*(8+2) bits/s or 24000 Baud. Using standard baud rates we must opt for 38k4, 57k6 or 115k2 (for the purists: I know that the standard only goes to 9k6 or 19k2, depending version. However, most computers, since at least the mid-80ies, can go to 115k2 or more.). #### The 7400 Design The features implemented: • 16 channel capacitive inputs • 55Hz constant time scan rate • non-stop measurement • overlapped acquire and data output • 30MHz master clock operation • 12 bit RC-oscillator period counter • 8 bit fraction counter • 4 bit channel indicator • RS232 serial output at 115k2 Baud • DTR - channel hold Without more fuss, here are the drawings: Schematic diagram Timing details and data format Schematic and timing as PDF. GSchem source of schematic diagram. GSchem source of timing diagram. #### The Inner Workings Warning: Using 7400 logic at a frequency of 30MHz is a pain. You have to realize that the propagation delays are in the same order of magnitude as the clock frequency. This makes it not only hard to design, but also hard to debug. Using an oscilloscope can be a daunting task as the probe's capacitance (~10pF) delays the signal you are measuring. When looking at a cycle-time in the order of 33.3ns, you'll find that 1 (one) nano second is an eternity. The above image shows the propagation delay (tpd) of the CLK30 (upper trace) vs. CLK15 (lower trace) signals. Note that the CLK30 signal has one more inverter in the chain, so that the CLK15 signal is triggered on the falling edge. According to the datasheets, the '04 has a tpd of 6ns and the '74 D-flip-flop has a tpd for CP-to-Q of 14ns. With the measured time of 23.8ns we can see that it fits very well: 23.8 - 0.5*33.3 ~ 14 - 6; the gates are actually slightly faster than the datasheet's typical tpd. ##### Master Clock The 30MHz master clock (CLK30) is designed around a '04 inverter in a standard oscillator setup. The primary output of the oscillator is buffered once before it is used to make sure not to put additional load on the oscillator. Care has to be taken for start-up stability and the primary problem is to match the crystal with the load-capacitors. The master clock is the primary source for synchronization and fraction counting The master clock is divided by two using a '74 D-flip-flop to generate CLK15 and divided by two once more for CLK7_5. These clocks are used to feed the baud rate generator, the period timer and the clock monitor. ##### Channelized RC Generator The capacitive inputs are selected using 74xx-ized analog switches ('4051). The channels are selected using a synchronous '161 counter to feed a RC-oscillator build around a couple of '132 Schmitt-trigger nand-gates. The channel counter can be put on hold using the DTR signal from the RS232 port. The counter is automatically advanced by the channel sequencer (if DTR enabled) after the measurement period. There is of course a 16-to-1 analog multiplexer available, but it comes in a 24-pin package and that is monster size compared to the 16-pin package of the '4051. Besides, we need at least one inverting Schmitt-trigger gate and the options are 6- or 4-in-a-package. No reason to let the gates go unused. The design, as is, uses one extra analog switch package, but still saves space on the board. ##### Period Timer The capacitance measurement is timed with a '4020 14-stage counter fed with the 7.5MHz clock. The last stage output will be high after 215 counts of CLK30. The counter is reset at the start of each measurement so that all channels have consistent period timing. So, you thought that it would be 215 counts? Think again! The '4020 is a ripple counter and has tpd of 11ns for CP-to-Q1 and 6ns Qn-to-Qn+1. This means that the counter is 11+13*6=89 nano seconds late at Q14, or almost three CLK30 periods. The real problem this introduces is not the delay in itself, but the loss of synchronization, which has to be fixed. ##### Period Counter Counting the number of clocks generated by the RC-oscillator is one of the primary goals of the system. It uses two '590 8-bit synchronous counters in synchronous cascade. Only 12 bits are used for the final result, which sets the upper boundary condition for the input frequency. The counter will overflow at 30MHz * (212 - 1) / 215 ~ 3.75MHz. This is an approximate value, because the period timer is not exactly 215 counts and the count sequencer also plays a small role. The '590 chip has a nasty habit of not providing the counter output immediately, but it needs to be clocked into an output register (using the RCLK input). The channel sequencer takes care of that part at the appropriate time. ##### Fraction Counter The second goal, to provide more accuracy, is performed by the fraction counter. This counter, also an '590 chip, is clocked at the master clock frequency of 30MHz. It is set up in such a way that it will count CLK30 periods for each and every RC-oscillator period (CLOCK period). The counter is reset at the start of a period and then counts how many 33.3ns slots fit into the RC-oscillator's period. It is important that no CLOCK periods are skipped because this counter is independent of the RC-oscillator and the period timer, so we have no idea when the period timer will fire. The '590's 8 bits also sets the lower bound on the RC-oscillator frequency. A CLOCK period must be no longer than 28-1 CLK30 counts, or about 0.117MHz. Any lower frequency would overflow the counter. Note that there is nothing in the system that prevents it from measuring lower frequencies. There is just a loss of accuracy. ##### Count Sequencer A sequencer is used to put the period counter and fraction counter in the right balance. It is built around a series of D-flip-flops of a '175. Both the period timer's output PERIOD and the RC-oscillator's CLOCK are synchronized with the CLK30 master-clock. Basically, the '175 is used as two 2-bit shift registers. The effect of this is that CLOCK and PERIOD can be compared to each other without being afraid of glitches. The CLOCK signal is converted into CSYNC in shift stage one and COUNT in shift stage two. These two are then used to generate the counter reset for the fraction counter. The PERIOD signal is converted into PSYNC and HOLD in shift stages one and two respectively. The HOLD signal indicates to stop all counting as it signals the end of the measurement. For the detailed sequence see page 2 of the schematic diagram, which shows the entire timing sequence. ##### Channel Sequencer Hopping through all the channels is quite a task. The results of the measurement need to be transfered to the output stage, the channel counter must be advanced and the period timer reset. A series of D-flip-flops in a '175 chip, like above, are set up in a shift-register fashion. However, the advancement of the sequencer is not only dependent on the HOLD signal (which indicates the start of the sequencer), but it also depends on the RC-oscillator clock and a delay to let the next measurement channel settle. When the HOLD signal fires, the first thing is to load the outputs of the '590 counters with the LATCH signal to make the measured data available. The LATCH signal is a single pulse initiated by the rising edge of HOLD setting the output of a '74 D-flip-flop. The '74 is then reset by the LATCH signal so that only one single CLK30 period propagates through the '175. Stage two in the sequence activates the LOAD signal, loading the data in the serial output stage. The LOAD signal also activates a '123 timer set at ~33 micro seconds. This time is used to let the RC-oscillator settle in the next channel selection. The propagation of the pulse through the '175 is now delayed until it is synchronized with the (SAFE)CLOCK from the RC-oscillator. The reason is that it must be guaranteed that the measurement start of the new channel is exactly on a CLOCK period boundary. Otherwise, we'd see +/-1 variations on the count output with stable input frequency. The activation of the '123 timer also signals the serial output on the DSR line that data will be forthcoming. The receiver can then synchronize with this to know when the first byte of a data-set is sent. The period timer is held in reset while the '123 timer is active (RESET signal) and the channel counter is advanced at the rising edge of CCLR. Both RESET and CCLR are kept active for the settling period. After ~33 micro seconds, the CCLR signal resets the '590 counters and the new measurement begins. For the detailed sequence see page 2 of the schematic diagram, which shows the entire timing sequence. There is a small variation in the scan rate introduced here. The constant time rate is modulated with the synchronization of the (SAFE)CLOCK signal. The next channel is delayed because the sequencer waits for the RC-oscillator. However, this delay is two to three orders of magnitude away from the actual scan rate. Additionally, the clock monitor will prevent variations larger than 15 micro seconds. Who is counting micro seconds in a milli seconds world? ##### Clock Monitor A problem exists in the channel sequencer if the capacitive input(s) are not functioning. The channel sequencer can only advance through the complete cycle if and only if the RC-oscillator is running. When an input is shorted to ground or Vcc, then the sequencer would get stuck. The clock monitor prevents this from happening by using the re-trigger functionality of a '123 timer. If, for any reason, the RC-oscillator stops, then the timer will expire at about 15 micro seconds. This will then switch the '157 multiplexer into selecting the CLK15 signal instead of CLOCK. The channel sequencer is then free to advance to set up the new channel measurement. Once the RC-oscillator is running again, the multiplexer automatically reverts to the CLOCK input. The SAFECLOCK signal is not guaranteed to have a frequency above the monitor threshold. A channel that is stuck will result in a zero count in the period counter. Whereas a low frequency input signal is counted by the period counter, but will be prevented from stalling the sequencer too long. ##### Baud rate Generator The RS232 serial output requires a fixed clock of 115.2kHz to shift out the bits onto the serial port. The master clock runs at 30MHz and that means a division by 30MHz/115.2kHz ~260.42. We could, of course, do fancy divisions, but there is no need. Dividing CLK30 by 260 would result in 115384.6Hz, which is 115k2 plus 0.16%. That is close enough. There is a crystal at a frequency of 29.4912MHz, which would create a perfect 115k2 rate with a division factor of 256. However, at the time of building this contraption, there was none available at the shop, so I settled for 30MHz instead and let some gray cells ponder a suitable division. The 260 division is a problem because it is more than 8 bits. However, CLK15 divided by 130 yields the same result. Two '161 synchronous counters are cascaded to generate the division by 130. The '161 counter has a synchronous parallel load, which can be used to preset the counter, making it possible to do fancy counting. The parallel load value is set at -130 (minus 130), which, in 8-bit truncated two's complement binary, is 01111110. The '161s will then count up to 11111111 and the ripple carry output is set. At the next clock the parallel load value is again loaded and the process starts again. This is effectively a divide-by-130 counter. It must be noted that the ripple carry output of the '161 counter is not glitch-free. This is important because there is in fact a glitch on it (found out the hard way). So the carry out cannot be used as a clock output. However, the highest bit in the counter has the same frequency with two CLK15 periods low-time, which is fine as a baud rate clock. Finally, the baud rate generator is reset every time new data is latched into the '590s counter outputs. This is necessary to ensure the validity of the serial data. The baud rate generator operates independent of the measurement timing and that could result in a baud rate clock pulse (B115K2) too close to the data transfer from the counters into the shift registers. Therefore, the baud rate generator is reset to guarantee an idle-period on the serial output at the time of new data. ##### Serial Output The measurement data is transfered from the '590 counters to four cascaded '165 parallel-in-serial-out shift registers. There are 24 bits of data to be sent; 12 bits period counter (Cnt{0-11}), 8 bits fraction counter (Frac{0-7}) and 4 bits channel information (Q{0-3}). That makes 3 bytes of data. The data bits also need to include start- and stop-bits, which are an additional 6 bits. Finally, we must ensure that the serial data, at the time of loading and when done shifting, generates an idle condition on the TX line (a logic '1'). That means that the first and the last bit in the cascaded shift register are static '1'. When there is no more data available, the TX line is kept idle by re-inserting idle-bits into the shift register's serial cascade input. The bit-order in RS232 is LSB first, while the '165 has a (named) convention of MSB first. Therefore, data is physically connected in bit-reversed fashion with respect to the '165 naming convention. Once the LOAD signal is released, the baud rate generator will work through all the bits, transmitting them to any connected computer. It should be noted that the parallel load on the '165 is level active. Therefore we reset the baud rate generator before the parallel load. Otherwise we could have a B115K2 pulse that works glitchy. The data is sent in following order: bit7 bit6 bit5 bit4 bit3 bit2 bit1 bit0 Byte 0 Cnt11 Cnt10 Cnt9 Cnt8 Chan3 Chan2 Chan1 Chan0 Byte 1 Cnt7 Cnt6 Cnt5 Cnt4 Cnt3 Cnt2 Cnt1 Cnt0 Byte 2 Frac7 Frac6 Frac5 Frac4 Frac3 Frac2 Frac1 Frac0 ##### RS232 Level Shifter The final part in the design is, strictly seen, not required for operation. And is, because of its non-7400 nature, marked as optional. The level shifter, a MAX202, converts the RS232 signal levels of +/-15V to TTL logic level. The nice thing about the level shifter is that it has pull-ups on-chip, which helps to set the default (active-) level for the DTR input. However, you could always put a pull-up on it, but things for free are things for free. #### Building the Real Thing Now that we have a design we need to make it too. So, I made my way to the local makerspace Open Space Aarhus, got my stuff spread out on a table and this is the result: It took about 6 hours to solder all the connections. There are really many... As you can see, all bypass capacitors are mounted under the chips (in the socket), connected directly to the power supply pins. If you miss a bypass capacitor in such high-frequency design, you are toast, things will simply not work. ##### First Results Power was put on the board without delay after soldering was done, and surely, the polarity was put on right. No smoke or nasty smell was detected with a power consumption of about 275mW (5V*55mA). But, as you might imagine, there are always some problems with systems of this complexity. This was no exception. The first problem encountered was the master-clock oscillator. The crystal is a 3rd overtone type and it was not running at 30MHz, but got stuck on the base-frequency of 10MHz. Touching the leads of the inverter made the frequency hop over to 30MHz. There has to be done some work on tuning the capacitive load on the oscillator. Second problem was more nasty, but still easy to solve. For the observant, there is a chip difference in the build and the schematic drawing. The problem was that the channel sequencer was lacking the '74 D-flip-flop to feed the HOLD signal to the '175. I mistakingly used a simple gate instead of the edge-triggered flip-flop, and that made the LATCH output oscillate at 15MHz for a CLOCK period. So, instead of a fine single pulse propagating I had a pulse-train propagating. Luckily there was "plenty" of space left to add one more '74 and make sure that only one pulse would be generated on the HOLD activation. It also provided me with one extra D-flip-flop used for the third problem. The third problem was that I, apparently, lacked the ability to count to 215. The period timer was originally fed with CLK15, but that means that Q14 of the '4020 activates at 214. That is a whole factor of two short of the plan. Two possible solutions offered themselves: connect the baud rate generator to the '4020 and use output Q8, or use an extra divide-by-two on the CLK15 line. Because of the previous issue, there was a '74 D-flip-flip unused, so the nice thing to do would be to generate CLK7_5 and use that to feed the '4020 period timer. After fixing these problems there is a functional device: And with the touch-panel attached. Just beneath the crystal is the extra '74 chip mounted. You can also see my improvised capacitive touch-panel, which is detailed in part 2. But on we go to do some computer work and see if we can receive the serial data. ##### Computer Results Fire up minicom on /dev/ttyUSB0, set parameters to 115k2, 8n1 and see what happens. Bingo! There is junk coming in. Assured, this was a proud moment, it looks like things actually work. A quick software hack was needed to take in the data and see if it made sense. A command line utility was written in the most simple form: take input from stdin and find the channel data (running it with \$ ./chanread < /dev/ttyUSB0). Eventhough the DSR line can be used to find the start of the data, it is actually easier to scan the data for the channel identification. In the data stream you know that the low nibble of the first byte in any 3-byte set is the channel, which counts 0,1,2,...E,F,0,1,etc.. It is relatively easy to sync up with the data stream when looking for 16 consecutive channels. The chance of failure is remote as the rest of the data will be all over the place. A simple sync procedure looks like: ```/* Synchronize to input data stream */ void sync_datastream(void) { int i, ch; retry: for(i = 0; i < 16; i++) { ch = getchar(); if(i != (ch & 0x0f)) goto retry; else { getchar(); /* Read the two other data bytes */ getchar(); return; } } } ``` When returning from this function it is known that the next data byte is byte 0 for channel 0. It was soon determined that the data was coming out at the correct rate and was looking right. Time to visualize. A quick hack later with a small QT4 test-app. Please note that the test-app opens a named pipe (\$ mkfifo myfifo) and is run together with cat in the form "\$ ./app & cat < /dev/ttyUSB0 > myfifo" (a simple hack when you don't want to write complex modular config stuff). Each channel's data has to be converted into sane data. It is possible to use only the period counter values (12 bits), but the point was to get more accurate data. It can be determined from the timing of the system that the measured RC-oscillator frequency is: $fosc = Nperiod ∙ 3 ∙ 107 215 + 3 − Nfraction [Hz]$ The "+3" is caused by the propagation delay of the '4020. ```/* Calculate the frequency from the data and extract the channel identification */ int calc_frequency(uint8_t data[], int *channel) { *channel = data[0] & 0x0f; /* Low nibble is channel */ int cnts = (data[0] << 4) + (data[1] & 0xff); int frac = (data[2] & 0xff); return (int)((int64_t)cnts * 30000000LL / (int64_t)((1<<15) + 3 - frac)); } ``` The QT application reads the 16 channels, which are divided into 8 rows by 8 columns. This gives 64 visualized cross-points. The image above shows the system in idle mode, where you can see small differences in the channel's idle capacitive load marked at different shades of magenta. The image below shows the active situation when one finger is on the touch-panel. While the channel data is received, both maximum and minimum measurement values are recorded for each channel. The color on screen is based on the measurement span and provides a dynamic view. The color is calculated as the vector sum for each cell from the row and column data and mapped into a HSV cone on both Hue and Value with Saturation at maximum: ```/* Update channel c with input frequency f */ void update_channel(int c, int f) { fmin[c] = MIN(fmin[c], f); fmax[c] = MAX(fmax[c], f); if(fmin[c] == fmax[c]) ch_val[c] = 1.0; else ch_val[c] = (float)(f - fmin[c]) / (float)(fmax[c] - fmin[c]); } /* Get color for position x,y */ QColor get_color(int x, int y) { float vx = ch_val[x]; /* Channel 0.. 7 are columns */ float vy = ch_val[8 + y]; /* Channel 8..15 are rows */ float d = (vx*vx + vy*vy) / 2.0; return QColor::fromHsvF(d, 1.0, 1.0-0.8*d); } ``` With some more hacking, the channels are mapped row/column on top/left to see what the individual channels' values are doing. This makes it easier to see that there is a need for adaptive amplification end decay. Something left for further analysis for the moment. First a better touch-panel must be produced. The final set up in overview; even a pen induces enough change in capacitance that can be detected: #### Conclusion What a funny project. Retro logic is good practice once in a while. It makes you think about the daily conveniences you have using micro-controllers and powerful PCs. On the technical side, there are some things that are not entirely as they should be, while others were very positive. To name a few: • There is a lot of noise on the input. Most of it is caused by 50Hz line induction. The scan rate is too low for effective filtering because the line noise is above the Nyquist frequency. The period timer should probably count 214 to increase the scan rate to 110Hz. How is that for a design problem; maybe I can count after all, just not on a concious level. There would be no immediate problem for the output resolution if the period time is halved. • The system is very sensitive. The measured capacitance change induces 1..3% change in frequency with one finger on the touch-panel. Converting this to the actual induced capacitance would range below 1pF. The detection sensitivity is high enough so that a hand at 5cm is already clearly seen in the output. • The master-clock oscillator is still not 100% stable at startup. The drive on the crystal is probably too high and a series-resistor must be added to reduce the loop-gain. Something to work on, but I guess I must digest some theory first to get it right. • The touch-panel is a major design factor. There needs to be done a lot of work here to get this working better. From the pictures above, it can be seen that the wires are not entirely straight and not equidistant. However, the panel works better than expected. The wires are embedded in plastic with very poor dielectric properties, but is still transparent enough for the capacity change. It should be noted that this project could not have been built without using the facilities of the local makerspace@OSAA. Testing logic at high speeds requires expensive equipment. So, now just get this into the 7400 Contest... #### Update(s) • 2011-09-26: Published on Hack a Day. • 2011-09-28: The entry has been on Dangerous Prototypes' frontpage for the 7400 contest. • 2011-10-02: Finally got around to fix the master-clock oscillator startup issue. The problem, as suspected, was a too high loop-gain. The solution was to move the '04 inverter better into the linear zone by reducing R14 from 10MΩ to 4.7kΩ. The drive on the crystal is still a bit high ('04's input, at pin 1, swings 6.2V, which is more than Vcc, due to the high Q-factor). A series resistor of 100Ω or 220Ω should get that fixed, but the oscillator is rock solid now (the frequency reads 30.000574MHz on the scope), so I don't know if I should bother. • 2011-11-02: This entry is the winner of the Grand Prize in the 7400 Contest. Thank you. Posted: 2011-09-14 Updated: 2011-11-02 Overengineering @ request Prutsen & Pielen since 1982
# Meeting Details Title: An Algebraic Multilevel Preconditioner for Anisotropic Elliptic Equations based on Subgraph Matching CCMA Luncheon Seminar Yao Chen, Penn State We present a multilevel method for the solution of algebraic systems arising from discretizations of anisotropic (not necessarily grid aligned) diffusion equation. The overall solution procedure is the Algebraic MultiLevel Iteration (AMLI) method. The coarsening phase in the proposed algorithm is based on matching along strong connections in the graph associated with the underlying stiffness matrix. To identify the strong connections, we introduce a measure based on localized estimate of the stability of the $\ell_2$-orthogonal projection on the coarse spaces. The algorithm is algebraic and does not use the underlying geometry of the finite element of finite difference mesh. In case of anisotropic diffusion, the proposed strength of connection measures correctly indicate the direction of the underlying anisotropy. We present several numerical tests to illustrate the performance of this algorithm.
# Elasticsearch Internals: Networking Introduction UPDATE: This article refers to our hosted Elasticsearch offering by an older name, Found. Please note that Found is now known as Elastic Cloud. This article introduces the networking part of Elasticsearch. We look at the network topology of an Elasticsearch cluster, which connections are established between which nodes and how the different Java clients works. Finally, we look a bit closer on the communication channels between two nodes. ## Node Topology Elasticsearch nodes in a cluster form what is known as a full mesh topology, which means that each Elasticsearch node maintains a connection to each of the other nodes. In order to simplify the code base, the connections are used as one-way connections, so the connection topology actually ends up looking like this: When a node starts up, it reads a list of seed nodes from its configuration using unicast, and sends multicast messages looking for other nodes in the same cluster. As the node discovers more instances in the same cluster, it connects to them one by one, asks them for information about all the nodes they know and then attempts to connect to them all and officially join the cluster. In this way, previously running instances assist in quickly getting fresh instances up to speed on the current nodes in a cluster. ## Node Clients Even “client” Elasticsearch nodes (i.e nodes configured with node.client: true or build with NodeBuilder.client(true)) connect to the cluster this way. This implies that the client node becomes a full participant in the full mesh network. In other words, once it starts joining the cluster, all the existing nodes in the cluster will connect back to the instance. This means that both the client and server firewalls and publish hosts must be properly configured in order to allow this. Additionally, whenever a node client joins, leaves or experiences networking issues, it causes extra work for all the other nodes in the cluster, such as opening / closing network connections and updating the cluster state with the information about the node. ## Transport Clients On the other hand, “transport” clients work differently. When a Transport client connects to one or more instances in a cluster, the instances do not connect back, nor is the existence of the transport client part of the cluster state. This means that a joining / leaving transport client causes minimum extra work for the other nodes in the cluster. ## Connections and Channels What do we mean when we talk about the connection to a Node in Elasticsearch? In the sections above, we refrained from being specific about it and only used the term as a logical connection that allows communication between two nodes. Let's go into more detail. Usually, when we talk about connections in the context of networks, we refer to TCP-connections, which provide a reliable line of communication between two nodes. Elasticsearch uses (by default) TCP for communication between nodes, but in order to prevent important traffic such as fault-detection and cluster state changes from being affected by far less important, slower moving traffic such as query/indexing requests, it creates multiple TCP connections between each node. For each of these connections, Elasticsearch uses the term channel, which encapsulates the serialization / deserialization of messages over a particular connection. In earlier Elasticsearch versions there used to be three different classes of channels: low, med and high. After a while, ping was added, and a recent change renamed these channels such that they are more descriptive about their intention. At of the time of writing, the following channel classes exist: • recovery: 2 channels for the recovery of indexes • bulk: 3 channels for low priority bulk based operations such as bulk indexing. Previously called low. • reg: 6 channels for medium priority regular operations such as queries. Previous called med. • state: 1 channel dedicated to state based operations such as changes to the cluster state. Previously called high. • ping: 1 channel dedicated to pings between the instances for fault detection. The default number of channels in each of these class are configured with the configuration prefix of transport.connections_per_node. Elasticsearch has support for TCP keepalive which is not explicitly set by default. This prevents unused or idle channels from being closed, which would otherwise cascade into a complete disconnect-reconnect cycle as explained above. A consequence of the above is that adding a new instance to an existing cluster causes it to establish 13 connections to each node, and each node establishes 13 connections back to the new instance. In this article we've shown the basic network topology of an Elasticsearch cluster and introduced the concept of channels that are used for communication between nodes. In a later article we'll take a closer look at what goes on inside each of these channels.
for Journals by Title or ISSN for Articles by Keywords help Subjects -> ENGINEERING (Total: 2251 journals)     - CHEMICAL ENGINEERING (188 journals)    - CIVIL ENGINEERING (179 journals)    - ELECTRICAL ENGINEERING (98 journals)    - ENGINEERING (1194 journals)    - ENGINEERING MECHANICS AND MATERIALS (387 journals)    - HYDRAULIC ENGINEERING (55 journals)    - INDUSTRIAL ENGINEERING (61 journals)    - MECHANICAL ENGINEERING (89 journals) CHEMICAL ENGINEERING (188 journals) Geochemistry International   [SJR: 0.399]   [H-I: 18]   [2 followers]  Follow         Hybrid journal (It can contain Open Access articles)    ISSN (Print) 1556-1968 - ISSN (Online) 0016-7029    Published by Springer-Verlag  [2336 journals] • Deep differentiation of alkali ultramafic magmas: Formation of carbonatite melts • Authors: I. D. Ryabchikov; L. N. Kogarko Pages: 739 - 747 Abstract: The study of melt microinclusions in olivine megacrysts from meimechites and alkali picrites of the Maimecha–Kotui alkali ultramafic and carbonatite province (Polar Siberia) revealed that the melt compositions corrected for loss of olivine due to post-entrapment crystallization of olivine on inclusion walls (differentiates of primary meimechite magma) match well to the composition of nephelinites and olivine melilitites belonging to carbonatite magmatic series. Modeling of fractional crystallization of meimechite magmas results in the high-alkali melt compositions corresponding to the silicate–carbonate liquid immiscibility field. The appearance of volatile-rich melts at the base of magma-generating plume systems at early stages of partial melting can be explained by extraction of incompatible elements including volatiles, by near-solidus melts at low degrees of partial melting, and meimechites are an example of such magmas. Subsequent accumulation of CO2 in the residual melt results in generation of carbonate magma. PubDate: 2016-09-01 DOI: 10.1134/s001670291609007x Issue No: Vol. 54, No. 9 (2016) • Isotopic characteristics of the ermakovskoe fluorite–bertrandite–phenakite deposit (Western Transbaikalia) • Authors: G. S. Ripp; I. A. Izbrodin; E. I. Lastochkin; A. G. Doroshkevich; M. O. Rampilov; V. F. Posokhov Pages: 748 - 764 Abstract: Isotope-geochemical study of the Ermakovskoe fluorine–beryllium deposit was carried out to estimate the ore sources and role of host carbonate rocks in its formation. We analyzed oxygen and carbon isotope compositions in marbles, skarn carbonates, ore and post-ore parageneses; oxygen isotope compositions in oxides, silicates, apatite; and sulfur isotope composition in sulfides and sulfates. Sources of fluids participating in the rock and ore formation were determined using hydrogen and oxygen isotope compositions in hydroxyl-bearing minerals: phlogopite from marbles, vesuvian from skarns, eudidymite and bertrandite from ore parageneses, and bavenite of the post-ore stage. Isotopic studies suggest crustal source of sulfur, oxygen, and carbon dioxide, while oxygen and hydrogen isotope compositions in the hydroxyl-bearing minerals points to the contribution of meteoric waters in the formation of the fluorine-beryllium ores. PubDate: 2016-09-01 DOI: 10.1134/s0016702916090056 Issue No: Vol. 54, No. 9 (2016) • Zirconology of miaskites from the Ilmeny Mountains, South Urals • Authors: A. A. Krasnobaev; P. M. Valizer; S. V. Busharina; E. V. Medvedeva Pages: 765 - 780 Abstract: It is shown that the replacement and long evolution of miaskitic zircons led to the formation of two main age groups: 420–380 Ma (I) and 260–240 Ma (II). The age of miaskites is estimated at 440–445 Ma. Zircons I bear traces of fragmentation, dissolution, and replacement; they have “flat” REE patterns typical of metasomatic (hydrothermal) types, which is caused by allochthonous nature of the studied miaskites. Zircons II with differentiated REE patterns are similar to magmatic varieties, but have metamorphic origin. Mineralogical–geochemical and age characteristics of zircons in combination with structural–compositional features of miaskites define their metasomatic nature. The origin of the early zircon generations was related to the Ordovician rifting, while late generations were formed during shear deformations at the final stage of the evolution of the Uralian orogen. PubDate: 2016-09-01 DOI: 10.1134/s0016702916070041 Issue No: Vol. 54, No. 9 (2016) • Linear growth rate and sectorial growth dynamics of diamond crystals grown by the temperature-gradient techniques (Fe–Ni–C system) • Authors: Y. V. Babich; B. N. Feigelson; A. I. Chepurov Pages: 781 - 787 Abstract: The paper reports data on the linear growth rates of synthetic diamond single crystals grown at high P–T parameters by the temperature-gradient technique in the Fe–Ni–C system. Techniques of stepwise temperature changes and generation of growth microzoning were applied to evaluate the growth rates of various octahedral and cubic growth sectors and variations in these rates with growth time. The maximum linear growth rates of the order of 100–300 µm/h were detected at the initial activation of crystal growth, after which the growth rates nonlinearly decreased throughout the whole growth time to 5–20 µm/h. The fact that the linear growth rates can broadly vary indicates that the inner structure and growth dynamics of single diamond crystals grown by the temperature-gradient technique should be taken into account when applied in mineral–geochemical studies (capture of inclusions, accommodation of admixture components, changes of the defective structure, etc.). PubDate: 2016-09-01 DOI: 10.1134/s0016702916080036 Issue No: Vol. 54, No. 9 (2016) • Y–REE-Rich zircons of the Timan region: Geochemistry and economic significance • Authors: A. B. Makeyev; S. G. Skublov Pages: 788 - 794 Abstract: Mineralogical–geochemical studies of zircon from the Ichet’yu occurrence revealed unusually high Y and HREE contents (correlative with the P content) in the inner parts and zones of approximately 10% of the grains. They represent the intermediate members of the zircon–xenotime join with a heterovalent scheme of isomorphism Zr4+ + Si4+ → (Y + HREE)3+ + P5+. Geochronological and mineralogical–geochemical data suggest that the Middle Timan basement (the most probable source of zircon of the Ichet’yu occurrence) is made up of the Paleoproterozoic rocks and possibly represents a continuation beneath the Mezen syneclise and Middle Timan of the Paleoproterozoic collisional structure, to which the Arkhangelsk diamond province is confined. PubDate: 2016-09-01 DOI: 10.1134/s0016702916080073 Issue No: Vol. 54, No. 9 (2016) • Hydrogeochemistry at mining districts • Authors: R. F. Abdrakhmanov; R. M. Akhmetov Pages: 795 - 806 Abstract: The Southern Urals exemplifies hydrogeochemical environments at mining districts. Information obtained by studying the geochemistry of nonferrous-metal industrial wastes (both mine and dump drainage) is important not only because these wastes are potential sources of base metals but also in the context of geoecological problems. The Southern Urals is one of Russia’s principal producers of Cu and Zn concentrates for metallurgical processing: the region produces 12–15% Cu and 49% Zn concentrates in the country and 35% Cu and 69% Zn concentrates in the Urals. The Yubileinoe, Podol’skoe, Sibai, Uchaly, Novy Uchaly, and Gai deposits are the largest in the Urals. The ores of these deposits contain certain components (Se, Te, Cd, Co, Ga, Ge, In, Be, etc.) that are environmental contaminants. The volume of mine and dump drainage in the Southern Urals amounts to 9 million m3/year, and its mineralization varies from 3.0 to 30–40 g/L, occasionally as high as 365 g/L, with a sulfate, chloride–sulfate calcic–magnesian, magnesian–sodic, and magnesian–calcic composition of the waters. The minor and trace elements of the regional waste waters whose concentrations exceed the regional background values are Cu, Zn (one to four orders of magnitude), As, Cd (one to three orders of magnitude), Li and Be (one to two orders of magnitude). All waste waters transfer various contaminants into environmental subsystems and most actively modify the composition of the groundwaters. At the same time, dump drainage is a potentially important secondary source of valuable mineral components. PubDate: 2016-09-01 DOI: 10.1134/s0016702916080024 Issue No: Vol. 54, No. 9 (2016) • Geochemistry of sediments from Lake Grand, Northeast Russia • Authors: P. S. Minyuk; V. Ya. Borkhodoev Pages: 807 - 816 Abstract: Major and trace element distribution in the bottom sediments from Hole 13 drilled in Lake Grand, Magadan district, was studied using the method of principal components. It was established that geochemical characteristics are correlated with environmental changes. The sediments of cold MIS2 and MIS4 are characterized by the enriched TiO2, MgO, Al2O3, Fe2O3, and Cr and low Na2O, K2O contents, which is related to the grain-size composition of sediments. Sediments of warm stages show an opposite tendency. High concentration peaks of iron, phosphorus, and manganese correspond to the accumulation levels of vivianite and ferromanganese rocks. Silica is represented by biogenic and abiogenic varieties. Maximum SiO2 contents were found in the Late Holocene sediments and mark the high biological productivity of the basin. Revealed variations of some elements are correlated with the Heinrich events. PubDate: 2016-09-01 DOI: 10.1134/s0016702916070065 Issue No: Vol. 54, No. 9 (2016) • Geochemical features of mature hydrocarbon systems and indicators of their recognition • Authors: S. A. Punanova; T. L. Vinogradova Pages: 817 - 823 PubDate: 2016-09-01 DOI: 10.1134/s0016702916080103 Issue No: Vol. 54, No. 9 (2016) • Calculation of equilibria in CO 2 –water–salt systems using the Frezchem model • Authors: M. V. Mironenko; V. B. Polyakov; G. M. Marion Pages: 824 - 828 PubDate: 2016-09-01 DOI: 10.1134/s0016702916080085 Issue No: Vol. 54, No. 9 (2016) • Behavior of lanthanides during the formation of the Svetloe deposit, Chukotka • Authors: Yu. A. Popova; A. Yu. Bychkov; S. S. Matveeva Pages: 732 - 738 PubDate: 2016-08-01 DOI: 10.1134/s0016702916060057 Issue No: Vol. 54, No. 8 (2016) • Diamonds in the products of the 2012–2013 Tolbachik eruption (Kamchatka) and mechanism of their formation • Authors: E. M. Galimov; G. A. Karpov; V. S. Sevast’yanov; S. N. Shilobreeva; A. P. Maksimov Pages: 829 - 833 Abstract: The origin of diamonds in the lava and ash of the recent Tolbachik eruption of 2012–2013 (Kamchatka) is enigmatic. The mineralogy of the host rocks provides no evidence for the existence of the high pressure that is necessary for diamond formation. The analysis of carbon isotope systematics showed a similarity between the diamonds and dispersed carbon from the Tolbachik lava, which could serve as a primary material for diamond synthesis. There are grounds to believe that the formation of Tolbachik diamonds was related to fluid dynamics. Based on the obtained results, it was suggested that Tolbachik microdiamonds were formed as a result of cavitation during the rapid movement of volcanic fluid. The possibility of cavitation-induced diamond formation was previously theoretically substantiated by us and confirmed experimentally. During cavitation, ultrahigh pressure is generated locally (in collapsing bubbles), while the external pressure is not critical for diamond synthesis. The conditions of the occurrence of cavitation are rather common in geologic processes. Therefore, microdiamonds of such an origin may be much more abundant in nature than was supposed previously. PubDate: 2016-10-01 DOI: 10.1134/s0016702916100037 Issue No: Vol. 54, No. 10 (2016) • Micro- and nano-inclusions in a superdeep diamond from São Luiz, Brazil • Authors: Hiroyuki Kagi; Dmitry A. Zedgenizov; Hiroaki Ohfuji; Hidemi Ishibashi Pages: 834 - 838 Abstract: We report cloudy micro- and nano-inclusions in a superdeep diamond from São-Luiz, Brazil which contains inclusions of ferropericlase (Mg, Fe)O and former bridgmanite (Mg, Fe)SiO3 and ringwoodite (Mg, Fe)2SiO4. Field emission-SEM and TEM observations showed that the cloudy inclusions were composed of euhedral micro-inclusions with grain sizes ranging from tens nanometers to submicrometers. Infrared absorption spectra of the cloudy inclusions showed that water, carbonate, and silicates were not major components of these micro- and nano-inclusions and suggested that the main constituent of the inclusions was infrared-inactive. Some inclusions were suggested to contain material with lower atomic numbers than that of carbon. Mineral phase of nano- and micro-inclusions is unclear at present. Microbeam X-ray fluorescence analysis clarified that the micro-inclusions contained transition metals (Cr, Mn, Fe, Co, Ni, Cu, Zn) possibly as metallic or sulfide phases. The cloudy inclusions provide an important information on the growth environment of superdeep diamonds in the transition zone or the lower mantle. PubDate: 2016-10-01 DOI: 10.1134/s0016702916100062 Issue No: Vol. 54, No. 10 (2016) • Fundamentals of the mantle carbonatite concept of diamond genesis • Authors: Yu. A. Litvin; A. V. Spivak; A. V. Kuzyura Pages: 839 - 857 Abstract: In the mantle carbonatite concept of diamond genesis, the data of a physicochemical experiment and analytical mineralogy of inclusions in diamond conform well and solutions to the following genetic problems are generalized: (1) we substantiate that upper mantle diamond-forming melts have peridotite/eclogite–carbonatite–carbon compositions, melts of the transition zone have (wadsleyite ↔ ringwoodite)–majorite–stishovite–carbonatite–carbon compositions, and lower mantle melts have periclase/wüstite–bridgmanite–Ca-perovskite–stishovite–carbonatite–carbon compositions; (2) we plot generalized diagrams of diamondforming media illustrating the variable compositions of growth melts of diamonds and paragenetic phases, their genetic relationships with mantle matter, and classification relationships between primary inclusions; (3) we study experimentally equilibrium diagrams of syngenesis of diamonds and primary inclusions characterizing the diamond nucleation and growth conditions and capture of paragenetic and xenogenic minerals; (4) we determine the fractional phase diagrams of syngenesis of diamonds and inclusions illustrating regularities in the ultrabasic–basic evolution and paragenetic transitions in diamond-forming systems of the upper and lower mantle. We obtain evidence for physicochemically similar melt–solution ways of diamond genesis at mantle depths with different mineral compositions. PubDate: 2016-10-01 DOI: 10.1134/s0016702916100086 Issue No: Vol. 54, No. 10 (2016) • Indicator reactions of K and Na activities in the upper mantle: Natural mineral assemblages, experimental data, and thermodynamic modeling • Authors: O. G. Safonov; V. G. Butvina Pages: 858 - 872 Abstract: The paper presents a review of data on mineral assemblages and reactions that are potential indicators of K and Na activities in upper mantle fluids and melts modifying upper mantle rocks in the course of mantle metasomatism. Results of experimental modeling of these reactions are discussed. These data are utilized to calculate phase reactions in $$\log \left( {{a_{{H_2}O}}} \right) - \log \left( {{a_{{K_2}O}}} \right)and\log \left( {{a_{{H_2}O}}} \right) - \log \left( {{a_{N{a_2}O}}} \right)$$ space by minimizing the Gibbs free energy (constructing pseudosections). The calculations of this type make it possible to estimate variations in K and Na activities in processes modifying upper mantle rocks, to predict successions of mineral assemblages that are formed when these parameters vary, and to compare metasomatic processes in rocks of various composition. The approach is illustrated by examples of peridotite and eclogite xenoliths in kimberlite and alkaline basalt. PubDate: 2016-10-01 DOI: 10.1134/s0016702916100098 Issue No: Vol. 54, No. 10 (2016) • Spectral and structural properties of carbon nanoparticles synthesized in natural and anthropogenic processes • Authors: S. A. Voropaev; V. S. Sevast’yanov; A. Yu. Dnestrovskii; E. A. Ponomareva; N. V. Dushenko; V. M. Shkinev; A. S. Aronin Pages: 873 - 881 Abstract: In this contribution, we considered the character of carbon nanoparticle formation in the cosmos and during volcanic eruptions of a certain type and compared it with existing methods of synthesis in nanotechnology. Using the methods of electron diffraction and Raman spectroscopy, we investigated nanodiamond samples synthesized by hydrodynamic cavitation in various hydrocarbon liquids. Different forms of nanometer-sized carbon were distinguished, including complex fullerenes, nanodiamonds, and a face-centered cubic (fcc) carbon phase. The synthesized nanodiamonds were doped with silicon, their photoluminescence spectra were analyzed, and application of the results for geochemistry and cosmochemistry were discussed. PubDate: 2016-10-01 DOI: 10.1134/s0016702916100104 Issue No: Vol. 54, No. 10 (2016) • Relationships between textural and photoluminescence spectral features of carbonado (natural polycrystalline diamond) and implications for its origin • Authors: Hidemi Ishibashi; Hiroyuki Kagi; Shoko Odake; Hiroyuki Ohfuji; Hiroshi Kitawaki Pages: 882 - 889 PubDate: 2016-10-01 DOI: 10.1134/s0016702916100050 Issue No: Vol. 54, No. 10 (2016) • The mineralogy of Ca-rich inclusions in sublithospheric diamonds • Authors: D. A. Zedgenizov; A. L. Ragozin; V. V. Kalinina; H. Kagi Pages: 890 - 900 Abstract: This paper discusses mineralogy of Ca-rich inclusions in ultra-deep (sublithospheric) diamonds. It was shown that most of the Ca-rich majoritic garnets are of metabasic (eclogitic) affinity. The observed variation in major and trace element composition is consistent with variations in the composition of the protolith and the degree of enrichment or depletion during interaction with melts. Major and trace element compositions of the inclusions of Ca minerals in ultra-deep diamonds indicate that they crystallized from Ca-carbonatite melts that were derived from partial melting of eclogite bodies in deeply subducted oceanic crust in the transition zone or even the lower mantle. The occurrence of merwinite or CAS inclusions in ultra-deep diamonds can serve as mineralogical indicators of the interaction of metaperidotitic and metabasic mantle lithologies with alkaline carbonatite melts. The discovery of the inclusions of carbonates in association with ultra-deep Ca minerals can not only provide additional support for their role in the diamond formation process but also help to define additional mantle reservoirs involved in global carbon cycle. PubDate: 2016-10-01 DOI: 10.1134/s0016702916100116 Issue No: Vol. 54, No. 10 (2016) • Mineralogical and geochemical patterns of mantle xenoliths from the Jixia region (Fujian Province, southeastern China) • Authors: G. S. Zhang; A. V. Bobrov; J. S. Long; W. H. Han Pages: 901 - 913 Abstract: The paper discusses the results of mineralogical and petrographic studies of spinel lherzolite xenoliths and clinopyroxene megacrysts in basalt from the Jixia region related to the central zone of Cenozoic basaltic magmatism of southeastern China. Spinel lherzolite is predominantly composed of olivine (Fo89.6–90.4), orthopyroxene (Mg# = 90.6–92.7), clinopyroxene (Mg# = 90.3–91.9), and chrome spinel (Cr# = 6.59–14.0). According to the geochemical characteristics, basalt of the Jixia region is similar to OIB with asthenospheric material as a source. The following equilibrium temperatures and pressures were obtained for spinel peridotite: 890–1269°C and 10.4–14.8 kbar. Mg# of olivine and Cr# of chrome spinel are close to the values in rocks of the enriched mantle. It is evident from analysis of the textural peculiarities of spinel lherzolite that basaltic melt interacted with mantle rocks at the xenolith capture stage. Based on an analysis of the P–T conditions of the formation of spinel peridotite and clinopyroxene megacrysts, we show that mantle xenoliths were captured in the course of basaltic magma intrusion at a significantly lower depth than the area of partial melting. However, capture of mantle xenoliths was preceded by low-degree partial melting at an earlier stage. PubDate: 2016-10-01 DOI: 10.1134/s0016702916100049 Issue No: Vol. 54, No. 10 (2016) • Interaction of Fe and Fe 3 C with hydrogen and nitrogen at 6–20 GPa: a study by in situ X-ray diffraction • Authors: K. D. Litasov; A. F. Shatskiy; E. Ohtani Pages: 914 - 921 Abstract: A method of in situ X-ray diffraction at Spring-8 (Japan) was used to analyze simultaneously the hydrogen incorporation into Fe and Fe3C, as well as to measure the relative stability of carbides, nitrides, sulfides, and hydrides of iron at pressures of 6–20 GPa and temperatures up to 1600 K. The following stability sequence of individual iron compounds was established in the studied pressure and temperature interval: FeS > FeN > FeC > FeH > Fe. A change in the unit-cell volume as compared to the known equations of state was used to estimate the hydrogen contents in carbide Fe3C and hydride FeHx. Data on hydride correspond to stoichiometry with x ≈ 1. Unlike iron sulfides and silicides, the solubility of hydrogen in Fe3C seemed to be negligibly low—within measurement error. Extrapolating obtained data to pressures of the Earth’s core indicates that carbon and hydrogen are mutually incpompatible in the iron–nickel core, while nitrogen easily substitutes carbon and may be an important component of the inner core in the light of the recent models assuming the predominance of iron carbide in its composition. PubDate: 2016-10-01 DOI: 10.1134/s0016702916100074 Issue No: Vol. 54, No. 10 (2016) • Manifestation of nitrogen interstitials in synthetic diamonds obtained using a temperature gradient technique (Fe–Ni–C system) • Authors: Yu. V. Babich; B. N. Feigelson; A. I. Chepurov Pages: 922 - 927 Abstract: The IR-peak 1450 cm–1 (H1a-center) associated with nitrogen interstitials have been studied in nitrogen-bearing diamonds synthesized at high P-T parameters in the Fe–Ni–C system. FTIR study shows that manifestation of this nitrogen form is restricted to the regions of active transformation of C-defects into A-defects, which confirms the connection of its formation with C => A aggregation process. An examination of the dependence of the 1450 cm–1 peak on the degree of nitrogen aggregation indicates that H1a-centers are not only formed during C/A aggregation but also disappear simultaneously with the end of C => A transformation. Established facts suggest direct involving of nitrogen as interstitials in the C => A aggregation and serve as strong experimental argument in support of the “interstitial” mechanism of nitrogen migration during aggregation in diamonds containing transition metals. PubDate: 2016-10-01 DOI: 10.1134/s0016702916100025 Issue No: Vol. 54, No. 10 (2016) JournalTOCs School of Mathematical and Computer Sciences Heriot-Watt University Edinburgh, EH14 4AS, UK Email: [email protected] Tel: +00 44 (0)131 4513762 Fax: +00 44 (0)131 4513327 Home (Search) Subjects A-Z Publishers A-Z Customise APIs
# Young Women in Harmonic Analysis and PDE ## December 2-4, 2016 ### Janina Gärtner (Karlsruhe Institute of Technology) #### Existence of solutions of the Lugiato-Lefever equation on $\mathbb{R}$ The stationary Lugiato-Lefever equation is given by \begin{align*} -du''+(\zeta-\mathrm i)u-|u|^2u+\mathrm i f\,=\,0 \end{align*} for $d\in\mathbb{R}$, $\zeta,f>0$. Here, we are interested in solutions in $\{u=u^\star+\widetilde{u}: u^\star=const., \widetilde{u}\in H^2(\mathbb{R}), u'(0)=0\}$. It is well known that for $d>0$, $\zeta>0$ the solutions of the nonlinear Schr\"odinger equation \begin{align*} \left\{ \begin{array}{l l} u\in H^1(\mathbb{R}), u\neq0, \\ -du''+\zeta u-|u|^2u\,=\,0 \end{array} \right. \end{align*} are given by $u(x)=\mathrm e^{\mathrm i \alpha}\varphi(x)$, $\alpha\in\mathbb{R}$, where $\varphi(x)=\sqrt{2\zeta}\frac{1}{\cosh\sqrt{\frac{\zeta}{d}}x}$. Using a theorem of Crandall-Rabinowitz, we can show that bifurcation with respect to the parameter $f$ only arises for $\alpha\in\left\{\frac{\pi}{2},\frac{3\pi}{2}\right\}$. \noindent Afterwards, we are interested in solutions $u_\varepsilon$ of \begin{align*} -du''+(\widetilde{\zeta}-\varepsilon \mathrm i)u-|u|^2 u+\mathrm i \widetilde{f}\,=\,0 \end{align*} for small $\varepsilon>0$. Then $a(x):=\frac{1}{\sqrt{\varepsilon}}\cdot u_{\varepsilon}\left(\frac{1}{\sqrt{\varepsilon}}x\right)$ solves the stationary Lugiato-Lefever equation for $\zeta=\frac{\widetilde{\zeta}}{\varepsilon}$ and $f=\frac{\widetilde{f}}{\varepsilon^{\frac{3}{2}}}$. Using a reformulation of this equation, Sturm's oscillation and comparison theorem, Agmon's principle and a suitable version of the implicit function theorem, we can find a quantitative neighborhood where the reformulated equation is uniquely solvable.
# Market Discrepancy in ETFs [closed] Today Yahoo, Google, CNBC, etc. are all reporting an open for DDM of \$106.95, a close of \$107.69 and a delta on the day of \$2.69. But the arithmetic difference between the open and the close is actually \$0.74. Why is there such a huge discrepancy? - ## closed as off-topic by Joshua Ulrich, olaker♦Jan 2 at 19:39 This question appears to be off-topic. The users who voted to close gave this specific reason: • "Basic financial questions are off-topic as they are assumed to be common knowledge for those studying or working in the field of quantitative finance." – Joshua Ulrich, olaker If this question can be reworded to fit the rules in the help center, please edit the question. ## 1 Answer The day's net change is typically displayed as the $Close$ to $Close$ difference. So today's close of $\$107.69$, yesterday's close is$\$105.00$. So the net change, $\$107.69 - \$105.00 = \$2.69$. If you were to calculate$Open$to$Close$, you'd get what you expect,$\$0.74$. - Much thanks... I should have known that. –  Roger Smith Dec 8 '13 at 0:40 Please accept the answer if it resolves your question. –  Louis Marascio Dec 9 '13 at 0:37
# 20% in one year vs 4% per year over 5 years I am looking for information for some extended family members. A medical procedure has a 4% chance of stroke per year over 5 years. One family member has said that that is a 20% chance of stroke over all (5*4%). Others argue that it is less because 4% each year is smaller. I am not sure how to frame this question in order to get an accurate answer. Is the chance of something happening being 4% a year for 5 years the same as say an instantaneous 20% chance of something happening? - If I interpret your question correctly, you want to know the probability of a stroke occurring in $5$ years given that there is a constant $4\%$ chance of stroke each year? Using complementary probability is the best approach here. Since there is a $4\%$ chance of a stroke happening each year, there is a $96\%$ chance of a stroke not happening each year. Then there is a $(.96)^5\approx.8153$, or $81.53\%$ chance that no stroke happens in those $5$ years. This means that there is a $100\%-81.53\%=18.47\%$ chance of stroke happening in those $5$ years, not $20\%$. Think about this also, if something has a $4\%$ chance of happening each year, does this mean there is a $25\cdot 4\%=100\%$ chance that it happens in $25$ years? It doesn't make very much intuitive sense that a relatively rare event is guaranteed to happen in a certain time span. @chrisfs Yes, when writing my answer, I switched the $1$ and $5$ in $.8153$ accidentally, I have since edited it. My apologies! It should be correct now. –  yunone Jan 26 '11 at 4:47
## Rocky Mountain Journal of Mathematics ### A hyperbolic non-local problem modelling MEMS technology #### Article information Source Rocky Mountain J. Math., Volume 41, Number 2 (2011), 505-534. Dates First available in Project Euclid: 2 May 2011 https://projecteuclid.org/euclid.rmjm/1304345451 Digital Object Identifier doi:10.1216/RMJ-2011-41-2-505 Mathematical Reviews number (MathSciNet) MR2794451 Zentralblatt MATH identifier 1228.35132 #### Citation Kavallaris, N.I.; Lacey, A.A.; Nikolopoulos, C.V.; Tzanetis, D.E. A hyperbolic non-local problem modelling MEMS technology. Rocky Mountain J. Math. 41 (2011), no. 2, 505--534. doi:10.1216/RMJ-2011-41-2-505. https://projecteuclid.org/euclid.rmjm/1304345451 #### References • J.W. Bebernes and A.A. Lacey, Global existence and finite-time blow-up for a class of nonlocal parabolic problems, Adv. Diff. Eqns. 2 (1997), 927-953. • –––, Shear band formation for a non-local model of thermo-viscoelastic flows, Adv. Math. Sci. Appl. 15 (2005), 265-282. • P. Bizon, T. Chmaj and Z. Tabor, On blowup for semilinear wave equations with a focusing nonlinearity, Nonlinearity 17 (2004), 2187-2201. • P. Bizon, D. Maison and A. Wasserman, Self-similar solutions of semilinear wave equations with a focusing nonlinearity, Nonlinearity 20 (2007), 2061-2074. • C.Y. Chan and K.K. Nip, On the blow-up of $|u_tt|$ at quenching for semilinear Euler-Poisson-Darboux equations, Comp. Appl. Mat. 14 (1995), 185-190. • P.H. Chang and H.A. Levine, The quenching of solutions of semilinear hyperbolic equations, SIAM J. Math. Anal. 12 (1981), 893-903. • P. Esposito and N. Ghoussoub, Uniqueness of solutions for an elliptic equation modeling MEMS, Methods Appl. Anal. 15 (2008), 341-354. • P. Esposito, N. Ghoussoub and Y. Guo, Compactness along the first branch of unstable solutions for an elliptic problem with a singular nonlinearity, Comm. Pure Appl. Math. 60 (2007), 1731-1768. • G. Flores, G. Mercado, J.A. Pelesko and N. Smyth, Analysis of the dynamics and touchdown in a model of electrostatic MEMS, SIAM J. Appl. Math. 67 (2007), 434-446. • V.A. Galaktionov and S.I. Pohozaev, On similarity solutions and blow-up spectra for a semilinear wave equation, Quart. Appl. Math. 61 (2003), 583-600. • P.R. Garabedian, Partial differential equations, John Wiley, New York, 1964. • N. Ghoussoub and Y. Guo, On the partial differential equations of electrostatic MEMS devices: Stationary case, SIAM J. Math. Anal. 38 (2007), 1423-1449. • –––, On the partial differential equations of electrostatic MEMS devices II: Dynamic case, Nonlinear Diff. Eqns. Appl. 15 (2008), 115-145. • F.K. N'Gohisse and Th.K. Boni, Quenching time of some nonlinear wave equations, Arch. Mat. 45 (2009), 115-124. • Y. Guo, On the partial differential equations of electrostatic MEMS devices III: Refined touchdown behavior, J. Diff. Equations 244 (2008), 2277-2309. • –––, Global solutions of singular parabolic equations arising from electrostatic MEMS, J. Diff. Equations 245 (2008), 809-844. • J.-S. Guo, B. Hu and C.-J. Wang, A non-local quenching problem arising in micro-electro mechanical systems, Quart. Appl. Math. 67 (2009), 725-734. • J.-S. Guo and N.I. Kavallaris, On a non-local parabolic problem arising in electrostatic MEMS control, preprint. • Y. Guo, Z. Pan and M.J. Ward, Touchdown and pull-in voltage behavior of a MEMS device with varying dielectric properties, SIAM J. Appl. Math 166 (2006), 309-338. • K.M. Hui, Existence and dynamic properties of a parabolic non-local MEMS equation, Nonlinear Anal. TMA 74 (2011), 298-316. • N.I. Kavallaris, T. Miyasita and T. Suzuki, Touchdown and related problems in electrostatic MEMS device equation, Nonlinear Diff. Eqns. Appl. 15 (2008), 363-385. • H.A. Levine, Quenching, nonquenching, and beyond quenching for solution of some parabolic equations, Ann. Mat. Pura Appl. 155 (1989), 243-260. • H.A. Levine and M.W. Smiley, Abstract wave equations with a singular nonlinear forcing term, J. Math. Anal. Appl. 103 (1984), 409-427. • J.A. Pelesko, Mathematical modeling of electrostatic MEMS with Taylored dielectric properties, SIAM J. Appl. Math. 62 (2002), 888-908. • J.A. Pelesko and D.H. Bernstein, Modeling MEMS and NEMS, Chapman Hall and CRC Press, Boca Raton, 2002. • J.A. Pelesko and A.A. Triolo, Non-local problems in MEMS device control, J. Engineering Math. 41 (2001), 345-366. • R.A. Smith, On a hyperbolic quenching problem in several dimensions, SIAM J. Math. Anal. 20 (1989), 1081-1094.
# Studies of Euler diagrams/decompose ## Shape of the Euler diagram the four cells of bundle AB   (outside of sets C and D) the two cells of bundle C   (inside of set B, outside of A and D) the two cells of bundle D   (outside of sets A, B and C) disjunction of the three conjunctions above These images represent the 4-ary Boolean function ${\displaystyle (\neg C\land \neg D)\lor (\neg A\land B\land \neg D)\lor (\neg A\land \neg B\land \neg C)}$. It is true in 6 of 16 cases, and is part of BEC 127 (rows (22,2) and (23,0)). The images below show the truth table with false places grayed out, the Euler diagram, and the corresponding graph. The images on the right show the decomposition into three bundles, i.e. parts of the Euler diagram that are connected by crossing circles. For this particular Boolean function, the colored squares can be seen as true. The colored truth tables on the left represent the cells of the bundle, i.e. all cells that touch one of its circles. The gray/white truth tables on the right are always minterms, and represent the position of the bundle in relation to every other set, i.e. inside or outside of it. ## Functions with this Euler diagram shape Light gray areas of the Euler diagram are empty, i.e. the corresponding fields in the truth table are false. The Euler diagram could not be drawn without these gap cells, because each border can belong to only one set. A border between always corresponds to a change of one bit. Truth table fields in dark gray do not have corresponding fields in the Euler diagram. (They are the same in each file.) There seem to be only four Boolean functions with this Euler diagram shape. Many cells could not be empty, without changing the shape of the Euler diagram. E.g. the intersection of A and B can not be empty, because then the two circles would be separate. And the left crescent can not be empty, because then A would be within B. And no set can be empty, because that would make it's complement the universe, which should be displayed as the border of the Euler diagram. murife The intersection ¬A ∧ B ∧ ¬C is marked as empty. Thus the truth table of bundle AB is TTFT and that of the bundle C is FT.     BEC 41, row (5,0) tefabi Both cells mentioned above are empty.     BEC 205, rows (16,3) and (17,1)
# Chapter in ToC without page number I'm working in sharelatex and a requirement for my thesis is a summary in the ToC which is an unnumbered chapter and without page number. Now, I fixed the numberless chapter, but there is no documentation of latex of excluding the page number for a single chapter in the ToC. Is this possible? To avoid the page number in the ToC you can add the ToC entry manually with \addtocontents: \documentclass{book} \newcommand\fakechapter[2][]{% \ifx&#1&% \fakechapter[#2]{#2}% \else \chapter*{#2}% \chaptermark{#1}% \fi } \begin{document} \tableofcontents \fakechapter{Abc} \chapter{Def} \fakechapter[Something]{Ghi} \end{document} If you are using the memoir class (which caters for the book, report, and article classes) you can do this: \documentclass[...]{memoir} \begin{document} \frontmatter \tableofcontents* \mainmatter \chapter*{Summary} % No ToC entry \addtocontents{toc}{\cftpagenumbersoff{chapter}} % no chapter page numbers \addcontentsline{toc}{chapter}{Summary} % put Summary chapter title in ToC \addtocontents{toc}{\cftpagenumberson}{chapter}} % chapter page numbers printed Summary of the thesis. \chapter{One}
Table of Content Volume 32 Issue 2 30 March 2008 Research Articles BROAD-LEAVED KOREAN PINE (PINUS KORAIENSIS) MIXED FOREST PLOT IN CHANGBAISHAN (CBS) OF CHINA: COMMUNITY COMPOSITION AND STRUCTURE HAO Zhan-Qing, LI Bu-Hang, ZHANG Jian, WANG Xu-Gao, YE Ji, YAO Xiao-Lin Chin J Plan Ecolo. 2008, 32 (2):  238-250.  doi:10.3773/j.issn.1005-264x.2008.02.002 Abstract ( 3442 )   PDF (2371KB) ( 1778 ) Related Articles | Metrics Aims To ultimately understand of the mechanisms of species coexistence, the Chinese Academy of Sciences, in collaboration with the Center for Tropical Forest Science (CTFS) of the Smithsonian Tropical Research Institute, has recently initiated an ambitious large-scale, long-term forest dynamics and diversity plots network. Following the protocols of the CTFS forest dynamism plots, the China Network has been designed to establish four 20-25 hm2 plots along the latitudinal gradient from North to South China. The Changbaishan plot (the CBS plot), the northern most plot of the China Network, is established for the benefit of understanding temperate forest ecosystem. This paper aims to address the community composition and structure of the CBS plot, serving as baseline information accessible for a wide range of future studies. Methods Following the field protocol of the 50 hm2 plot in Barro Colorado Island in Panama, a 25 hm2 broad-leaved Korean pine mixed forest permanent plot of 500 m × 500 m was established in the summer of 2004 in Changbaishan. All free-standing individuals with DBH (diameter at breast height) ≥1 cm were tagged, mapped and identified to species. Important findings There are 38 902 genotype individuals (59 121 individuals with branch), belonging to 52 species, 32 genera and 18 families. Floristic characteristics of the community belong to Changbaishan plant flora, including some tertiary relic species and subtropical species. Three species comprise 52.5% of all individuals, and 14 species comprise 95.2% of all individuals, while other 38 species comprise fewer than 5% of all individuals. The statistics of species abundance, basal area, mean DBH, and important value showed that there are obviously dominant species in the community. The size-class structure of main species in the overstory layer showed nearly normal or bimodal distribution, while the species in the midstory and understory layers showed invert “J" distribution, even “L" distribution. Spatial distribution patterns of the main species, Pinus koraiensis, Tilia amurensis, Quercus mongolica, Fraxinus mandshurica, Acer mono and Ulmus japonica, changed differently with size-class and scales. Meanwhile, spatial patterns of some other species also showed spatial heterogeneity. NATURAL SECONDARY POPLAR-BIRCH FOREST IN CHANGBAI MOUNTAIN: SPECIES COMPOSITION AND COMMUNITY STRUCTURE HAO Zhan-Qing, ZHANG Jian, LI Bu-Hang, YE Ji, WANG Xu-Gao, YAO Xiao-Lin Chin J Plan Ecolo. 2008, 32 (2):  251-261.  doi:10.3773/j.issn.1005-264x.2008.02.003 Abstract ( 2782 )   PDF (2266KB) ( 1145 ) Related Articles | Metrics Aims Natural secondary poplar-birch forest is one of main secondary forests in Changbai Mountain, which is formed by the restoration after clear-cutting or fire. And it is an important stage in the secondary succession of broad-leaved Korean pine (Pinus koraiensis) mixed forest. A 5 hm2 natural secondary poplar-birch forest plot was established in Changbai Mountain Natural Reserve in 2005 in order to gain insights into the processes driving regeneration and succession of the forest and its climax community—broad-leaved Korean pine mixed forest. This paper aims were to give some basic information on the forest, including species composition and community structure. Methods In the plot, all free-standing trees at least 1 cm in diameter at breast height (DBH,1.3 m above ground) were mapped, tagged, and identified to species, and their geographic coordinates were recorded following a standard field protocol. A total survey station was used to determine the elevations of edge points. Important findings There are 16 565 genotype individuals (20 101 individuals with branch), belonging to 44 species, 28 genera, and 16 families. Floristic characteristics of the community are very prominent. At the generic levels, North temperate areal-type is the main part of genus areal-types. The statistics of species abundance, basal area, mean DBH, and important value showed that pioneer species Betula platyphylla and Populus davidiana are obviously dominant species. However, the regeneration of the two species is very poor by the analysis of size-class distributions, which indicated that they will quit the stage along with the succession of the community. Some tree species, such as Korean pine and Tiliaam urensis, which are main species in broad-leaved Korean pine mixed forest, account for a large proportion in the understory layer. Therefore, they will replace the pioneer species and dominate the overstory layer. Spatial distribution patterns of species were analyzed. For pioneer species and those species that are main species in broad-leaved Korean pine mixed forest, there is no obviously clumped pattern captured. For the midstory and understory species, there are obviously clumped patterns of many species by the observation. There is no obvious correlation between the clumped patterns and the topography. COMMUNITY COMPOSITION AND STRUCTURE OF GUTIANSHAN FOREST DYNAMIC PLOT IN A MID-SUBTROPICAL EVERGREEN BROAD-LEAVED FOREST, EAST CHINA ZHU Yan, ZHAO Gu-Feng, ZHANG Li-Wen, SHEN Guo-Chun, MI Xiang-Cheng, REN Hai-Bao, YU Ming-Jian, CHEN Jian-Hua, CHEN Sheng-Wen, FANG Teng, MA Ke-Ping Chin J Plan Ecolo. 2008, 32 (2):  262-273.  doi:10.3773/j.issn.1005-264x.2008.02.004 Abstract ( 3071 )   PDF (2574KB) ( 1542 ) Related Articles | Metrics Aims Mainly distributed in China, subtropical evergreen broad-leaved forest is one of important vegetation types in the world. Here we report preliminary results of floristic characteristics, community composition, vertical structure, size class structure, and spatial structure of Gutianshan(GTS) forest plot.  Methods We established a 24-hm2 (600 m×400 m) forest permanent plot from November, 2004 to September, 2005 in mid- subtropical evergreen broad-leaved forest of Gutianshan Nature Reserve, China. Following the standard census procedure of the Centre for Tropical Forest Science (CTFS), all free-standing trees ≥1 cm in diameter at breast height (DBH) in the forest were mapped, tagged and identified to species. We employed software R 2.6.0 to analyze our data. Important findings The results of floristic characteristics indicates that the tropical elements are more than temperate elements. At family level, the proportion of the pantropic type is the greatest (28.6%), the number of the tropic elements are more than temperate ones (24/13). At genus level,there are 53 tropic genera and 44 temperate ones. As for community composition, there are 159 species, 103 genera and 49 families, 140 700 individuals in total. The evergreen tree species in community are dominant (i.e. 91 species, total relative dominance is 90 .6%, importance value is 85.6%, accounts for 85.9% of the total abundance). GTS forest plot is typical mid-subtropical evergreen broad-leaved forest, which displays characteristics of both temperate deciduous broad-leaved forest and tropical rain forest. On the one hand, community composition has obvious dominant species, which is similar to temperate deciduous broad-leaved forest. There are 3 mostly dominant species, Castanopsis eyrei, Schima superba and Pinus massoniana. Large numbers of rare species (59 rare species, equal to or less than one tree per hm2) in the community account for 37.1% species richness, which is similar to tropical rain forest. Vertical structure is composed of canopy layer (63 species ), sub-uree layer (70 species), shrub layer (26 species).The structure of DBH size class of all species in the plot generally appears reverse J’ shape, which indicates successful community regeneration. Spatial distribution of several dominant species, from small to adult tree or old tree, shifts from closer aggregation to looser aggregation, and shows different habitat preference. Finally, we compare the large plot approach with conventional sampling method. COMMUNITY STRUCTURE OF A 20 HM2 LOWER SUBTROPICAL EVERGREEN BROADLEAVED FOREST PLOT IN DINGHUSHAN, CHINA YE Wan-Hui, CAO Hong-Lin, HUANG Zhong-Liang, LIAN Ju-Yu, WANG Zhi-Gao, LI Lin, WEI Shi-Guang, WANG Zhang-Ming Chin J Plan Ecolo. 2008, 32 (2):  274-286.  doi:10.3773/j.issn.1005-264x.2008.02.005 Abstract ( 3294 )   PDF (2695KB) ( 1265 ) Related Articles | Metrics Aims Lower subtropical evergreen broadleaved forest in Dinghushan is one of typical vegetations in Southern China. Its vegetation is protected very well. Because of its geographical location, the composition of its flora is transitional between the subtropical and tropical. A 20 hm2 permanent plot of 400 m × 500 m was established in 2005 for long-term monitoring of the biodiversity in t he forest. Methods The plot was established following the field protocol of the 50 hm2 plot in Barro Colorado Island (BCI) in Panama. All free-standing trees with diameter at breast height (DBH) at least one centimeter were mapped and identified to species. Important findings There are 71 617 individuals, belonging to 210 species, 119 genera and 56 families. Its floristic composition is transitional between the sub tropical and tropical. The vertical structure of the forest is clear. There are five layers from the top of the canopy to the ground floor, three tree layers (upper, middle and low), one shrub layer and one herb layer, respectively. Based on important value, Castanopsis chinensis, Schima superba and Engelhardtia roxburghiana are the three most dominant species in the upper layer. There are many shade-tolerant and intermediate light-demanding species, such as Cryptocarya chinensis, Xanthophyllum hainanense , Machilus chinensis in mid-layer. Species in low layer are rich and complex, which composition varies a lot. The species-area curve indicates that there is high diversity in the forest and the number of species is close to BCI. There is high proportion of rare species represented by <20 individuals which account for 52.38% of the total number of species. Among these rare species 45% of them lead to be rare by species characteristics, 20% by the floristic transitional nature of the plot, while the rest by disturbances. Size distribution of all individuals shows an invert J-shape, which indicates that the community is in a stable and normal growth status. Size distributions of the dominant species are classified into four types based on their size-class frequencies, unimodal in the top layer, inverse J-shape in middle layer, close to inverse J-shape in middle and low layer, and L-shape in low and shrub layer. Dominant species in different layers are aggregated by the spatial pattern analysis and the spatial patterns of these species in different layers vary with size-classes. However, spatial patterns of them also show complementary within the same size classes, especially in 10-40 cm DBH. The individuals with DBH> 40 cm are randomly distributed. ESTABLISHMENT OF XISHUANGBANNA TROPICAL FOREST DYNAMICS PLOT: SPECIES COMPOSITIONS AND SPATIAL DISTRIBUTION PATTERNS LAN Guo-Yu, HU Yue-Hua, CAO Min, ZHU Hua, WANG Hong, ZHOU Shi-Shun, DENG Xiao-Bao, CUI Jing-Yu, HUANG Jian-Guo, LIU Lin-Yun, XU Hai-Long, SONG Jun-Ping, HE You-Cai Chin J Plan Ecolo. 2008, 32 (2):  287-298.  doi:10.3773/j.issn.1005-264x.2008.02.006 Abstract ( 3103 )   PDF (2310KB) ( 797 ) Related Articles | Metrics Aims Tropical seasonal rain forest in Xishuangbanna is one of the most species-rich forest ecosystems in China. This area is also one of the biodiversity hotspots for conservation priorities of the world. For the purpose of monitoring long-term dynamics of tree populations, a 20-hm2 plot was established in a dipterocarp forest in Mengla Nature Reserve in 2007 by Xishuangbanna Tropical Botanical Garden, Chinese Academy of Sciences and Xishuangbanna Administration of Nature Reserves, in collaboration with Alberta University, Canada and Tunghai University (Taiwan, China).  Methods The construction technology and field protocol followed those applied in the establishment of the 50 hm2 plot in the tropical forest of Barro Colorado Island in Panama, developed by Center for Tropical Forest Science, Smithsonian Tropical Research Institute in 1980. All free-standing trees with diameter at breast height (DBH) ≥ 1 cm were tagged, mapped, measured (girth) and identified to species in the plot. Spatial distribution patterns of four dominant canopy tree species  (among different tree size classes) and 12 rare species were analyzed by using a point pattern analysis Ripley’s L-function. Important findings A total of 95 834 free-standing individuals with DBH ≥ 1 cm were recorded in the 20 hm2 plot. Of which, 95 498 individuals were identified to species level. This plot included 468 species belonging to 213 genera and 70 families, except for another 336 individuals that could not be identified yet. The flora of plot was mainly composed of species in tropical families. Shorea wantianshuea that dominates the forest canopy was ranked the second in terms of importance value, although it had the largest basal area. Pittosporopsis kerrii, an understory tree species showed the highest abundance (20 918 individuals). The four canopy species had a large number of juveniles and exhibited size structures with reverse-J shape associated with continuously regenerating populations. Young trees (saplings and poles) revealed a clumped spatial distribution, but adults tended to have a random distribution. Most of the 12 rare tree species in the plot also showed aggregated distribution pattern. LONG-TERM STUDIES OF FOREST DYNAMIC IN THE DUKE FOREST, SOUTH EASTERN UNITED ST ATES: A SYNTHESIS (REVIEW) XI Wei-Min, PEET Robert K. Chin J Plan Ecolo. 2008, 32 (2):  299-318.  doi:10.3773/j.issn.1005-264x.2008.02.007 Abstract ( 2820 )   PDF (5490KB) ( 1013 ) Related Articles | Metrics A growing need for long-term condition and trend information across natural and anthropogenic landscapes is promoting interest in long-term permanent plot research. In this review, we introduced the 76-year history of management and research on forest dynamics in the Duke Forest, NC. This forest has been intensively studied since the early 1930s and has become a model system for ecologic al and environmental education and research in the eastern United States. We summarize and assess research in the Duke Forest on forest environment, the current network of long-term permanent vegetation plots, survey protocols, data management procedures, and major research findings from those long-term plot data. We also summarize more broadly the current status of long-term research on the natural dynamics of Piedmont forests of the southeastern United States. Lessons learned from the Duke Forest research site could inform the design of a world-wide, long-term network of research plots for monitoring and assessment of forest dynamics and trends in species composition and biodiversity. THE RELATIONSHIP BETWEEN NDVI AND CLIMATE ELEMENTS FOR 22 YEARS IN DIFFERENT VEGETATION AREAS OF NORTHWEST CHINA GUO Ni, ZHU Yan-Jun, WANG Jie-Min, DENG Chao-Ping Chin J Plan Ecolo. 2008, 32 (2):  319-327.  doi:10.3773/j.issn.1005-264x.2008.02.008 Abstract ( 3155 )   PDF (2292KB) ( 1323 ) Related Articles | Metrics Aims We sought to understand the impacts of climate change on vegetation in Northwest China and the relationship between normalized difference vegetation index (NDVI) and climate elements. Methods Correlation analyses were done using the GIMMS NDVI data and monthly mean temperature and precipitation data from January 1982 to December 2003. We selected different regions in Northwest China, representing major types of vegetation, such as forest, grassland, oasis, and rain-fed cropland, for detailed study. Important findings We found strong correlations between NDVI and temperature/precipitation, except for the Gobi and other desert areas. Correlation coefficients of NDVI and temperature are higher than them of NDVI and precipitation in almost all regions, particularly for the Hexi Corridor and most of the Xinjiang area. During the vegetation growth period, temperature has greater effect on the various types of vegetation than precipitation. The forests in the higher latitude of Xinjiang area  are most sensitive to temperature. This sensitivity reduces in sequence from forests to oases, grasslands and unirrigated croplands. Grasslands are most sensitive to precipitation. The sensitivity to precipitation decreases from grasslands to forests, unirrigated croplands, and oases. In summer, the NDVI of forest decreased during the last 22 years, especially forest in the eastern portion of the northwest. This was related to decreases of precipitation and increases of temperature in these areas. The NDVI in most grassland increased. The trends were significant for high cold meadow and halophytic meadow. Climate warming is the main reason for grass growth speeding up. For oases, the NDVI increases were the most significant. The trends were the highest in Xinjiang oasis. Climate warming was one of the factors driving increases in NDVI. The impact of human activities, such as oasis expanding, crop structure change and crop varieties on the NDVI variation cannot be ignored. The NDVI interannual change was high and varied among the unirrigated croplands. NDVI had a strong positive correlation with precipitation and a negative correlation with temperature. The temperature increase and the precipitation decrease caused the NDVI decrease in these areas. DETECTING DESERTIFICATION PROCESSES USING TM AND ETM+ DATA, NORTH OF ISFAHAN, IRAN Khajeddin SJ, Akbari M, Karimzadeh HR, Eghbal MK Chin J Plan Ecolo. 2008, 32 (2):  328-335.  doi:10.3773/j.issn.1005-264x.2008.02.009 Abstract ( 2114 )   PDF (1693KB) ( 950 ) Related Articles | Metrics Aims Desertification results in ecological and biological diminution of the earth, and can happen naturally or cause by anthropogenic activities. This process especially affects arid and semi-arid regions, such as the Isfahan region, where the spread of desertification is reaching critical proportions. The aim of this study is to use remotely sensed data to review the trend of desertification in the northern of Isfahan, Iran. Methods Multi-temporal images were employed to evaluate the trend of desertification, specifically the TM and ETM+ data of September, 1990 and September, 2001. Geometric and radiometric corrections were applied to each image prior to image processing and supervised classification, and vegetation indices were applied to produce a land use map of each image in nine classes. The land use classification s in the two map images were compared and changes between land use classes were detected over the 11 year period using a fuzzy and post-classification technique. Important findings The maps and their comparison with false color composite images showed the differences efficiently. With the fuzzy and post-classification method the land use changes were sited on the map. Fuzzy confirmed 53% changed area and 47% unchanged areas in the study region. The results verify the desertification expansion in the study areas. Because of poor land management, agricultural lands converted to desert and abandoned areas, and some marginal pasture lands had to be changed to agricultural land which are desertification spreading according to United Nations Conference on Desertification (UNCOD). Also farmland and pastures have been converted to urban and industrial areas, and the rangelands have been spoiled due to opencast mine excavations. With the mine margins eroding as well as their debris accumulating on the pasture lands, desertification has become worse. Three areas of less-elevated mountains have remained unchanged. This study confirmed that the anthropogenic activities accelerated the desertification process and severely endangered the remaining areas. SEASONAL ASPECT STAGES OF PLANT COMMUNITIES AND ITS SPATIAL-TEMPORAL VARIATION IN TEMPERATE EASTERN CHINA CHEN Xiao-Qiu, HAN Jian-Wei Chin J Plan Ecolo. 2008, 32 (2):  336-346.  doi:10.3773/j.issn.1005-264x.2008.02.010 Abstract ( 2920 )   PDF (2318KB) ( 1068 ) Related Articles | Metrics Aims Our objectives were to determine the seasonal aspect stages of plant communities at seven sites in the temperate area of eastern China and analyze spatial-temporal variation of the onset dates of seasonal aspect stages and its relation to climatic factors. Methods We developed a simulating method of phenological cumulative frequency to determine the seasonal aspect stages of plant communities. Basic ideas of the method were to establish a mixed data set composed of the occurrence dates of all phenophases of observed deciduous trees and shrubs for each site and each year , and then calculate the frequency and cumulative frequency of the occurrence dates of phenophases in every five-day period throughout each year and for each site. We used a logistic curve to simulate the phenological cumulative frequency and computed the corresponding dates of maximum changing rate of the curvature as the onset dates of seasonal aspect stages. Important findings The annual mean dates of greenup and active photosynthesis onsets were delayed with increased latitude, whereas the annual mean dates of senescence and dormancy onsets advanced with increased latitude. Other than the onset dates, the annual mean lengths of greenup, active photosynthesis and senescence periods did not change obviously with latitude but the annual mean length of dormancy period was apparently prolonged with increased latitude. From 1982 to 1996, the regional mean onset date and duration of greenup period significantly advanced at a rate of 0.6 d&#8226;a-1 and lengthened at a rate of 0.7 d&#8226;a-1; the mean onset date and duration of active photosunthesis period were insignificantly delayed and shortened; the mean onset date and duration of senescence period slightly advanced and lengthened; and the mean onset date and duration of dormancy period slightly advanced but significantly shortened at a rate of 0.9 d&#8226;a-1. The onset date of seasonal aspect stages generally correlates better with temperature than with precipitation. The regional mean greenup onset date shows a significantly negative correlation with mean temperature in the current month in a changing rate of 4 .3 d&#8226;℃-1, whereas the regional mean active photosynthesis onset date presents a significantly negative correlation with mean temperature during the former second month to the current month in a changing rate of 4.4 d&#8226;℃-1. However, the regional mean senescence and dormancy onset dates do not correlate significantly with mean temperatures. TESTING THE NEUTRAL THEORY OF PLANT COMMUNITIES IN SUBALPINE MEADOW DU Xiao-Guang, ZHOU Shu-Rong Chin J Plan Ecolo. 2008, 32 (2):  347-354.  doi:10.3773/j.issn.1005-264x.2008.02.011 Abstract ( 2503 )   PDF (1995KB) ( 1030 ) Related Articles | Metrics Aims We tested the neutral theory of biodiversity on subalpine meadows of the eastern Tibetan Plateau that exhibited comparatively complicated species composition. Our objective was to explain the species abundance distribution pattern and the underlying mechanism of biodiversity.  Methods We fit the neutral model to a randomly sampled data set obtained in three different habitats (north-facing slope, level field and south-facing slope) and used three methods to test the fitness of the neutral model to the real community: confidence interval, goodness of fit and diversity index. Important findings We found no significant difference (p>0.05) between the neutral theory predictions and observed species abundance distributions in the three habitats according to the goodness of fit method. The observed data nearly completely fall into 95% confidence intervals of the neutral model predictions (only one out of 63 species in level field communities and 2 out of 75 species in the north-facing slope communities deviate from the 95% confidence interval). There is no significant difference between the neutral theory predictions and observed species abundance patterns, in which the fit of richness predictions is the best (0.49<p<0.56) and the fitness of evenness predictions is relatively poor. However, for the three different habitats, the fitness of these three indices in north-facing slope communities is perfect and the p-values vary between 0.49 and 0.70, but the fitness in level field communities is poorer (p-value of the Simpson diversity index is less than 0.1). Although the test results of the neutral theory by three different test methods and habitats are somewhat different, we conclude that the neutral model can predict species abundance distribution patterns in the three habitats of subalpine meadow. STUDY ON PHENOTYPIC DIVERSITY OF CONE AND SEED IN NATURAL POPULATIONS OF PICEA CRASSIFOLIA IN QILIAN MOUNTAIN, CHINA WANG Ya-Li, LI Yi Chin J Plan Ecolo. 2008, 32 (2):  355-362.  doi:10.3773/j.issn.1005-264x.2008.02.012 Abstract ( 2508 )   PDF (1818KB) ( 1081 ) Related Articles | Metrics Aims Our objective was to determine 1) the phenotypic variation of cone and seed in natural populations and 2) the relationship between phenotypic variation of natural population and different distribution areas in Picea crassifolia. Methods Field investigation and analysis of the natural distribution of P. crassifolia in Qilian Mountain led to our selection of four cone characters and four seed traits in 10 trees from each of 10 populations. We examined morphological diversity among/within populations based on analysis of eight phenotypic traits. Variance analysis, multi-comparison, correlation analysis and hierarchical cluster analysis were used to analyze experimental results. Important finding Analysis of variance for all traits showed significant differences among/within populations except for cone dry weight and cone length/cone width. The mean phenotypic differentiation coefficient (Vst) among populations was 27.18%, compared to 72.82% within populations. In different individuals within populations, the CV of cone length, cone width, cone dry weight, cone length/cone width, seed length, seed width, seed length/seed width, 1 000 seeds weight was 10.08%, 5.80%, 19.29%, 9.66%, 8.38%, 15.34%, 6.52% and 13.94%, respectively . Most of the cone and seed traits were positively correlated. The cone dry weight, seed length, 1 000 seeds weight, cone length, cone width were thought to be the most important cone and seed traits that were easy to measured in P. crassifolia. The spatial variation of traits of natural populations was related most strongly to longitude. According to UPGMA cluster analysis, the 10 populations of P. crassifolia could be divided into four groups. This study indicates that there is rich phenotypic variation of cone and seed in natural populations of P. crassifolia in Qilian Mountain and thereby provides theoretical references and basic data for genetic resources conservation, utilization and improvement in P. crassifolia. STUDIES ON THE LEAF SIZE-TWIG SIZE SPECTRUM OF SUBTROPICAL EVERGREEN BOARD-LEAVED WOODY SPECIES LIU Zhi-Guo, CAI Yong-Li, LI Kai Chin J Plan Ecolo. 2008, 32 (2):  363-369.  doi:10.3773/j.issn.1005-264x.2008.02.013 Abstract ( 2718 )   PDF (1422KB) ( 1122 ) Related Articles | Metrics Aims An important aim of plant ecology is to identify and quantify key dimensions of ecological variation among species and to understand the basis for them. The leaf size-twig size spectrum is an important dimension that is under development. Our aims were to determine if there is an invariant allometric scaling relationship between leaf size and twig size in subtropical evergreen broad-leaved woody species and to determine what indicates this relationship. Methods We investigated leaf and stem traits—including leaf and stem mass, individual leaf area, total leaf area, stem cross-sectional area, leaf number and stem length—at the twig level for 68 evergreen broad-leaved species on Meihuashan Mountain in the subtropical zone of East China. We determined the scaling relationships between leaf traits and stem traits at the twig level.  Important findings Twig cross-sectional area has an invariant allometric scaling relationships with leaf mass (SMA slope 1.29), total leaf area (1.23) and individual leaf area (1.18), all with common slopes being 1-1.5. Leaf mass is isometrically related to stem mass and leaf area. This suggests that there would be different metabolic ways between animal body and plant. Species with larger leaves deployed a greater total leaf area distal to the final branching point than smaller leaved species, with this leaf surface made up of fewer leaves per twig, even though the twigs were longer. This might result from the humid climate and weak light in subtropical evergreen broad-leaved forest of this region. VARIATION OF LEAF STRUCTURE OF TWO DOMINANT SPECIES IN ALPINE GRASSLAND AND THE RELATIONSHIP BETWEEN LEAF STRUCTURE AND ECOLOGICAL FACTORS HU Jian-Ying, GUO Ke, DONG Ming Chin J Plan Ecolo. 2008, 32 (2):  370-378.  doi:10.3773/j.issn.1005-264x.2008.02.014 Abstract ( 2578 )   PDF (1733KB) ( 856 ) Related Articles | Metrics Aims A comprehensive survey on anatomical features of the leaves of Carex moorcroftii and Stipa purpurea, two dominant species in Tibetan Plateau, has been conducted. Quantitative analysis on the relation between ecological factors and leaf structure variation was carried out in order to find out how they are acclimated to environments and whether these two species with different reproductive behavior have different adaptation mechanisms. Methods A transect was set along the Qinghai-Tibet Road from Xidatan to Yangbajing, with great change in ecological features: altitude from 4 586 to 4 901 m, growing season precipitation from 384 to 202 mm, growing season monthly average temperature from 5.1 to 1.4 ℃, growing season monthly average humidity from 65% to 54%, growing season evaporation from 1 242 to 798 mm, and growing season monthly average wind speed from 2.4 to 4.0 m&#8226;s-1. We collected leaf samples along the transect, embedded them in paraffin, stained embedded sections by astrablue-basic fuchsin, and measured them. Variation coefficient, multi-comparison, correlation analysis and regression analysis were used to analyze structural diversity and the relation between diversity and ecological factors. Important findings The leaf of S. purpurea curls inward, with lower epidermis outside, and stomata and epidermal hairs appear only on the upper epidermis inside. The leaf of C. moorcroftii usually unfolds like “V” in cross section with well- developed aerenchyma, and stomata and epidermal hairs appear only on the lower epidermis. The leaf structure of both species differs remarkably among populations. Multiple linear step by step regressions revealed for S. purpurea that there are significant linkages between soil available K and the size of mesophyll cells, growing season monthly average cloud coverage and lower epidermis thickness, growing season monthly average cloud coverage and phloem area, growing season monthly average humidity and single vessel semi-diameter, and growing season monthly average humidity and average vessel transverse section area. For C. moorcroftii, there are significant linkage between growing season monthly average lowest temperature and upper epidermis thickness, continentality and thickness of bulliform cells, soil pH value and size of upper epidermis cells, soil available phosphor us and vessel numbers, soil available phosphorus and phloem area, and soil avail able K and leaf aerenchyma area. Comparison of variance coefficients showed that C. moorcroftii had greater integrative variability than S. purpurea . ADAPTIVE SIGNIFICANCE OF SAUSSUREA PARVIFLORA’S SEXUAL ORGANS, QINGHAI-TIBETAN PLATEAU, CHINA WANG Yi-Feng, GAO Hong-Yan, SHI Hai-Yan, WANG Jian-Hong, DU Guo-Zhen Chin J Plan Ecolo. 2008, 32 (2):  379-384.  doi:10.3773/j.issn.1005-264x.2008.02.015 Abstract ( 2541 )   PDF (1086KB) ( 814 ) Related Articles | Metrics Aims Saussurea parviflora is the dominant species on the Qinghai-Tibetan Plateau. This study addresses the following: 1) correlations between S. parviflora’s sexual organs and altitude, 2) correlations among S. parviflora’s sexual organs, and 3) reasons for S. parviflora’s adaptation to this stressful environment. Methods During the flowering phase in August-September 2005, we collected 11 populations of S. parviflora from different altitudes. We harvested 20 individual s from each population and randomly selected 10 capitula from each individual. We randomly selected 20 flowerlets from different capitula from the same altitude and fixed them in FAA (18∶2∶2, alcohol∶formaldehyde∶glacial acetic acid). We measured the length of sexual organs in 20 fully-opened flowerlets and counted pollen in 10 mature flowerlets with undehisced anthers. At the fruiting stage in October 2005 and 2006, we harvested 10 individuals of each population and randomly selected 200 capitula to count the maturation rate. All experimental data were analyzed with the statistical analysis software SPSS11.5.  Important findings There was a strong positive correlation among filament length, anther length and altitude (p<0.01) and a strong negative correlation between pollen number and altitude (p<0.01). Moreover, there were strong positive correlations among 1) style length, length of style ramification and altitude (p< 0.01 ) and 2) style length, length of style ramification, filament length and anther number (p<0.05), and between maturation rate and altitude (p<0.05). Therefore, variation of intraspecific sexual organs under specific environmental condition made S. parviflora adapted to the Qinghai-Tibetan Plateau. With reduced pollen number and insect diversity, abundance and activity with increased altitude, the style ramification lengthened and maturation rate improved. This enhanced the sensitivity to pollinators, ensuring that the decreased pollen was sufficiently spread by them, resulting in increased success of reproduction and dominance in the stressful environment. MORPHOGENESIS AND CHANGES IN ENDOGENOUS PHYTOHORMONES IN REAUMURIA TRIGYNA MAXIM DURING FISSURATE GROWTH YANG Rui-Li, WANG Ying-Chun, CHANG Yan-Xu Chin J Plan Ecolo. 2008, 32 (2):  385-391.  doi:10.3773/j.issn.1005-264x.2008.02.016 Abstract ( 2548 )   PDF (1697KB) ( 749 ) Related Articles | Metrics Aims Many xerophilous plants in the West Erdos region of Inner Mongolia share the same vegetative reproduction, fissurate growth, which is an important adaptation to drought environments. There are few correlated studies of fissurate growth. Methods We studied the morphogenesis of fissurate growth of Reaumuria trigyna by sequential slicing and determined changes in the content of ABA, IAA, GA3 and ZR using ELISA (Enzyme-linked immunosorbent assay). Important findings Fissurate growth started from the base of stem. During certain phases of growth, the cambium layer of this part was asymmetric and the vessels of secondary xylem became smaller and fewer, while the amount of xylem fiber increased. The constriction formed here. Then the cells gradually disintegrated and the constriction became deeper. One constriction connected with the next one when they extended to the center of the fissurate part. The entire vascular bundle split into many single vascular bundles, and they separated from each other. There was abnormal structure in the fissurate part of R. trigyna, as xylem was divided into several rings by several layers of flat living cells. This could play a role in fissurate growth. IAA and ZT were more concentrated in the fissurate part of transitional plants than in roots, which probably regulated and promoted the growth and splitting of cells in the fissurate part. EFFECTS OF NITROGEN AVAILABILITY AND COMPETITION ON LEAF CHARACTERISTICS OF SPARTINA ALTERNIFLORA AND PHRAGMITES AUSTRALIS ZHAO Cong-Jiao, DENG Zi-Fa, ZHOU Chang-Fang, GUAN Bao-Hua, AN Shu-Qing, CHEN Lin, LU Xia-Mei Chin J Plan Ecolo. 2008, 32 (2):  392-401.  doi:10.3773/j.issn.1005-264x.2008.02.017 Abstract ( 2563 )   PDF (2132KB) ( 956 ) Related Articles | Metrics Aims Spartina alterniflora, originating from North America, has become an invasive species in Europe and China. Meanwhile, Phragmites australis, a species experiencing die-back’ in Europe, has invaded coastal ecosystems in North America. Each species is invading the other’s native habitat. We studied changes of leaf characters for the two species under different nitrogen and planting densities in the greenhouse to 1) compare the relative competitiveness and invasive capacity of the two species and 2) reveal potential mechanisms that determine successful invasion in different regions. Methods We grew artificial populations of Spartina alterniflora(S) and Phragmites australis (P) at three different densities in monoculture (S, SS, SSS and P, PP, PPP) and mixed-culture (SP, SPP and PSS), and under three levels of nitrogen (0, 60 and 120 mg·kg-1). Plants were harvested after 15 weeks, and their leaf characteristics, including area, length, width, thickness and number were measured.  Important findings Nitrogen addition increased leaf area in both species whether in monoculture or mixed-culture (p<0.05), but the change in leaf area of P. australis in mixed-culture decreased with high nitrogen level, which may be due to greater interspecific competition from S. alterniflora. In monoculture, the effects of nitrogen addition on leaf number were greater than on the other leaf traits (p<0.01), while the effects on leaf number (S. alterniflora) or leaf width (P. australis) were greatest (p<0.05) in mixed-culture. Plant densities decreased leaf area of the two species in all treatments (p<0.05). In monoculture, the effects of plant densities on leaf number were greatest (p<0.05). However, in mixed-culture, P. australis mainly reduced leaf width and leaf number of S. alterniflora (p<0.05), while S. alterniflora reduced all the parameters of P. australis (p<0.05). The intensity of competition which S. alterniflora imposed on P. australis was greater than the reverse with low and high nitrogen levels, but this outcome was reversed with medium nitrogen level. At high nitrogen levels, S. alterniflora dominated interspecific competition, with its increased leaf area restraining the leaf growth and reducing the leaf area of P. australis; this may be a mechanism for the successful invasion of S. alterniflora into P. australis populations. RELATIONSHIP BETWEEN N AND P CONTENTS IN AQUATIC MACROPHYTES, WATER AND SEDIMENT IN TAIHU LAKE, CHINA LEI Ze-Xiang, XU De-Lan, XIE Yi-Fa, LIU Zheng-Wen Chin J Plan Ecolo. 2008, 32 (2):  402-407.  doi:10.3773/j.issn.1005-264x.2008.02.018 Abstract ( 2591 )   PDF (1295KB) ( 993 ) Related Articles | Metrics Aims Our objectives were to 1) examine relationships between N and P contents in aquatic macrophytes, water and sediment in Taihu Lake, a eutrophic ecosystem, and 2) explore factors controlling N and P contents in plants in relation to plant characters, such as growth status and developmental stage.  Methods We selected 32 sampling sites in the emerging-plants-free water zone of Taihu Lake and collected plant samples twice with the quadrat method, sampled sediment with column-shaped samplers at different depths, and measured relevant parameters. Concentrations of total phosphorus and total nitrogen of sediment samples with and without macrophytes were measured with the Kjeldahl method and ICP-AES atomic absorption spectrometry, respectively, and those in tissues of aquatic plants were measured by Kjeldahl automatic N analyzer and molybdate-ascorbic colorimetric determination, respectively.  Important findings The N and P contents of tissues of submerged macrophytes were higher than floating-leaved macrophytes, and the nutrient contents were higher in May than in September. Significant correlations were found between the concentrations of N and P in plants and in water, but not in sediments. The N and P contents in plants were highly dependent on plant growth status and developmental stage. Our study provides the insight for the restoration of the eutrophic Taihu Lake ecosystem. EFFECTS OF HERBICIDE BENSULFURON-METHYL ON GAMETOPHYTE DEVELOPMENT AND SEX ORGAN DIFFERENTIATION IN CERATOPTERIS PTERIDOIDES TAO Ling, YIN Li-Yan, LI Wei Chin J Plan Ecolo. 2008, 32 (2):  408-412.  doi:10.3773/j.issn.1005-264x.2008.02.019 Abstract ( 2357 )   PDF (1127KB) ( 855 ) Related Articles | Metrics Aims Herbicides applied to rice fields may contaminate nearby aquatic systems through a variety of mechanisms, including drift, surface runoff and leaching, and may have adverse effects on non-target aquatic plants that play a major role in the aquatic ecosystem. The second category of the key protected wild plant, Ceratopteris pteridoides, is frequently observed in paddy fields and adjacent lakes and ponds. Its sexual reproduction phase coincides with herbicide application; therefore, its gametophytes are likely exposed to rice herbicides. The objective of this study is to assess the ecological hazard of herbicide contamination on this aquatic plant. Methods We cultured spores of C. pteridoides in aqueous solutions of different bensulfuron-methyl concentrations and observed gametophyte growth and sex organ differentiation. Important findings Bensulfuron-methyl had no effect on spore germination of C. pteridoides. However, it inhibited gametophyte growth with the EC50 of 0.086μg&#8226;L-1, which was below the reported environment concentration. Bensulfuron-methyl reduced the ratio of hermaphrodite gametophyte and delayed archegonium differentiation of C. pteridoides with the in crease in concentration. When bensulfuron-methyl concentration was 10μg&#8226;L-1, gametophytes ceased growth and did not form archegonia. Therefore, bensulfuron-methyl inhibits gametophyte growth and sex organ differentiation of C. pteridoides at low concentration and may pose a risk to sexual reproduction of C. pteridoides in the field. PHOTOSYNTHETIC RATES AND PARTITIONING OF ABSORBED LIGHT ENERGY IN LEAVES OF SUBTROPICAL BROAD-LEAF TREES UNDER MODERATELY HIGH-TEMPERATURE ZHAO Ping, SUN Gu-Chou, ZENG Xiao-Ping Chin J Plan Ecolo. 2008, 32 (2):  413-423.  doi:10.3773/j.issn.1005-264x.2008.02.020 Abstract ( 2571 )   PDF (1986KB) ( 1062 ) Related Articles | Metrics Aims This study aimed at better understanding the physiological mechanisms involved in the moderately high temperature treatment, particularly those relating to partitioning of absorbed light energy in some dominant tree species of low subtropical broad-leaf forest in South China, which would affect vegetation succession. Methods Two-year old saplings of three tree species, Schima superba , Castanopsis hystrix and Cryptocarya concinna, which represent different successional stages in subtropical broad-leaf forest were treated with moderately high temperature (38 ℃ ). Their photosynthetic rate and chlorophyll fluorescence parameters were measured using a photosynthesis measurement system (Li-COR 6400 and leaf chamber fluorometer) in order to evaluate the effects of moderately high temperature on photosynthesis and partitioning of absorbed light energy under subsequent irradiance. Important findings Exposure to moderately high temperature caused decrease of photosynthetic capacity of all experimented tree species under the subsequent irradiance, and such decrease was more obvious in sun plant, S. superba and mesophytic plant, C. hystrix than in shade-adapted plant, C. concinna. The fraction of energy consumed by photochemical reaction decreased in the exposed leaves of S. superba, in comparison with those in control at 25 ℃, and a similar response was al so found in C. hystrix and C. concinna. The results showed that moderately high temperature could restrict the fraction of absorbed energy utilized in photochemical reaction in leaves treated with moderately high temperature under the subsequent irradiance. The portion of total absorbed light energy that was excessive and the fraction of energy absorbed by the inactive PSⅡ also increased in the exposed leaves at 38 ℃ irrespective of species difference, and the increments were more remarkable in C. concinna than in C. hystrix and S. superba. Different responses to moderately high temperature were dependent on tree species in subtropical broad-leaf forest. The results may mean that the increase of ambient temperature by changing climate would more severely restrict photosynthetic process in the late-successional species, C. concinnia than in the early- or mesophytic species,S. superb and C. hystrix. EFFECTS OF ELEVATED CO2 CONCENTRATION ON PHOTOSYNTHSIS, GROWTH AND DEVELOPMENT OF THE BROMELIAD GUZMANIA HUI Jun-Ai, YE Qing-Sheng Chin J Plan Ecolo. 2008, 32 (2):  424-430.  doi:10.3773/j.issn.1005-264x.2008.02.021 Abstract ( 2243 )   PDF (1460KB) ( 805 ) Related Articles | Metrics Aims Our objective was to determine the effects of elevated CO2 concentration on photosynthetic characteristics, growth rate, flowering percentage and activity of photosynthetic enzyme in Guzmania Denise’ and Guzmania Cherry’. Methods We measured Pn, Gs and Tr under various growth conditions using the Li-6400 and used the data to calculate WUE (water use efficiency, WUE=Pn/Tr). We determined chlorophyll and total soluble sugar levels by the method of Zhang & Qu (2003), starch content by the method of Xu et al. (1998) and rubisco and glycolic acid oxidase levels by the method of Ye et al. (1993). Important findings Under two elevated CO2 concentrations, net photosynthetic rate increased by 6.24%-31.91% and 11.92%- 41.48% over plants grown in ambient CO2 concentration during 30 d. Elevated CO2 concentration caused a marked rise in soluble sugar and starch accumulation in leaves, but significantly reduced stomatal conductance and transpiration rate. In addition, Rubisco activity was increased, and glycolate oxidase activity obviously was decreased. Plant height and leaf area increased 6.94%-14.63% and 1.66%-7.06% over plants grown in ambient CO2. There were 9.71%-20.85% and 2.87%-11.62% increases for (900± 40) μmol CO2&#8226;mol-1. Dry and fresh weights also increased with elevated CO2 concentration, as was flowering in Guzmania Cherry’. CH4 EMISSION FLUX FROM SOIL OF PINE PLANTATIONS IN THE QIANYANZHOU RED EARTH HILL REGION OF CHINA LIU Ling-Ling, LIU Yun-Fen, WEN Xue-Fa, WANG Ying-Hong Chin J Plan Ecolo. 2008, 32 (2):  431-439.  doi:10.3773/j.issn.1005-264x.2008.02.022 Abstract ( 2442 )   PDF (2053KB) ( 754 ) Related Articles | Metrics Aims Methane (CH4) plays an important role in the greenhouse effects. Our objectives were to evaluate the CH4 budget, understand seasonal variation of CH4, and explore effects of temperature and moisture on CH4 flux in a mid- subtropical pine plantation to provide data for estimating the influence of subtropical forest ecosystems on greenhouse effects.  Methods We analyzed CH4 flux from soils in the Qianyanzhou red earth hill region of China for 16 months from September 2004 to December 2005, using a static chamber-gas chromatograph technique. Important findings The soil of this type of pine plantation was a sink of CH4 to the atmosphere as a whole; annual CH4 flux ranged from 7.67 to -67.μg&#8226;m-2&#8226;h-1 with average of -15.530 μg&#8226;m-2&#8226;h-1under a forest soil treatment and from 9.31 to -90.36μg&#8226;m-2&#8226;h-1 with average of -16.μg&#8226;m-2&#8226;h-1 under a litter-free treatment. CH4 absorption had similarly seasonal variations with a sequence of autumn > summer > spring > winter for both treatments, but differed in variation ranges and time. The litter-free soil had larger ranges of seasonal variations, maximum CH4 sink was in October and minimum sink was in March. Meanwhile , the corresponding maximum and minimum CH4 sinks in the forest soil were in the September and February, respectively, a month earlier than litter-free treatment. Analysis of correlations between CH4 flux and temperature and moisture showed that CH4 flux had a significant positive correlation to soil temperature at 5 cm depth and a significant negative correlation to soil water content at 5 cm depth. Partial correlations showed the combined effects of moisture and temperature on CH4 flux in different seasons. Temperature was a limiting factor for soil absorption of CH4 during winter (December to February), but soil absorption increased during the rainy season (March to May). From July to August, CH4 absorption increased with the declining soil moisture but was restricted by high temperature. During the fall (September to November), CH4 absorption reached the maximum value for suitable combined effects of temperature and moisture. In summary, CH4 absorption increased with soil temperature and decreased with soil water content, but was restricted by high temperature. LEAF AREA CHARACTERISTICS OF PLANTATION STANDS IN SEMI-ARID LOESS HILL-GULLY REGION OF CHINA YIN Jing, QIU Guo-Yu, HE Fan, HE Kang-Ning, TIAN Jing-Hui, ZHANG Wei-Qiang, XIONG Yu-Jiu, ZHAO Shao-Hua, LIU Jian-Xin Chin J Plan Ecolo. 2008, 32 (2):  440-447.  doi:10.3773/j.issn.1005-264x.2008.02.023 Abstract ( 2381 )   PDF (326KB) ( 989 ) Related Articles | Metrics Aims Our objectives are to explore the relationship of leaf area and stand density and determine a convenient way to measure stand leaf area. Methods Using direct and indirect methods during the growing season (from May to October) of 2004, we measured seasonal variation of the leaf area of tree and shrub species: Robinia pseudoacacia stands with four densities (3 333,1 666, 1 111 and 833 plants per hm2), Platycladus orientalis stands with three densities (3 333, 1 666 and 1 111 plants per hm2), Caragana korshinskii, Hippophae rhamnoidesand Amorpha fruticosa . We developed formulas for leaf area by correlating leaf fresh weight, diameter of base branch and leaf area. Important findings Maximum leaf area and leaf area index (LAI) occurred in September for trees and August for shrubs. There is a significant power relation between leaf area and leaf fresh weight for Robinia pseudoacacia, significant linear relation between leaf area and leaf fresh weight for P. orientalis, C. korshinskii, H. rhamnoides and A. fruticosa, significant power relation between leaf area and diameter of base branch for C. korshinskii, and significant linear relation between leaf area and diameter of base branch for H. rhamnoides and A. fruticosa. After the planted vegetation in Loess hilly-gully region enters into a rapid growth stage, the LAI of R. pseudoacacia stands with different densities converge and LAI is not affected by initial or current density. This was also observed for P. orientalis stands. However, the leaf area of individual trees is negatively linearly related with stand density. Robinia pseudoacacia stands and P. orientalis stands have reached their maximum bearing capacities in the Loess hilly-gully region as a result of limited soil water. Therefore, to improve the quality of individual trees, we recommend stand densities not exceed 833 and 1 111 plants per hm2 for R. pseudoacaciaand P. orientalis, respectively. To improve the quality of entire stands, we suggest reducing the density a little. INFLUENCES OF SUBSTITUTING SIZE VARIABLES FOR AGE ON POPULATION SURVIVAL ANALYSIS: A CASE STUDY OF PINUS TABULAEFORMIS AND P. ARMANDII IN QINLING MOUNTAIN, CHINA HE Ya-Ping, FEI Shi-Min, JIANG Jun-Ming, CHEN Xiu-Ming, ZHANG Xu-Dong, HE Fei Chin J Plan Ecolo. 2008, 32 (2):  448-455.  doi:10.3773/j.issn.1005-264x.2008.02.024 Abstract ( 2865 )   PDF (1769KB) ( 934 ) Related Articles | Metrics Aims Our objectives were to 1) explore the feasibility of using individual height and basal diameter as substitutes for age, and 2) determine the influence of using these parameters on survival curve analysis for Pinus armandii and P. tabulaeformis populations in an area damaged by water disaster in the northern Qinling Mountain.  Methods We investigated two dominant species, P. tabulaeformis and P. armandii, which had regenerated after damage by water disaster 17 years earlier. We measured height, basal diameter and age of two species in 6 plots (total plot area was 11 900 m2). We determined age by counting number of whorls of branches. Important findings It was infeasible to use basal diameter and tree height in place of age for P. armandii because the relationship to age was an exponential function for both parameters; therefore, we used logarithmic values for basal diameter and tree height. In contrast, the functions were linear for P. tabulaeformis. The survival curves were all linear type with no type differences between base diameter, tree height and factual age. Our study indicates that it is feasible to use basal diameter and tree height as indices for factual age when the relationships of basal diameter and tree height to age are linear. Investigation of relations of age and size and their influential factors is important for the analysis of plant population demography. VERTICAL DISTRIBUTION OF ALGAE IN DIFFERENT LOCATIONS OF SAND DUNES IN THE GURBANTUNGGUT DESERT,XINJIANG, CHINA ZHANG Bing-Chang, ZHAO Jian-Cheng, ZHANG Yuan-Ming, LI Min, ZHANG Jing Chin J Plan Ecolo. 2008, 32 (2):  456-464.  doi:10.3773/j.issn.1005-264x.2008.02.025 Abstract ( 2786 )   PDF (1790KB) ( 944 ) Related Articles | Metrics Aims It is well known that desert algae play significant and irreplaceable roles in the early formation and structural maintenance of the biological soil crusts. Although species composition and community structure of algae have be en widely studied, only a few investigations have been made on the vertical distribution of soil algae in deserts. Our objective was to further reveal species community structure and ecological distribution of desert algae in vertical layers.  Methods We selected typical sand dunes in the Third Site of the southern Gurbantunggut Desert and collected 112 soil samples twice in summers of 2005 and 2006. Using a fixed section from different locations of the sand dunes, we gathered soil samples at serial sections of 0-0.5,0.5-1,1-2, 2-5 cm depth respectively. Algae species were identified by direct microscopic observation and liquid culture observation (dominant species were identified by direct observation). Each sample was checked three times, with ten observations for each sample. Algae biomass was determined by measurement of chlorophyll a.  Important findings Algae species composition differ among soil layers. Dominant algae species are mainly in the 0-2 cm layers and seldom exist in lower layers. The most dominant algae species in most layers is Microcoleus vaginatus.In 1-2 c m layers of interdune and windward, Oscillatoria pseudogeminata is the most dominant. In addition, Synechocystis crassa, Naviculasp. and Amphora ovalis are more dominant than other algae. Except for the top of sand dunes, algae biomass in different layers exhibited highly significant differences (p<0.01) at other locations. Algal biomass dramatically decreased with soil depth from the surface to lower levels. At the same soil depths, algae biomass declined from the inter dune position to the windward and leeward, to the top, with differences highly significant (p<0.01) or significant (p<0.05). SOIL MICROBIAL ACTIVITY AND BIOMASS C AND N CONTENT IN THREE TYPICAL ECOSYSTEMS IN QILIAN MOUNTAINS, CHINA WU Jian-Guo, AI Li  Chin J Plan Ecolo. 2008, 32 (2):  465-476.  doi:10.3773/j.issn.1005-264x.2008.02.026 Abstract ( 2727 )   PDF (2680KB) ( 998 ) Related Articles | Metrics Aims Our objectives were to measure the soil microbial biomass carbon (SMB C), soil microbial biomass nitrogen (SMB N) content and soil microbial activity, and determine the relationship between these parameters and other soil properties (including organic carbon, total nitrogen content and water content) in montane forest (dominated by Picea crassifolia), steppe and alpine meadow ecosystems in Qi Lian Mountains, China.  Methods We measured SMB C and SMB N content using fumigation-incubation method and soil microbial activity using substrate-induced respiration. Important findings The SMB C content under forest was 60% and 120% higher than under steppe and alpine meadow, respectively, and it was 40% higher under steppe than alpine meadow. The SMB N content was 64% and 111% higher in 0-5 cm soil depth under forest than alpine meadow and steppe, respectively, and it was 29% higher under alpine meadow than steppe. Also, it was 7% and 191% higher in 5-15 cm soil depth under forest than steppe and alpine meadow, respectively, and it was 171% higher under steppe than alpine meadow (p<0.05). The ratio SMB C (SMB C-to-SOC (Soil organic carbon), 0.4%-2.8%) was 32% higher under forest and steppe than alpine meadow, and the ratio of SMB N (SMB N-to-total soil N, 0.5%-2.8%) in 0-5 and 5-15 cm soil depths was 150% higher under forest and steppe than alpine meadow (p<0.05). Soil microbial activity in 0-5 or 5-15 cm soil depth was 26% higher under forest or alpine meadow than steppe, and in 15-35 cm soil depth it was 28% higher under forest than steppe and alpine meadow (p<0.05). The SMB C and SMB N content was positively correlated with SOC content, and the SMB N content or its ratio was also positively correlated with the SMB C content and its ratio (r2>0.30, p<0.000 1). The SMB N content, SMB C ratio, SMB N ratio and microbial activity were significantly negatively correlated with soil pH. The SMB C content, SMB N content and their ratio and microbial activity were positively correlated with soil water content. INTERCROPPING ADVANTAGE AND CONTRIBUTION OF ABOVE- AND BELOW-GROUND INTERACTIONS IN WHEAT-MAIZE INTERCROPPING LIU Guang-Cai, YANG Qi-Feng, LI Long, SUN Jian-Hao Chin J Plan Ecolo. 2008, 32 (2):  477-484.  doi:10.3773/j.issn.1005-264x.2008.02.027 Abstract ( 2524 )   PDF (1759KB) ( 888 ) Related Articles | Metrics Aims Optimization of resource structure is important for improving the yield of intercropping systems. Our objective was to clarify the contribution of above- and below-ground interactions to the intercropping advantage.  Methods We employed a micro-plot experiment and root barriers in a wheat-maize intercropping system with or without maize plastic sheet mulching. Important findings Non-plastic sheet mulching wheat-maize intercropping system has a yield advantage with land equivalent ratios (LERs) for grain yield and biomass of 1.30 and 1.29, respectively. Plastic sheet mulching with maize can significantly increase the yield advantage of intercropping, with LERs for grain yield and biomass of 1.41 and 1.40, respectively. There is increased nitrogen, phosphorous and potassium uptake in the non-plastic sheet mulching wheat-maize intercropping and with plastic sheet mulching with maize. In the non-plastic sheet mulching intercropping system, the relative contribution to the intercropping advantage is 75% above-ground and 25% below-ground, but in the plastic sheet mulching intercropping system it is 67% above-ground and 33% below-ground. The relative contribution of above-and below-ground interactions to nutrient advantage are 67% and 33% for nitrogen and phosphorus and 50% and 50% for potassium, respectively, in non-plastic sheet mulching intercropping; however, plastic sheet mulching with maize can increase the below-ground contribution to nitrogen and phosphorus advantage in the intercropping (there is no significant influence to potassium advantage). Intercropping advantage can be obtained by crop matching and controlled by plastic sheet mulching. Plastic sheet mulching with maize can significantly increase yield advantage, nutrients absorption advantage and the below-ground contribution. EFFECTS OF ROW SPACING IN WINTER WHEAT ON CANOPY STRUCTURE AND MICROCLIMATE IN LATER GROWTH STAGE YANG Wen-Ping, GUO Tian-Cai, LIU Sheng-Bo, WANG Chen-Yang, WANG Yong-Hua, MA Dong-Yun Chin J Plan Ecolo. 2008, 32 (2):  485-490.  doi:10.3773/j.issn.1005-264x.2008.02.028 Abstract ( 2553 )   PDF (1380KB) ( 1044 ) Related Articles | Metrics Aims Row spacing is a cultivation technique used for many field crops , but not wheat, for which traditional row spacing of 20 cm is used regardless of spike-type wheat cultivar. Our objective was to examine the effects of row spacing on canopy structure, its microclimate and yield in heavy-ear winter wheat, Triticum aestivum. Methods We conducted a field experiment on the farm of National Engineering Research Center for Wheat, Zhengzhou, China, using heavy-ear winter wheat cultivar Lankao Aizao 8’. Three row spacing treatments (15, 20 and 30 cm) were used in a randomized complete block arrangement with three replications. Important findings Leaf area index decreased and canopy openness increased with increased row spacing. Moreover, light interception at different layers, extinction coefficient, and relative humidity decreased, canopy temperature at each layer increased, and the spread of carbon dioxide was unchanged. Yield could be in creased by reducing row spacing to have an even plant-to-plant distribution that could weaken competition. Our finding that 15 cm row spacing was beneficial to canopy structure and yield indicated that heavy-ear winter wheat cultivar ‘Lankao Aizao 8’should not be planted in conventional 20 cm row spacing. EFFECTS OF WATER STRESS ON OSMOLYTES AT DIFFERENT GROWTH STAGES IN RICE LEAVES AND ROOTS CAI Kun-Zheng, WU Xue-Zhu, LUO Shi-Ming Chin J Plan Ecolo. 2008, 32 (2):  491-500.  doi:10.3773/j.issn.1005-264x.2008.02.029 Abstract ( 2778 )   PDF (495KB) ( 1486 ) Related Articles | Metrics Aims Water stress is one of the most important ecological factors affecting yield and quality of rice (Oryza sativa), and osmotic adjustment is the main mechanism for the crop to adjust to drought. Our objective was to elucidate the effects of water stress on osmolytes at different growth stages in leaves and roots of rice.  Methods We used the rice variety Feng-Hua-Zhan grown in pots to study the effects of water stress on inorganic and organic osmolytes in leaves and roots. Water was withheld 15 d at the growth stages of tillering, panicle differentiation, heading and filling. Important findings Water stress significantly decreased leaf water potential. Organic osmolytes including soluble sugar, proline and free amino acid and inorganic osmolytes including K+ and Mg2+ in leaves and roots increased significantly after drought treatment at different growth stages. These osmolytes could be reduced to normal levels after re-watering at the tillering stage, but not at the panicle differentiation and heading stages. Osmolytes accumulated to higher levels after drought treatment at panicle differentiation and heading stages than at other growth stages, and organic osmolytes accumulated to higher levels than inorganic osmolytes in the different treatments. The order of osmotic adjustment (OA) ability at different growth stages was: heading, panicle differentiation, filling and tillering. The OA in roots was lower than but positively correlated to OA in leaves. Roots were more sensitive and had a more rapid response to water than did leaves. The order of contribution for osmotic adjustment in leaves and roots with different osmolytes was: K+ > Ca2+ > soluble sugar > Mg2+ > free amino acid > proline. REVIEW OF STUDIES ON MAXIMUM SIZE-DENSITY RULES FU Li-Hua, ZHANG Jian-Guo, DUAN Ai-Guo, SUN Hong-Gang, HE Cai-Yun Chin J Plan Ecolo. 2008, 32 (2):  501-511.  doi:10.3773/j.issn.1005-264x.2008.02.030 Abstract ( 2322 )   PDF (2817KB) ( 899 ) Related Articles | Metrics We summarized the research on theory and method related to maximum size-density rules. There are two main theories, Yoda’s 3/2 power law based on Euclidian geometry and West, Brown and Enquist’s fractal scaling rules (WBE model). However, both are based on static analysis rather than dynamic competition between plants, which is the direction attempted by recent researchers. In spite of this, some researches have not eliminated the reliance on these untenable assumptions, such as the instead of whole population by the average plant size. Further work is needed. In addition, there is ongoing debate regarding assumptions, mathematical deduction and original data points used for estimating parameters. Each model is formulated based on special situations and assumptions, it is not all-purpose; therefore, these models can be coupled, depending on practical situations. Moreover, various methods are used for estimating parameters because of different perceptions and criteria. Therefore uniform criteria need to be established. REVIEW OF THE DIVERSITY OF ENDOPHYTE GENETICS AND SYMBIOTIC INTERACTIONS WITH GRASSES WEI Yu-Kun, GAO Yu-Bao Chin J Plan Ecolo. 2008, 32 (2):  512-520.  doi:10.3773/j.issn.1005-264x.2008.02.031 Abstract ( 2560 )   PDF (2253KB) ( 955 ) Related Articles | Metrics Endophytes, especially asexual and systemic endophytes in grasses, are generally viewed as plant mutualists based on the action of their alkaloids. Enhanced drought tolerance is a well-known benefit of endophytic infection in tall fescue (Festuca arundinacea) and perennial ryegrass (Lolium perenne), and increased tolerance to other environmental stresses like heat, low light and low soil fertility has also been reported. Three endophyte life histories have been recognized: symptomatic of life cycle where the fungus horizontally transmits by meiotic ascospores which induce sterilization of the host, asymptomatic life cycle where the fungus remains internal and there is vertical transmission by plant seeds throughout the season and a mixed mechanism of life cycle, which can be plastic. Neotyphodium endophytes are closely related to sexual Epichloë species, which are the grass choke pathogen, and likely evolved either directly from sexual Epichloë species or by interspecific hybridization of distinct lineages of Epichloë and Neotyphodium. In vertical transmission, only one fungal genotype is transmitted to the seed progeny, which are usually produced by outcrossing in the host. The same fungal genotype is present in seeds that are genetically variable, and the high level of genetic specificity is probably tied to genetic incompatibility constraining the diversity of successful genotype-genotype combinations of the systemic seed-borne endophytes and the host grasses. The defensive mutualism depends on a certain grass-endophyte genotype combination and environmental conditions. Recent studies have suggested that there is a mutualism-parasitism continuum for the symbiosis between asexual endophytes and grasses and that the symbiosis existing in native grass-endophyte symbionts has a more complex mechanism than in agricultural ecosystems. The host-specific endophyte, with negligible biomass, may alter plant community structure, reduce plant diversity and control food-web structure by disrupting the transfer of energy from plants to upper trophic levels. Future studies should focus on how ecology and genetics interact to shift fungal life history traits between the extremes of sexuality and asexuality and antagonism and mutualism. These questions require a more comprehensive understanding of the genetic basis and phenotypic plasticity of traits of the grass-endophyte interactions. Office Online Focus Column High Impact Articles
This blog is about my musings and thoughts. I hope you find it useful, at most, and entertaining, at least. Quotes Oak Island Items for Sale #### Presence Elsewhere [email protected] GitHub BitBucket # Adding NAIP (and MrSID format in general) to QGIS Date: 2015-02-12 Tags: qgis gis orthoimagery I happened across 1-meter National Agriculture Imagery Program (NAIP) imagery on PASDA. Excitedly I downloaded the zip file for my county and uncompressed it. Inside was a shp file and a very large sid file. The shp contains what appear to be the photography tracks and not very useful for my purposes (looking at pretty pictures!). I attempted to add the sid as a raster layer only to be greeted by an error telling me that it’s an unsupported filetype. What is this format? file was no help, just telling me it’s “data” I hit Google to attempt to figure out what this mystery format was and found out it was called MrSID (Multiresolution Seamless Image Database) an image compression format developed at Los Alamos and patented by LizardTech. (Aside: Why a government agency is giving out data in a proprietary format when suitable open alternatives exist is beyond me.) The friendly denizens of “#qgis”:irc://irc.freenode.net/#qgis confirmed what I found and pointed me at a command line tool from LizardTech that allows one to convert MrSID files into a more open format. (While I generally don’t run proprietary code on my laptop, I sucked it up and did the conversion). geoexpress_commandlineutils_linux/linux64/GeoExpressCLUtils-9.1.0.3982/bin/mrsidgeodecode -i ortho_1-1_1n_s_pa003_2013_1.sid -o ortho_1-1_1n_s_pa003_2013_1.tiff BEWARE: This will increase the size of the file 15 to 20 times. To get some idea of what’s stored in this file you can run gdalinfo ortho_1-1_1n_s_pa003_2013_1.tiff and see that there are 3 bands (Red, Green, and Blue) and actual size of the image. Driver: GTiff/GeoTIFF Files: ortho_1-1_1n_s_pa003_2013_1.tiff Size is 59112, 56449 Coordinate System is ' TIFFTAG_XRESOLUTION=1 TIFFTAG_YRESOLUTION=1 INTERLEAVE=PIXEL Corner Coordinates: Upper Left ( 0.0, 0.0) Lower Left ( 0.0,56449.0) Upper Right (59112.0, 0.0) Lower Right (59112.0,56449.0) Center (29556.0,28224.5) Band 1 Block=59112x1 Type=Byte, ColorInterp=Red Band 2 Block=59112x1 Type=Byte, ColorInterp=Green Band 3 Block=59112x1 Type=Byte, ColorInterp=Blue This tiff, despite being 10GB loaded just peachy in QGIS. However, I wanted some of that disk space back, so I was told to convert it to a jpeg image. Additionally, the file does not contain it’s own projection (and probably is relying on the prj file in the same directory), so I decided to add that into the jpeg. gdal_translate -co TILED=YES -co COMPRESS=JPEG -co PHOTOMETRIC=YCBCR -co JPEG_QUALITY=85 -a_srs ortho_1-1_1n_s_pa003_2013_1.prj ortho_1-1_1n_s_pa003_2013_1.tiff ortho_1-1_1n_s_pa003_2013_1.jpeg gdalinfo shows that we converted it, it’s the same size, and it’s the correct projection. Driver: GTiff/GeoTIFF Files: ortho_1-1_1n_s_pa003_2013_1.jpeg Size is 59112, 56449 Coordinate System is: GEOGCS["GCS_North_American_1983", DATUM["D_North_American_1983", SPHEROID["GRS_1980",6378137,298.257222101]], PRIMEM["Greenwich",0], UNIT["degree",0.0174532925199433]], PROJECTION["Transverse_Mercator"], PARAMETER["latitude_of_origin",0], PARAMETER["central_meridian",-81], PARAMETER["scale_factor",0.9996], PARAMETER["false_easting",500000], PARAMETER["false_northing",0], UNIT["metre",1, AUTHORITY["EPSG","9001"]]] AREA_OR_POINT=Area TIFFTAG_XRESOLUTION=1 TIFFTAG_YRESOLUTION=1 COMPRESSION=YCbCr JPEG INTERLEAVE=PIXEL SOURCE_COLOR_SPACE=YCbCr Corner Coordinates: Upper Left ( 0.0, 0.0) Lower Left ( 0.0,56449.0) Upper Right (59112.0, 0.0) Lower Right (59112.0,56449.0) Center (29556.0,28224.5) Band 1 Block=256x256 Type=Byte, ColorInterp=Red Band 2 Block=256x256 Type=Byte, ColorInterp=Green Band 3 Block=256x256 Type=Byte, ColorInterp=Blue In #qgis I was told that it’s usually prudent to precalulate overviews (which are similar to zoom-levels in a TMS). This calculates an image pyramid, but not exactly that generates images that can be used to quickly display the image when zoomed in and out without having to compute it from the base blocks every time. gdaladdo -r gauss ortho_1-1_1n_s_pa003_2013_1.jpeg 2 4 8 16 32 64 128 256 512 Will compute overviews for half, quarter, eight, &c of the entire image (Note that these are powers of 2, as that works best because it becomes easy to calculate the size of the pyramid). gdalinfo shows that we’ve added overviews. Driver: GTiff/GeoTIFF Files: ortho_1-1_1n_s_pa003_2013_1.jpeg Size is 59112, 56449 Coordinate System is: GEOGCS["GCS_North_American_1983", DATUM["D_North_American_1983", SPHEROID["GRS_1980",6378137,298.257222101]], PRIMEM["Greenwich",0], UNIT["degree",0.0174532925199433]], PROJECTION["Transverse_Mercator"], PARAMETER["latitude_of_origin",0], PARAMETER["central_meridian",-81], PARAMETER["scale_factor",0.9996], PARAMETER["false_easting",500000], PARAMETER["false_northing",0], UNIT["metre",1, AUTHORITY["EPSG","9001"]]] AREA_OR_POINT=Area TIFFTAG_XRESOLUTION=1 TIFFTAG_YRESOLUTION=1 COMPRESSION=YCbCr JPEG INTERLEAVE=PIXEL SOURCE_COLOR_SPACE=YCbCr Corner Coordinates: Upper Left ( 0.0, 0.0) Lower Left ( 0.0,56449.0) Upper Right (59112.0, 0.0) Lower Right (59112.0,56449.0) Center (29556.0,28224.5) Band 1 Block=256x256 Type=Byte, ColorInterp=Red Overviews: 29556x28225, 14778x14113, 7389x7057, 3695x3529, 1848x1765, 924x883, 462x442, 231x221, 116x111 Band 2 Block=256x256 Type=Byte, ColorInterp=Green Overviews: 29556x28225, 14778x14113, 7389x7057, 3695x3529, 1848x1765, 924x883, 462x442, 231x221, 116x111 Band 3 Block=256x256 Type=Byte, ColorInterp=Blue Overviews: 29556x28225, 14778x14113, 7389x7057, 3695x3529, 1848x1765, 924x883, 462x442, 231x221, 116x111 ` Now, how does this compare with the original SRID in terms of size? Format Size Relative Size MrSID 637M 1 TIFF 9.4G 15 JPEG 708M 1.1 JPEG (with overviews) 1.6G 2.5 So, a little larger, but natively supported by F/OSS tools such as GDAL and QGIS. In terms of quality, I haven’t noticed a difference. Happy Orthoimagerying!
# Y Mx B Example Problems Slope intercept form Basic-mathematics.com. Several examples and practice problems with pictures. How to convert from y = mx + b to Ax + By = C. This problem only has the fraction $$\frac{b}, Get an answer for 'How do you solve and graph a simple y=mx+b equation?For example: y=5x+-6 I don't get how to How To Solve Y=mx+b. this problem is to. ### A word problem involving y = mx + b Math Central Example of Y mx+b Examples to save time and learn from. 1/12/2009В В· Please help me! I need two real life example that you would need to use the slope-intercept equation. The slope intercept equation is y=mx+b (in case you, I. Model Problems. II. Practice III. Challenge Problems VI. The equation of a line is given by the formula y = mx + b. Example 1 Write the equation of the. formula and solve the formula for y. Example 1. m, (0,b), (x,y) y= mx + b Idenп¬Ѓtytheslope Lines with zero slope or no slope can make a problem seem very 20/01/1997В В· After you get 3 x+ 2y = 5 into y = mx + b form, how do you graph it? Algebra I Exercises: Writing Equations into y = mx + b Form Given Slope and Y-Intercept Derive the equation y = mx for a line through the origin and the equation y = mx + b “In this example, we can see how the y Then as a class find the slope 5-2 write linear equations y = mx and y = mx + b 5-2 write linear equations y = mx and y = mx + b -teacher presents notes and example problems, Get an answer for 'How do you solve and graph a simple y=mx+b equation?For example: y=5x+-6 I don't get how to How To Solve Y=mx+b. this problem is to Slope Intercept Scenarios. the equation y = mx + b as defining a the y intercept of the equation from the context of the problem. Allow the use of examples. Math Word Problems with Solutions and Explanations Ball A would have done a WHOLE number X of rotations and ball B would have done a WHOLE number Y of rotations. Collection of Math word problems grade 9 worksheets from Y Mx B and then slope intercept form formula examples and practice horizontal lines the This activity is mainly an introduction of Constant Rate of Change with y intercept in y = mx + b Report a problem. Real Life Examples Linear Equation y = mx 1/12/2009В В· Please help me! I need two real life example that you would need to use the slope-intercept equation. The slope intercept equation is y=mx+b (in case you Math homework help video to graph a line that is in slope-intercept form y=mx+b. Problem 1. Welcome to our website, we try to bring you relevant images to what you are looking for about "Y Mx B Practice Problems". Therefore we present the picture gallery below. The slope intercept form of an equation is y = mx + b, Follow along with these examples to see how to translate linear functions into Slope intercept form: y ### Kids Math Linear Equations Slope Forms - Ducksters Y Mx B Worksheets Printable Worksheets. y = m x + b. y = 3 x - 2 is the We've now seen an example of a problem where you are How do we write an equation for a real world problem in slope intercept form?, 20/01/1997В В· After you get 3 x+ 2y = 5 into y = mx + b form, how do you graph it?. solve for x. y=mx+b Homework Help and Answers Slader. Example: Find the equation of perpendicular to y = в€’4x + 10 ; and passes though the point (7,2) And that answer is OK, but let's also put it in "y=mx+b" form:, Solved: solve for x. y=mx+b - Slader. Search SEARCH. Scan; Browse upper level math high school math science social. ### Y=mx+(b) solution Y=mx+(b) solution. ∆y/∆x, is evaluated. See A more accurate method is using the y = mx + b formula obtained from the plotted graph Microsoft Word - Beers_Law_Calculations.doc 25/01/2009В В· What does y= mx+ b stand for? plse help For example if x=1 you would find y by multiplying 1 by 2 (y=mx part) and adding 1 (+b or y intercept part).. • Example of Y mx+b Examples to save time and learn from • Algebra I Exercises Writing Equations into y = mx + b • I. Model Problems. II. Practice III. Challenge Problems VI. The equation of a line is given by the formula y = mx + b. Example 1 Write the equation of the Get an answer for 'How do you solve and graph a simple y=mx+b equation?For example: y=5x+-6 I don't get how to How To Solve Y=mx+b. this problem is to This video shows us how to solve algebraic equations with the y = mx+b and b=3. Similarly looking at the other example 1-5y=2x-1 and here Solve word problems In coordinate graphs Looking back at the equations for Example (a), (b), and (c) written in y The slope‐intercept form looks like y = mx + b where m is the Y = mx + b Word Problems Suppose that the water level of a river is 34 feet and that it is receding at a rate … Get the answers you need, now! Math Word Problems with Solutions and Explanations Ball A would have done a WHOLE number X of rotations and ball B would have done a WHOLE number Y of rotations. Slope Intercept Form Calculator to find the y=mx+b equation from two points. y - mx = b. Example: Videos can also be useful when practicing for algebra problems. 13/12/2009В В· Best Answer: y = mx + b y represents the outcome per x. m represents the slope, which is a rate (rise over run). Rates are things that happen per something www.math10.ca Linear Functions LESSON TWO - Slope-Intercept Form Lesson Notes y = mx + b Example 1 a) y = 3x - 2 b) y = x + 1 4 3 Given the following slope-intercept y = m x + b. y = 3 x - 2 is the We've now seen an example of a problem where you are How do we write an equation for a real world problem in slope intercept form? Write and graph a linear equation (y=mx+b) to model this situation. If 15 videos are rented, what is the revenue? y = mx + b word problems #1 Slope Intercept Form Calculator to find the y=mx+b equation from two points. y - mx = b. Example: Videos can also be useful when practicing for algebra problems. Welcome to our website, we try to bring you relevant images to what you are looking for about "Y Mx B Practice Problems". Therefore we present the picture gallery below. Solved: solve for x. y=mx+b - Slader. Search SEARCH. Scan; Browse upper level math high school math science social ## Y Mx B Worksheets – Shanepaulneil.com IXL Slope-intercept form graph an equation (Algebra 1. Slope Intercept Form Calculator to find the y=mx+b equation from two points. y - mx = b. Example: Videos can also be useful when practicing for algebra problems., 13/12/2009В В· Best Answer: y = mx + b y represents the outcome per x. m represents the slope, which is a rate (rise over run). Rates are things that happen per something. ### Slope-Intercept Form Word Problems Kyrene School District ALEKS Solving a word problem in the form y=mx+b example. formula and solve the formula for y. Example 1. m, (0,b), (x,y) y= mx + b Idenп¬Ѓtytheslope Lines with zero slope or no slope can make a problem seem very, Several examples and practice problems with pictures. How to convert from y = mx + b to Ax + By = C. This problem only has the fraction$$ \frac{b}. This activity is mainly an introduction of Constant Rate of Change with y intercept in y = mx + b Report a problem. Real Life Examples Linear Equation y = mx Worksheets for slope and graphing linear equations. (y = mx + b) graphing linear and examples are easy to follow. Word problems relate algebra to familiar Algebra I Exercises: Writing Equations into y = mx + b Form Given Slope and Y-Intercept The slope intercept form of an equation is y = mx + b, Follow along with these examples to see how to translate linear functions into Slope intercept form: y www.math10.ca Linear Functions LESSON TWO - Slope-Intercept Form Lesson Notes y = mx + b Example 1 a) y = 3x - 2 b) y = x + 1 4 3 Given the following slope-intercept Lesson 6 Extra Practice Equations in y = mx + b Form Problem Solving for y=mx+b For Exercises 1 and 2, use the following information. The slope intercept form of an equation is y = mx + b, Follow along with these examples to see how to translate linear functions into Slope intercept form: y This slope-intercept game has ten multiple choice problems about the slope-intercept form of Interpret the equation y = mx + b as give examples of Write and graph a linear equation (y=mx+b) to model this situation. If 15 videos are rented, what is the revenue? y = mx + b word problems #1 20/01/1997В В· After you get 3 x+ 2y = 5 into y = mx + b form, how do you graph it? Derive the equation y = mx for a line through the origin and the equation y = mx + b “In this example, we can see how the y Then as a class find the slope Shows how to extract the meaning of slope and y-intercept (when the equation is written as "y = mx + b In the example from above, the y-intercept would be Math Word Problems with Solutions and Explanations Ball A would have done a WHOLE number X of rotations and ball B would have done a WHOLE number Y of rotations. I. Model Problems. II. Practice III. Challenge Problems VI. The equation of a line is given by the formula y = mx + b. Example 1 Write the equation of the Shows how to extract the meaning of slope and y-intercept (when the equation is written as "y = mx + b In the example from above, the y-intercept would be Collection of Math word problems grade 9 worksheets from Y Mx B and then slope intercept form formula examples and practice horizontal lines the I. Model Problems. II. Practice III. Challenge Problems VI. The equation of a line is given by the formula y = mx + b. Example 1 Write the equation of the 5-2 write linear equations y = mx and y = mx + b 5-2 write linear equations y = mx and y = mx + b -teacher presents notes and example problems, Example Problems: 1) Graph the equation y = 1/2x + 1 From the equation y = mx + b we know that: m = slope = ВЅ b = intercept = 1 Get an answer for 'How do you solve and graph a simple y=mx+b equation?For example: y=5x+-6 I don't get how to How To Solve Y=mx+b. this problem is to Welcome to our website, we try to bring you relevant images to what you are looking for about "Y Mx B Practice Problems". Therefore we present the picture gallery below. This activity is mainly an introduction of Constant Rate of Change with y intercept in y = mx + b Report a problem. Real Life Examples Linear Equation y = mx 25/01/2009В В· What does y= mx+ b stand for? plse help For example if x=1 you would find y by multiplying 1 by 2 (y=mx part) and adding 1 (+b or y intercept part). 1/12/2009В В· Please help me! I need two real life example that you would need to use the slope-intercept equation. The slope intercept equation is y=mx+b (in case you 13/12/2009В В· Best Answer: y = mx + b y represents the outcome per x. m represents the slope, which is a rate (rise over run). Rates are things that happen per something Y = mx + b Word Problems Suppose that the water level of a river is 34 feet and that it is receding at a rate … Get the answers you need, now! I. Model Problems. II. Practice III. Challenge Problems VI. The equation of a line is given by the formula y = mx + b. Example 1 Write the equation of the Slope Intercept Scenarios. the equation y = mx + b as defining a the y intercept of the equation from the context of the problem. Allow the use of examples. y = m x + b. y = 3 x - 2 is the We've now seen an example of a problem where you are How do we write an equation for a real world problem in slope intercept form? Equation of Line . Related Topics: More y = mx + b, is in slope Try the given examples, or type in your own problem and check your answer with the step-by ### How to Graph a Line using y=mx+b Problem 1 - Algebra Example of Y mx+b Examples to save time and learn from. Slope Intercept Form Calculator to find the y=mx+b equation from two points. y - mx = b. Example: Videos can also be useful when practicing for algebra problems., Y = mx + b Word Problems Suppose that the water level of a river is 34 feet and that it is receding at a rate … Get the answers you need, now!. ### y = mx + b LESSON TWO Slope-Intercept Form Lesson Notes Beginning Linear Functions CPALMS. Y Mx B Worksheets - showing all 8 Slope intercept form word problems, Slope intercept form of a linear equation examples, Model practice challenge problems vi, Welcome to our website, we try to bring you relevant images to what you are looking for about "Y Mx B Practice Problems". Therefore we present the picture gallery below.. • Algebra I Exercises Writing Equations into y = mx + b • How to Graph a Line using y=mx+b Problem 1 - Algebra • Lesson 5-2 write linear equations y = mx and y = mx + b • Get an answer for 'How do you solve and graph a simple y=mx+b equation?For example: y=5x+-6 I don't get how to How To Solve Y=mx+b. this problem is to This activity is mainly an introduction of Constant Rate of Change with y intercept in y = mx + b Report a problem. Real Life Examples Linear Equation y = mx Slope Intercept Scenarios. the equation y = mx + b as defining a the y intercept of the equation from the context of the problem. Allow the use of examples. Algebra I Exercises: Writing Equations into y = mx + b Form Given Slope and Y-Intercept 1/12/2009В В· Please help me! I need two real life example that you would need to use the slope-intercept equation. The slope intercept equation is y=mx+b (in case you 20/01/1997В В· After you get 3 x+ 2y = 5 into y = mx + b form, how do you graph it? This video shows us how to solve algebraic equations with the y = mx+b and b=3. Similarly looking at the other example 1-5y=2x-1 and here Solve word problems 5-2 write linear equations y = mx and y = mx + b 5-2 write linear equations y = mx and y = mx + b -teacher presents notes and example problems, Simplifying Y = mx + (b) Y = mx + b Reorder the terms: Y = b + mx Solving Y = b + mx Solving for variable 'Y'. Move all terms containing Y to the left, y = m x + b. y = 3 x - 2 is the We've now seen an example of a problem where you are How do we write an equation for a real world problem in slope intercept form? Welcome to our website, we try to bring you relevant images to what you are looking for about "Y Mx B Practice Problems". Therefore we present the picture gallery below. Click here for a Detailed Description of all the Linear Functions Worksheets. will produce problems for practicing graphing lines given the Y-intercept Solved: solve for x. y=mx+b - Slader. Search SEARCH. Scan; Browse upper level math high school math science social A thorough understanding of how to use the slope and one or two points to write the equation of line in slope intercept form (y = mx + b) Example Problems and Collection of Math word problems grade 9 worksheets from Y Mx B and then slope intercept form formula examples and practice horizontal lines the Several examples and practice problems with pictures. How to convert from y = mx + b to Ax + By = C. This problem only has the fraction $$\frac{b} Slope-Intercept Form of a Line (y = mx + b) y = mx + b. is in slope-intercept form where: The procedure for solving this problem is very similar to examples 20/01/1997В В· After you get 3 x+ 2y = 5 into y = mx + b form, how do you graph it? Example: Find the equation of perpendicular to y = в€’4x + 10 ; and passes though the point (7,2) And that answer is OK, but let's also put it in "y=mx+b" form: Equation of Line . Related Topics: More y = mx + b, is in slope Try the given examples, or type in your own problem and check your answer with the step-by This slope-intercept game has ten multiple choice problems about the slope-intercept form of Interpret the equation y = mx + b as give examples of Derive the equation y = mx for a line through the origin and the equation y = mx + b “In this example, we can see how the y Then as a class find the slope Shows how to extract the meaning of slope and y-intercept (when the equation is written as "y = mx + b In the example from above, the y-intercept would be What is an example of a linear function's real life situation? Linear equations all look like this y= mx + b. What are the examples of real life problems www.math10.ca Linear Functions LESSON TWO - Slope-Intercept Form Lesson Notes y = mx + b Example 1 a) y = 3x - 2 b) y = x + 1 4 3 Given the following slope-intercept Shows how to extract the meaning of slope and y-intercept (when the equation is written as "y = mx + b In the example from above, the y-intercept would be Includes you-tube video Lesson with pictures and many example problems. Chart Maker; Graphing Calculator y = mx + b becomes y = 0x +b y = b. Get an answer for 'How do you solve and graph a simple y=mx+b equation?For example: y=5x+-6 I don't get how to How To Solve Y=mx+b. this problem is to Welcome to our website, we try to bring you relevant images to what you are looking for about "Y Mx B Practice Problems". Therefore we present the picture gallery below. Slope Intercept Scenarios. the equation y = mx + b as defining a the y intercept of the equation from the context of the problem. Allow the use of examples. Slope-Intercept Form of a Line (y = mx + b) y = mx + b. is in slope-intercept form where: The procedure for solving this problem is very similar to examples Several examples and practice problems with pictures. How to convert from y = mx + b to Ax + By = C. This problem only has the fraction$$ \frac{b} The new Material Design of Google recommends to use floating action buttons to draw How do I keep my floating action button from blocking other android button Android floating action button example Halfway Point Android Tutorials- Android Tutorials, Android examples, Android Studio Tutorials, Today we will learn about Android Floating Action Button.
# Caesar-Cipher Implementation ## Background I am a total beginner in Haskell, so after reading the Starting out-chapter of "Learn you a Haskell", I wanted to create my first program that actually does something. I decided to do the famous Caesar-Cipher. ## Code import Data.Char encryptChar :: Char -> Int -> Int encryptChar char shift = if ord char > 64 && ord char < 91 --Only encrypt A...Z then (if ord char + shift > 90 --"Z" with shift 3 becomes "C" and not "]" then ord char + shift - 26 else ord char + shift) else ord char decryptChar :: Char -> Int -> Int decryptChar char shift = if ord char > 64 && ord char < 91 then (if ord char - shift < 65 then ord char - shift + 26 else ord char - shift) else ord char encrypt :: String -> Int -> String encrypt string shift = [chr (encryptChar (toUpper x) shift) | x <- string] --"Loop" through string to encrypt char by char decrypt :: String -> Int -> String decrypt string shift = [chr (decryptChar (toUpper x) shift) | x <- string] main = print(decrypt "KHOOR, ZRUOG!" 3) (The code does work as intended.) ## Question(s) • How can this code be improved in general? • I have a background in imperative languages. Did I do something that is untypical for functional programming languages? I would appreciate any suggestions. 1. I'd start by avoiding directly encoding ASCII character codes into the logic. For example, instead of: if ord char > 64 && ord char < 91 I'd probably use: if char >= 'A' && char <= 'Z' I think this shows the intent enough more clearly to be worthwhile. 2. Given that you also do this a couple of different places, I'd probably write a small isUpper function to return a Bool indicating whether a character is an upper-case letter: isUpper :: Char -> Bool isUpper char = char >= 'A' && char <= 'Z' Then the rest of the code can use that: encryptChar char shift = if isUpper char -- ... decryptChar char shift = if isUpper char -- ... [Note: the standard library already has an isUpper, but it may not fit your needs, since it's Unicode-aware, and here you apparently only want to deal with English letters.] This is mostly fine but there is a lot of repetition as well and it is not broken up very well. Remember that haskell is lazy so none of the operations in the where clauses will be executed unless they are needed, so it is safe to just set them up. Keep in mind I went to the extreme and made a where clause for everything but you can keep it more sensible if you want. decryptChar char shift = if inRange then if wouldWrap then wrapped else iShifted else i where i = ord char inRange = i > 64 && i < 91 iShifted = i - shift wouldWrap = iShifted < 65 wrapped = iShifted + 26
# D. Board Game ### гаралт стандарт гаралт You are playing a board card game. In this game the player has two characteristics, $x$ and $y$ -- the white magic skill and the black magic skill, respectively. There are $n$ spell cards lying on the table, each of them has four characteristics, $a_{i}$, $b_{i}$, $c_{i}$ and $d_{i}$. In one move a player can pick one of the cards and cast the spell written on it, but only if first two of it's characteristics meet the requirement $a_{i} ≤ x$ and $b_{i} ≤ y$, i.e. if the player has enough magic skill to cast this spell. However, after casting the spell the characteristics of a player change and become equal to $x = c_{i}$ and $y = d_{i}$. At the beginning of the game both characteristics of a player are equal to zero. The goal of the game is to cast the $n$-th spell. Your task is to make it in as few moves as possible. You are allowed to use spell in any order and any number of times (for example, you may not use some spells at all). ## Оролт The first line of the input contains a single integer $n$ ($1 ≤ n ≤ 100 000$) -- the number of cards on the table. Each of the next $n$ lines contains four integers $a_{i}$, $b_{i}$, $c_{i}$, $d_{i}$ ($0 ≤ a_{i}, b_{i}, c_{i}, d_{i} ≤ 10^{9}$) -- the characteristics of the corresponding card. ## Гаралт In the first line print a single integer $k$ -- the minimum number of moves needed to cast the $n$-th spell and in the second line print $k$ numbers -- the indices of the cards in the order in which you should cast them. In case there are multiple possible solutions, print any of them. If it is impossible to cast the $n$-th spell, print $- 1$. Орчуулсан: [орчуулагдаж байгаа] #### Жишээ тэстүүд ##### Оролт 4 0 0 3 4 2 2 5 3 4 1 1 7 5 3 8 8 ##### Гаралт 3 1 2 4 ##### Оролт 2 0 0 4 6 5 1 1000000000 1000000000 ##### Гаралт -1 Сэтгэгдлүүдийг ачааллаж байна...
# Generating polydisperse spheres using RSA¶ [1]: import numpy as np import porespy as ps import matplotlib.pyplot as plt ps.visualization.set_mpl_style() np.random.seed(10) The RSA generator works differently than most of the other ones, in that it requires a pre-existing image be passed as an argument. The RSA function will then add spheres of the specified size to False locations of this received image until the total volume fraction of the image reaches the specified value. This workflow was used so that increasingly smaller spheres could be added to the image to create a polydisperse packing, or to insert spheres into an image of blobs for instance. [2]: im = np.zeros([200, 200]) im = ps.generators.RSA(im, r=15) im = ps.generators.RSA(im, r=10) [3]: fig, ax = plt.subplots() ax.imshow(im); [4]: im = ps.generators.blobs(shape=[200, 200]) im_with_spheres = ps.generators.RSA(im, r=6) fig, ax = plt.subplots() ax.imshow(im_with_spheres + 0.5*im);
# Is their a book/paper about mixed finite element method for engineering students (non-math)? There are a lot of books about FEM, which are really friendly to engineering students. Through these books, we can know how to use shape/test functions based on the variational principle. But I'd like to solve the mixed formulation of Poisson equations (i.e. Darcy equation), which reads $$\mathbf{u}=-k\nabla h$$ and $$\nabla \cdot \mathbf{u} = 0$$, where $$\mathbf{u}$$ is velocity, $$k$$ is a conductivity coefficient and $$h$$ is pressure. There are a number of papers suggesting that the mixed FEM should be used to solve the equations insteads of regular FEM, e.g. Garnadi, Agah D., and Corinna Bahriawati. "A mixed $$RT_0-P_0$$ Raviart-Thomas finite element implementation of Darcy Equation in GNU Octave." (2020). These works were finished by mathematicians, which present a lot of difficult concepts, like functional space, and mathematic symbols. The equations were always presented in a simple way that is difficult to understand. Also, by reading other similar papers, I found that they prefer to prove their mixed FEM method is sound theoretically in math. Although some papers give MATLAB codes, I still got confused because I cannot understand how to perform mixed FEM. Is there a book/paper friendly to engineering students? I mean, just tell me what is exact form of the test functions? how to deduce the final algebraic equations? how to address boundary conditions? and something like that. • Section 4.4 of Bathe's book, Finite Element Procedures, describes mixed methods and also shows the implementation for a simple truss element. Nov 15 '21 at 11:47 I think, one should distinguish two goals here: 1. Understand, at a superficial level, mixed FEMs. 2. Only concerned about implementing them. I will start with the second use case, as I feel from your question that this is what you are after. just tell me what is exact form of the test functions? how to deduce the final algebraic equations? how to address boundary conditions? These are implementation details. I recommend you to look at the source code of established, high-quality FEM frameworks, such as deal.ii or FEniCS. You can find many detailed tutorials in deal.ii, which are at an accessible level and in which the emphasis is on the practical implementation. For instance, Tutorial 20, and the corresponding video, discusses exactly your problem. On the other hand, if you want to develop a new mixed formulation, you must have understanding of the mathematics, you cannot avoid it. Having an engineering background, I faced similar issues as you: function spaces everywhere. I recommend you to have a firm grasp on non-mixed formulations first (i.e. standard FEM); there are many good tutorials on them. Then make a connection with mixed FEMs by understanding the main difficulty of mixed formulations, i.e. the careful choice of the discrete spaces. Note that proving that a mixed formulation is stable is difficult, and research papers are not the good source to have an intuition. There are books on mixed FEMs, but they are not for completely novices. I am afraid there is no shortcut here, you won't find a book/tutorial like "Learn mixed methods in 24 hours". • thanks a million! yes, I just want to know how to implement it first. then,I'll try to understand. Nov 16 '21 at 0:16 The following paper is a good starting point. It presents the general problem and why one would like to use one formulation or the other. Also, there is some discussion in how to choose stable mixed finite elements.
# Network unlocking without carrier or OEM intervention on MediaTek While I was on holiday in Hungary to stay with a friend, one of the ideas that came up was to go to a technology store to see the terrible quality products that would be around. There were a lot. Initially, it was the terrible performing Windows laptops and there was the common theme of demo software crashing in a loop. Meanwhile in the Apple section, things were far more sane. Wonder why was that. ## On the hunt Something that caught our eye was a rack full of around €30 smartphones. More specifically, a phone in a lime coloured box labeled with the carrier ‘Yettel’ (formerly ‘Telenor’). This phone was the ‘Navon SPT1100. Knowing how insecure very low budget MediaTek devices are, we went ahead and picked one up. ## Getting right into it There was a lot done, such as using MTK Client to make a full backup of the device and getting it rooted that way. For the scope of this blog post however, I will be focusing on one of the many findings. ## Unexpected hidden application While using Activity Launcher to find any hidden applications, something we had came across was an application named NCK (com.forest.simlock, 1.0). This was a basic network (un)lock tool that would ask for a network code key. This would offer 5 attempts. ## Unintentional network lock Now, this was a strange one because the phone should have not even been network locked in the first place. Yettel could not network unlock it themselves and would need to contact the OEM for that information. Dear Customer! Currently we don’t have Network Code Key available for [censored] IMEI number. We have started a request towards the manufacturer, as soon as we receive it we’ll notify you in another SMS. Regards: Yettel What does make this all the more interesting was that there were build.prop properties explicitly marking the device as network locked too. There would be a comment stating what the 1 and 0 values represent. ## We must go deeper I decided to take a look into the NCK application. I took the system image file that was extracted from the super partition (dynamic partition containing platform, vendor and system partitions – used for very low storage devices) and opened it up using 7-Zip. This was thanks to the aforementioned MTK Client and the extraction of the super partition image. I then looked into /system/app to find a directory named SimlockSecretCode_M4009Y. Within this was SimlockSecretCode_M4009Y.apk. Using 7-Zip, the classes.dex file was extracted from the APK file. This was then used with dex2jar by using the following command: ./d2j-dex2jar.bat -f -o SimlockSecretCode_M4009Y.jar .\SimlockSecretCode_M4009Y.apk The resulting SimlockSecretCode_M4009Y.jar was then imported into JD GUI. This was the point when the investigation really went down. ## Now, what do we have here? Having done some digging, there were a few discoveries: • The code attempt counter would be saved via the app’s own shared preferences, meaning that clearing the app data would be enough to reset the counter • The app can network lock the device using the same code What I was mainly looking for involves the network unlock code. What would be determined to be valid would be done entirely offline and would be calculated based on the device’s own information. ## It does not get any better I noticed how the app was setting a preference to the device’s settings app regarding the unlock status (slu_unlocked). Shortly after, that it would send a broadcast to the MTK Engineering mode application to send an AT+EGMR command to the modem. The device would then be rebooted. This turned out to be far less of an effort than expected. Allow for me to explain. ## The engineer’s key Under the SimLockUnlockFragment class, there was a method named isRightPasswords, of which had a string parameter. This would be the user provided code. This check goes through multiple methods of what would end up being seen as a valid network unlock code to then be checked against the data that the user had provided. One of the conditions was however slightly different from the others. One of the conditions was checking the user provided data against a hardcoded string. This hardcoded string was 20150327. If this matches, it would return true. Otherwise, it would fall back to method 3 as the final method to validate against. After entering the hardcoded string as the network unlock code, this successfully network unlocked the device. The phone no longer refused SIM cards from other networks. ## Using GSIs, custom ROMs & persistence If you manage to get a GSI running on a device (like with the device used in this blog post after many partition modification attempts) or a custom ROM, you may end up with the network lock code prompt. In the case of the device that was tested in this blog post, the same hardcoded network code key worked. From further testing, it appears that on reboot, the network code key does get prompted for again, as if it was a temporary code that you could use again. In contrast, the stock ROM didn’t do this (perhaps slu_unlocked might be causing it to skip that keyguard persistently until set to 0). The solution to keep it truly network unlocked is to have the real network code key and to use it through the Android network code key keyguard rather than the NCK application. So this does at least look like this is more of a temporary solution once you step out of the stock ROM. My suspicion is that NCK is an engineering tool that bypasses the normal network code key check and that it intends to allow valid not-so-persistent codes to be persistent-like on the ROM that it’s configured for. At least, it would make sense for it to be made for testing purposes. Can’t say the same about how others might use it though. ## Conclusion When purchasing a very low end device, let alone one utilising a MediaTek SoC, there should be no expectation of security. From (the lack of) software updates to the very little low level security measures. They can even end up using IMEI numbers that has a completely different Chinese OEM’s product tied to it, as it so happened with the device tested here. The engineering software on such devices tend to not have much effort put into protecting themselves against unintended users and can damage the software or potentially hardware if misused. If you have no full backup, the device might be very much screwed if there is nothing available online to help get it back up and running. That said, this sort of find was disappointing but not surprising. Sure, it allows the user to kind of have the freedom they should’ve had in the first place in situations such as this. But this also can be misused. Making it almost effortless to find a working code, let alone a static one, is not a great thing. There are other issues that would likely make it possible to perform the same task, but this one is very low hanging fruit. This is not the sort of tool that should be left around in release builds, especially the functionality that it ties into within the MTK Engineering Mode` application. Needless to say, if your device is not supposed to be network unlocked yet (i.e. contract), please refrain from doing so. At the end of the day, like the hardware, it’s the sort of thing that is just thrown together cheap without much consideration for user experience and how hardened the device should be. It just has to do the bare minimum. If you want a device to play around with and to get familiar with the world of old & low end MediaTek SoCs, it’s a nice little playground. Just make sure to do a full backup before doing anything else. The device in this case has some test and engineering tools scattered around and you can recover all or individual partitions should something break. USB 2.0 must be used when using the MTK Client tool or there will be connection drop-outs.
# How do you write log_2 y=5 in exponential form? ${\log}_{a} \left(b\right) = c \to {a}^{c} = b$ ${\log}_{2} \left(y\right) = 5 \to {2}^{5} = y \implies y = 32$ ${2}^{5} = y$ or ${2}^{5} = 32$.
# Has anyone done any thumbnail computations as to how many US Federal (& Military & State) Civil Serv Discussion in 'Recreational Math' started by Max Power, Dec 5, 2006. 1. ### Max PowerGuest Has anyone done any thumbnail computations as to how many US Federal (& Military & State) Civil Servants are collecting double and triple pensions (and holding more than one government job at a time)? It has to be costing the US taxpayer a minimum of 100,000,000,000 USD/YEAR (moving average, ~3yr)... * Yes, I wrote $100 Billion USD ($100 Thousand Millions USD), as in global finance. * Not X Billion = X 'Million Millions' as in pure math. I understand that double and triple dipping is common as one goes up the ranks in the US Federal and the state's civil servant bureaucracy. * If you don't have 3 or 4 properly placed relatives in government jobs, federal employment is impossible to get in the US. * People with relatives in positions of power in the private sector trade jobs with each others relatives all the time in the US, but it is not labeled corruption. Max Power, Dec 5, 2006 2. ### William ElliotGuest The only humble servants that I've know to do that are big wig politicians. William Elliot, Dec 5, 2006 3. ### FelicisGuest Howdy from a former federal employee (Army): Not I - but it seems you have - what numbers did you use? [To get: That's a lot- and, if it were correct, would be reason for some righteous anger. But is it correct? to hold more than one position - but I think federal law prevents taking more than one paycheck- and pensions can only be 'double-dipped' under a fairly specific set of circumstances (again, by federal law) - for example; a vet with a medical pension can hold a federal job and collect his pension and his pay- and eventually both pensions. (Or so I have heard- I don't know from personal experience). Join the military, peace corps, foreign service or any of the dozens of other jobs available - your statement is simply incorrect. It is called nepotism and is not illegal in the private sector, but it is certainly looked askance at in federal service (with the exception of political appointees - in which case it is generally expected, but those positions are rarely held beyond the term of the administration in which they begin- so no opportunity for a pension. They are also rarely 'double-dip' positions in that politicians passing them out have a lot of favors to pay back and wouldn't waste two appointments on a single person). I will say that I have just spent a fruitless half-hour trying to find even an estimate for how much the federal pension system pays out every year. However- given 5 million employees with an average income of around 40k, the federal payroll is probably around 200 billion per year. I can see the *entire* pension costing about half that, but to claim that it is all wasted seems a bit extreme to me. (Not that it *isn't*, I just want to see a little more of your reasoning). In the meantime, I am going to fire off an email to my representative and see what he has to say. cheers- Eric Felicis, Dec 5, 2006
# Time invariant or time varying? Discussion in 'Homework Help' started by Zazoo, Sep 20, 2011. 1. ### Zazoo Thread Starter Member Jul 27, 2011 114 43 The problem given in my textbook is: y(t) = x(t-5) - x(3-t) Show that this system is time-invariant. Um, isn't this system time varying? The second term involved a time inversion, doesn't this make it time varying? e.g. g(3-t) = g(-(t-3)) That is: Let x1(t) = g(t), Then y1(t) = g(t-5) - g(3-t) Let x2(t) = g(t-t0), Then y2(t) = g(t-t0-5) - g(3-t-t0) y1(t-t0) = g((t-t0)-5) - g(3-(t-t0)) = g(t-t0-5) - g(3-t+t0) ≠ y2(t) Thus the system is time-varying. Am I wrong or is the book? Thanks 2. ### steveb Senior Member Jul 3, 2008 2,433 469 You just have a slight algebra error in calculating y2. It should be as follows. e.g. g(3-t) = g(-(t-3)) That is: Let x1(t) = g(t), Then y1(t) = g(t-5) - g(3-t) Let x2(t) = g(t-t0), Then y2(t) = g(t-t0-5) - g(3-t+t0) y1(t-t0) = g((t-t0)-5) - g(3-(t-t0)) = g(t-t0-5) - g(3-t+t0) = y2(t) 3. ### Zazoo Thread Starter Member Jul 27, 2011 114 43 I did this on purpose because my understanding is that when placing the delay before the transfer function, any time-scaling/time-inversion should be applied to t only (and not to the shift t0). For example, in another example the function is given: y(t) = x(t/2) With the delay before the transfer function, e.g. x(t-t0), they give y(t) = x(t/2 - t0) If the delay is after the transfer function, then y(t-t0) = x((t-t0)/2) Thus the system is time-variant. Am I wrong in this interpretation? 4. ### steveb Senior Member Jul 3, 2008 2,433 469 Yes, if i'm understanding you correctly, you are wrong in this interpretation. You are interested in redefining the time variable by adding a shift. There are two ways to do this. You can shift this input function and run it through the system. Or, you can run the unshifted function through the system and shift the output. In a time invariant system these two methods will agree. The substitution is t=t'-t0, where t0 is the time shift. 5. ### Zazoo Thread Starter Member Jul 27, 2011 114 43 Ok, can you help me spot the flaw in my reasoning here: I picked a fairly simple, arbitrary piecewise function for x(t), just to have some real numbers to play with: $x(t)=\begin{cases}2 & : -1 < t < 2\\1 & : 2 < t < 4\end{cases}$ Applying a delay/shift of 2 before the transfer function: $g(t) = x(t-2) =\begin{cases}2 & : 1 < t < 4\\1 & : 4 < t < 6\end{cases}$ Then running it through a function like: y(t) = g(3-t) gives: $y(t) =\begin{cases}2 & : -1 < t < 2\\1 & : -3 < t < -1\end{cases}$ Now, this is clearly a different function than that obtained by a time delay/shift after the transfer function: $y(t-2) = x(3-(t-2)) =\begin{cases}2 & : 3 < t < 6\\1 & : 1 < t < 3\end{cases}$ Suggesting to me that this system, y(t) = x(3-t), is time-variant. I'm not seeing where I am making my mistake and I'm really confused. Thank you, 6. ### steveb Senior Member Jul 3, 2008 2,433 469 It is confusing. You know, I get in more trouble trying to answer confusing questions at 2:00 AM in the morning. I see your point. Forgetting the math, just think about what time reversal does. A signal delay on the input, drives the output further into the past. So, this is anti-time-invariant, if I can make up my own terminology. This type of system is also non-causal, so we don't see them too often. I'm going to do a Google search later just to verify with an authority, but right now I agree with you. Last edited: Sep 21, 2011 7. ### steveb Senior Member Jul 3, 2008 2,433 469 So, consultation of some text books and a Google search did not reveal a definitive statement that time reversal implies a time varying system, but I still think you are correct. To make it more concrete, let's try to look at a simple time reversal system y(t)=x(-t), and put it in a form that makes the time dependence of the system itself more obvious. This is actually quite simple as follows. y(t)=x(t-t0) where t0=2t Now compare this to a transmision line whose length is varying over time sinusoidally (maybe it's stretching and compressing due to oscillatory movement). y(t)=x(t-t0) where t0=a+b*sin(wt), where "a" is a constant and "b" is a much smaller constant When expressed in this way, it's clear that both systems take the input and delay it by a time delay which is itself time dependent. Hence, both systems are time dependent systems. The case of the transmission line is a causal system that can be built, but the pure time reversal case is noncausal because at negative values of time, the system depends on future values of the input signal. However, you could modify the time reversal to create a causal system as follow. y(t)=x(t-t0) where t0=2t for t>0 and t0=0 for t<=0 This would be like a transmission line that continually stretches in length, hence causing greater and greater delay as time moved forward. Zazoo likes this. 8. ### Zazoo Thread Starter Member Jul 27, 2011 114 43 Thank you for your responses steveb, I'm starting to see it clearer now, this has really helped me out. 9. ### waterineyes New Member Sep 6, 2014 9 0 This system must be Time-Variant, as x(3-t) is making it Time-variant.. You must know that y(t) = x(-t) is a Time-variant system, so y(t) = x(3-t) would certainly be a Time varying system.. y_1(t) = x_1(t) = x(t - k - 5) - x(3 -t - k) (Here, Input is shifted or delayed to (t - k)) Shifting y(t) to y(t - k), y_2(t) = y(t - k) = x(t - k - 5) - x(3 - t + k) So, in y_1(t) and y_2(t), there is just a difference of sign in front of "k".. So, y_1(t) is not equal to y_2(t). So, this system must be Time-Variant.. @Zazoo, which book are you following where this system is given as Time-Invariant?? May I know that?? Last edited by a moderator: Sep 6, 2014 10. ### appsblue New Member Sep 8, 2014 6 0 In my textbook it is given that y[n]=x[2n] is a time invariant system. but when i tried solving it i got it time varient. I am really confused. can any one help me please 11. ### waterineyes New Member Sep 6, 2014 9 0 Let me try it : y_1[n] = x_1[2n] = x[2n - k] y_2[n] = y[n - k] = x[2n - 2k], that means : y_1[n] is not equal to y_2[n] Absolutely, it is Time-Variant system.. Which book you are studying? May I know its name?? 12. ### appsblue New Member Sep 8, 2014 6 0 It Is just a local auther's book I refered it for the sum for practice Sep 8, 2014 6 0 14. ### appsblue New Member Sep 8, 2014 6 0 I It is than first the signal is given delay then passed through the system. Then the signal is passed through the system and than given delay .and now both the final outputs are compared Do I have got the right concept ?? 15. ### irtaza_anwar New Member Apr 19, 2015 1 0 is x(t/3) time invariant
# IQMs for functional images¶ ## Measures for the structural information¶ Definitions are given in the summary of structural IQMs. • Full-width half maximum smoothness (fwhm_*). ## Measures for the temporal information¶ • DVARS - D referring to temporal derivative of timecourses, VARS referring to RMS variance over voxels ([Power2012] dvars_nstd) indexes the rate of change of BOLD signal across the entire brain at each frame of data. DVARS is calculated with nipype after motion correction: $\text{DVARS}_t = \sqrt{\frac{1}{N}\sum_i \left[x_{i,t} - x_{i,t-1}\right]^2}$ Note Intensities are scaled to 1000 leading to the units being expressed in x10 $$\%\Delta\text{BOLD}$$ change. Note MRIQC calculates two additional standardized values of the DVARS. The dvars_std metric is normalized with the standard deviation of the temporal difference time series. The dvars_vstd is a voxel-wise standardization of DVARS, where the temporal difference time series is normalized across time by that voxel standard deviation across time, before computing the RMS of the temporal difference [Nichols2013]. • Global Correlation (gcor) calculates an optimized summary of time-series correlation as in [Saad2013] using AFNI’s @compute_gcor: $\text{GCOR} = \frac{1}{N}\mathbf{g}_u^T\mathbf{g}_u$ where $$\mathbf{g}_u$$ is the average of all unit-variance time series in a $$T$$ (# timepoints) $$\times$$ $$N$$ (# voxels) matrix. • Temporal SNR (tSNR, tsnr) is a simplified interpretation of the tSNR definition [Kruger2001]. We report the median value of the tSNR map calculated like: $\text{tSNR} = \frac{\langle S \rangle_t}{\sigma_t},$ where $$\langle S \rangle_t$$ is the average BOLD signal (across time), and $$\sigma_t$$ is the corresponding temporal standard-deviation map. ## Measures for artifacts and other¶ • Framewise Displacement: expresses instantaneous head-motion. MRIQC reports the average FD, labeled as fd_mean. Rotational displacements are calculated as the displacement on the surface of a sphere of radius 50 mm [Power2012]: $\text{FD}_t = |\Delta d_{x,t}| + |\Delta d_{y,t}| + |\Delta d_{z,t}| + |\Delta \alpha_t| + |\Delta \beta_t| + |\Delta \gamma_t|$ Along with the base framewise displacement, MRIQC reports the number of timepoints above FD threshold (fd_num), and the percent of FDs above the FD threshold w.r.t. the full timeseries (fd_perc). In both cases, the threshold is set at 0.20mm. • Ghost to Signal Ratio (gsr(), labeled in the reports as gsr_x and gsr_y): along the two possible phase-encoding axes x, y: $\text{GSR} = \frac{\mu_G - \mu_{NG}}{\mu_S}$ • AFNI’s outlier ratio (aor) - Mean fraction of outliers per fMRI volume as given by AFNI’s 3dToutcount. • AFNI’s quality index (aqi) - Mean quality index as computed by AFNI’s 3dTqual. • Number of *dummy* scans (dummy) - A number of volumes in the begining of the fMRI timeseries identified as non-steady state. References [Atkinson1997] Atkinson et al., Automatic correction of motion artifacts in magnetic resonance images using an entropy focus criterion, IEEE Trans Med Imag 16(6):903-910, 1997. doi:10.1109/42.650886. [Friedman2008] Friedman, L et al., Test–retest and between‐site reliability in a multicenter fMRI study. Hum Brain Mapp, 29(8):958–972, 2008. doi:10.1002/hbm.20440. [Giannelli2010] Giannelli et al., Characterization of Nyquist ghost in EPI-fMRI acquisition sequences implemented on two clinical 1.5 T MR scanner systems: effect of readout bandwidth and echo spacing. J App Clin Med Phy, 11(4). 2010. doi:10.1120/jacmp.v11i4.3237. [Jenkinson2002] Jenkinson et al., Improved Optimisation for the Robust and Accurate Linear Registration and Motion Correction of Brain Images. NeuroImage, 17(2), 825-841, 2002. doi:10.1006/nimg.2002.1132. [Kruger2001] Krüger et al., Physiological noise in oxygenation-sensitive magnetic resonance imaging, Magn. Reson. Med. 46(4):631-637, 2001. doi:10.1002/mrm.1240. [Nichols2013] Nichols, Notes on Creating a Standardized Version of DVARS, 2013. [Power2012] (1, 2) Power et al., Spurious but systematic correlations in functional connectivity MRI networks arise from subject motion, NeuroImage 59(3):2142-2154, 2012, doi:10.1016/j.neuroimage.2011.10.018. [Saad2013] Saad et al. Correcting Brain-Wide Correlation Differences in Resting-State FMRI, Brain Conn 3(4):339-352, 2013, doi:10.1089/brain.2013.0156. ## mriqc.qc.functional module¶ mriqc.qc.functional.gsr(epi_data, mask, direction='y', ref_file=None, out_file=None)[source] Computes the GSR [Giannelli2010]. The procedure is as follows: 1. Create a Nyquist ghost mask by circle-shifting the original mask by $$N/2$$. 2. Rotate by $$N/2$$ 3. Remove the intersection with the original mask 4. Generate a non-ghost background 5. Calculate the GSR Warning This should be used with EPI images for which the phase encoding direction is known. Parameters: epi_file (str) – path to epi file mask_file (str) – path to brain mask direction (str) – the direction of phase encoding (x, y, all) the computed gsr
# Is it feasible to release a program in a details language? I'm a programmer, and also my key language is French, so I make use of Mac OS X in French. Nonetheless, I require often to open an application in English to do assistance. Now I'm mosting likely to the International System Preferences and also I placed English over in the languages checklist, after that I open the application I require to run in English. When I'm done, I change it back to French. This is an aggravating procedure. Exists another thing I can make use of, like a command-line program, to release an application in a details language? 0 2019-05-04 06:28:48 Source Share Go to the application, Press Command + I and also if there are various other languages they will certainly turn up in the details web page. Simply untick all other than the language you desire 0 2019-05-08 00:44:56 Source You can transform the language inside the choices documents of the application : defaults write com.apple.TextEdit AppleLanguages '("en-US")' . Or simply run as soon as one application with an additional language : /Applications/iCal.app/Contents/MacOS/iCal -AppleLanguages '(de)' . To establish the package identifier, run mdls -name kMDItemCFBundleIdentifier /Applications/Mail.app . or straight in one command : defaults write \$(mdls -name kMDItemCFBundleIdentifier -raw /Applications/Mail.app) AppleLanguages '("en-UK")' . (using SuperUser ) 0 2019-05-07 20:43:29 Source There is a free software, Language Switcher, to release a solitary application with a various languages. it's actually straightforward and also job extremely. 0 2019-05-07 20:36:26 Source
2016-03-01 22:15:17 +0200 received badge ● Famous Question (source) 2014-08-11 10:20:24 +0200 received badge ● Notable Question (source) 2014-08-11 00:07:08 +0200 received badge ● Popular Question (source) 2014-08-10 20:12:56 +0200 received badge ● Famous Question (source) 2014-08-10 20:12:28 +0200 asked a question Site: Why is openID redirecting to Ubuntu One? I used to post here not a month ago, using my openID (via Launchpad). If I try to log in again now, I am simply redirected to a login page for Ubuntu One. => Does this have anything to do with AskLibO, or should Launchpad be doing something funny (which it has at least not announced anything in that direction. The only way for me to log in now is to recover my account ... not comfortable. 2014-05-31 16:24:50 +0200 commented answer Preserve text and vector images when printing to file Tesseract is a back-end for OCR software, and it seems to be made to recognize characters in raster images. One software that builds on Tesseract is gOCR, but that requires images and will produce text files -- I have no images but a PDF file with glyphs, and I don't want a text file but a new PDF file with proper (searchable) characters. 2014-05-31 15:11:20 +0200 commented answer Preserve text and vector images when printing to file There have been bug reports and feature requests since OpenOffice 2.x, along with promises of features. I have more or less given up hope by now. I used to have an openoffice.org account, I made a launchpad account to submit the same thing for LibreOffice, and now I'd have to make a bugzilla account for LO, while my hope keeps fading ... I've had lots of long discussions on mailing lists (and I hate mailing lists!) three years ago, and since then I've just used the workaround. It's easier. 2014-05-28 12:55:34 +0200 received badge ● Notable Question (source) 2014-05-27 15:42:53 +0200 received badge ● Popular Question (source) 2014-05-26 16:24:01 +0200 answered a question How do I edit Draw graphics included in a Writer document? Not quite sure if this will apply, but I had a similar problem and solved it by copying the drawing into LO draw, editing it, copying the new esult into the same frame, after deleting the old version. Since all references go to the "1" in "Drawing 1", they're not affected. The fiddly bit is to align the new drawing in the old frame, but that's solvable. Usually, though, I circumvent this by using .eps files: export as eps from draw, then embed that in writer, using a link, not copying. Thus, when you update the file and reload the writer file or update dependencies, it will pull in the new version. EPS, of course, has other problems (prints fine to PS printers, but hard to convert to PDF without rasterizing, is also not displayed well within Writer). Maybe these days using SVG might be a better idea, depending on what the contents of that drawing are. Might be worth a try. 2014-05-26 16:17:06 +0200 asked a question Preserve text and vector images when printing to file I've got a rather roundabout process (in LO 4.1 but used since 3.x) to have EPS images be preserved as vector images in a PDF: Print to PS file and then convert to PDF using ps2pdf -dAutoFilterColorImages=false -dColorImageFilter=/FlateEncode -- the regular PDF-export cannot be convinced not to convert them to nicely anti-aliased raster images (whose resolution I have no control over) that of course look crappy in print. So far so good, but now (with printing already underway) I realized that text in the PDFs (already in the PS files) has been turned into vector objects and can neither be marked and copied nor searched for. This is only true for some fonts (Bitstream Vera becomes vector drawings, Linux Biolinum and Helvetia don't!). Sadly, I can't change the font types now as that would mess up the layout. => Does anyone know a way to keep EPS images as vector objects, PNG images as uncompressed bitmaps and characters as characters? Or is there a way to add the lost information to the PDF after the fact? Some sort of OCR process that works on vector images instead of scanned bitmaps? 2014-05-26 15:55:50 +0200 answered a question Wie kann ich eine defekte Datei retten? Deine Frage ist ein wenig unspezifisch, aber ich versuch's mal: ODT-Dateien sind Zip-Archive von Verzeichnissen, die XML-Dateien und andere Inhalte enthalten. Du kannst also einfach das Komprimierprogramm Deines Vertrauens benutzen, um sie zu entpacken und Dir den Inhalt anzusehen. Da drin findest Du dann eingefügte Bilder meist als .png-Dateien, und den Text-Inhalt in der Datei content.xml Das beste Programm, mit dem man sich das angucken kann, ist für mich Opera, zum Editieren gibt es einige Dinge, aber LibreOffice selbst scheint mir der einzige Editor zu sein, der mit der idiotischen Zeilenlänge zurechtkommt, die die Dateien intern verwenden... (.d.h.: Mit anderen XML-Editoren kannst Du Dir den Inhalt ansehen, aber wenn du anschließend speicherst, ist sie meist nicht mehr gültig...) In der Regel ist das beste, was man als nicht-Experte hinbekommen kann (und dazu zähle ich mich selbt), den Text und eventuelle Bilder rauszukopieren und in ein frisches Dokument einzufügen. ... hoffe, dass das hilft.
## Skills Review for Sequences ### Learning Outcomes • Write the terms of a sequence defined by a recursive formula • Calculate the limit of a function as 𝑥 increases or decreases without bound • Recognize when to apply L’Hôpital’s rule In the Sequences section, we will look at ordered lists of numbers (sequences) and determine whether they converge or diverge. Here we will review how to evaluate a recursive (recurrence) formula, take limits at infinity, and apply L’Hôpital’s Rule. ## Use a Recursive Formula A recursive formula is a formula that defines its value at a particular input using the result of the previous input(s). A recursive formula always has two parts: the value of an initial input and an equation defining each term in terms of preceding terms. For example, suppose we know the following: \begin{align}&{a}_{1}=3 \\ &{a}_{n}=2{a}_{n - 1}-1, \text{ for } n\ge 2 \end{align} We can find the subsequent terms of the recursive formula using the first term. \begin{align}&{a}_{1}=3\\ &{a}_{2}=2{a}_{1}-1=2\left(3\right)-1=5\\ &{a}_{3}=2{a}_{2}-1=2\left(5\right)-1=9\\ &{a}_{4}=2{a}_{3}-1=2\left(9\right)-1=17\end{align} So, the first four terms are $3,5,9,\text{ and},17$. ### How To: Given a recursive formula with only the first term provided, write the first $n$ terms of a sequence. 1. Identify the initial term, ${a}_{1}$, which is given as part of the formula. This is the first term. 2. To find the second term, ${a}_{2}$, substitute the initial term into the formula for ${a}_{n - 1}$. Solve. 3. To find the third term, ${a}_{3}$, substitute the second term into the formula. Solve. 4. Repeat until you have solved for the $n\text{th}$ term. ### Example: Writing the Terms of a Sequence Defined by a Recursive Formula Write the first five terms of the sequence defined by the recursive formula. \begin{align} {a}_{1}&=9 \\ {a}_{n}&=3{a}_{n - 1}-20\text{, for }n\ge 2 \end{align} ### Try It Write the first five terms of the sequence defined by the recursive formula. \begin{align}{a}_{1}&=2\\ {a}_{n}&=2{a}_{n - 1}+1\text{, for }n\ge 2\end{align} ## Take Limits at Infinity Recall that $\underset{x \to a}{\lim}f(x)=L$ means $f(x)$ becomes arbitrarily close to $L$ as long as $x$ is sufficiently close to $a$. We can extend this idea to limits at infinity. For example, consider the function $f(x)=2+\frac{1}{x}$. As can be seen graphically in Figure 1 and numerically in the table beneath it, as the values of $x$ get larger, the values of $f(x)$ approach 2. We say the limit as $x$ approaches $\infty$ of $f(x)$ is 2 and write $\underset{x\to \infty }{\lim}f(x)=2$. Similarly, for $x<0$, as the values $|x|$ get larger, the values of $f(x)$ approaches 2. We say the limit as $x$ approaches $−\infty$ of $f(x)$ is 2 and write $\underset{x\to a}{\lim}f(x)=2$. Figure 1. The function approaches the asymptote $y=2$ as $x$ approaches $\pm \infty$. $x$ 10 100 1,000 10,000 $2+\frac{1}{x}$ 2.1 2.01 2.001 2.0001 $x$ -10 -100 -1000 -10,000 $2+\frac{1}{x}$ 1.9 1.99 1.999 1.9999 More generally, for any function $f$, we say the limit as $x \to \infty$ of $f(x)$ is $L$ if $f(x)$ becomes arbitrarily close to $L$ as long as $x$ is sufficiently large. In that case, we write $\underset{x\to \infty}{\lim}f(x)=L$. Similarly, we say the limit as $x\to −\infty$ of $f(x)$ is $L$ if $f(x)$ becomes arbitrarily close to $L$ as long as $x<0$ and $|x|$ is sufficiently large. In that case, we write $\underset{x\to −\infty }{\lim}f(x)=L$. We now look at the definition of a function having a limit at infinity. ### Definition (Informal) If the values of $f(x)$ become arbitrarily close to $L$ as $x$ becomes sufficiently large, we say the function $f$ has a limit at infinity and write $\underset{x\to \infty }{\lim}f(x)=L$ If the values of $f(x)$ becomes arbitrarily close to $L$ for $x<0$ as $|x|$ becomes sufficiently large, we say that the function $f$ has a limit at negative infinity and write $\underset{x\to -\infty }{\lim}f(x)=L$ If the values $f(x)$ are getting arbitrarily close to some finite value $L$ as $x\to \infty$ or $x\to −\infty$, the graph of $f$ approaches the line $y=L$. In that case, the line $y=L$ is a horizontal asymptote of $f$ (Figure 2). For example, for the function $f(x)=\frac{1}{x}$, since $\underset{x\to \infty }{\lim}f(x)=0$, the line $y=0$ is a horizontal asymptote of $f(x)=\frac{1}{x}$. ### Definition If $\underset{x\to \infty }{\lim}f(x)=L$ or $\underset{x \to −\infty}{\lim}f(x)=L$, we say the line $y=L$ is a horizontal asymptote of $f$. Figure 2. (a) As $x\to \infty$, the values of $f$ are getting arbitrarily close to $L$. The line $y=L$ is a horizontal asymptote of $f$. (b) As $x\to −\infty$, the values of $f$ are getting arbitrarily close to $M$. The line $y=M$ is a horizontal asymptote of $f$. A function cannot cross a vertical asymptote because the graph must approach infinity (or negative infinity) from at least one direction as $x$ approaches the vertical asymptote. However, a function may cross a horizontal asymptote. In fact, a function may cross a horizontal asymptote an unlimited number of times. For example, the function $f(x)=\frac{ \cos x}{x}+1$ shown in Figure 3 intersects the horizontal asymptote $y=1$ an infinite number of times as it oscillates around the asymptote with ever-decreasing amplitude. Figure 3. The graph of $f(x)=\cos x/x+1$ crosses its horizontal asymptote $y=1$ an infinite number of times. ### Example: Computing Limits at Infinity For each of the following functions $f$, evaluate $\underset{x\to \infty }{\lim}f(x)$ and $\underset{x\to −\infty }{\lim}f(x)$. 1. $f(x)=5-\frac{2}{x^2}$ 2. $f(x)=\dfrac{\sin x}{x}$ 3. $f(x)= \tan^{-1} (x)$ ### Try It Evaluate $\underset{x\to −\infty}{\lim}\left(3+\frac{4}{x}\right)$ and $\underset{x\to \infty }{\lim}\left(3+\frac{4}{x}\right)$. Determine the horizontal asymptotes of $f(x)=3+\frac{4}{x}$, if any. ## Infinite Limits at Infinity Sometimes the values of a function $f$ become arbitrarily large as $x\to \infty$ (or as $x\to −\infty )$. In this case, we write $\underset{x\to \infty }{\lim}f(x)=\infty$ (or $\underset{x\to −\infty }{\lim}f(x)=\infty )$. On the other hand, if the values of $f$ are negative but become arbitrarily large in magnitude as $x\to \infty$ (or as $x\to −\infty )$, we write $\underset{x\to \infty }{\lim}f(x)=−\infty$ (or $\underset{x\to −\infty }{\lim}f(x)=−\infty )$. For example, consider the function $f(x)=x^3$. As seen in the table below and Figure 2, as $x\to \infty$ the values $f(x)$ become arbitrarily large. Therefore, $\underset{x\to \infty }{\lim}x^3=\infty$. On the other hand, as $x\to −\infty$, the values of $f(x)=x^3$ are negative but become arbitrarily large in magnitude. Consequently, $\underset{x\to −\infty }{\lim}x^3=−\infty$. $x$ 10 20 50 100 1000 $x^3$ 1000 8000 125,000 1,000,000 1,000,000,000 $x$ -10 -20 -50 -100 -1000 $x^3$ -1000 -8000 -125,000 -1,000,000 -1,000,000,000 Figure 2. For this function, the functional values approach infinity as $x\to \pm \infty$. ### Definition (Informal) We say a function $f$ has an infinite limit at infinity and write $\underset{x\to \infty }{\lim}f(x)=\infty$ if $f(x)$ becomes arbitrarily large for $x$ sufficiently large. We say a function has a negative infinite limit at infinity and write $\underset{x\to \infty }{\lim}f(x)=−\infty$ if $f(x)<0$ and $|f(x)|$ becomes arbitrarily large for $x$ sufficiently large. Similarly, we can define infinite limits as $x\to −\infty$. ### Try It Find $\underset{x\to \infty }{\lim}3x^2$. ## Apply L’Hôpital’s Rule L’Hôpital’s rule can be used to evaluate limits involving the quotient of two functions. Consider $\underset{x\to a}{\lim}\dfrac{f(x)}{g(x)}$ If $\underset{x\to a}{\lim}f(x)=L_1$ and $\underset{x\to a}{\lim}g(x)=L_2 \ne 0$, then $\underset{x\to a}{\lim}\dfrac{f(x)}{g(x)}=\dfrac{L_1}{L_2}$ However, what happens if $\underset{x\to a}{\lim}f(x)=0$ and $\underset{x\to a}{\lim}g(x)=0$? We call this one of the indeterminate forms, of type $\frac{0}{0}$. This is considered an indeterminate form because we cannot determine the exact behavior of $\frac{f(x)}{g(x)}$ as $x\to a$ without further analysis. We have seen examples of this earlier in the text. For example, consider $\underset{x\to 2}{\lim}\dfrac{x^2-4}{x-2}$ and $\underset{x\to 0}{\lim}\dfrac{ \sin x}{x}$ For the first of these examples, we can evaluate the limit by factoring the numerator and writing $\underset{x\to 2}{\lim}\dfrac{x^2-4}{x-2}=\underset{x\to 2}{\lim}\dfrac{(x+2)(x-2)}{x-2}=\underset{x\to 2}{\lim}(x+2)=2+2=4$ For $\underset{x\to 0}{\lim}\frac{\sin x}{x}$ we were able to show, using a geometric argument, that $\underset{x\to 0}{\lim}\dfrac{\sin x}{x}=1$ Here we use a different technique for evaluating limits such as these. Not only does this technique provide an easier way to evaluate these limits, but also, and more important, it provides us with a way to evaluate many other limits that we could not calculate previously. The idea behind L’Hôpital’s rule can be explained using local linear approximations. Consider two differentiable functions $f$ and $g$ such that $\underset{x\to a}{\lim}f(x)=0=\underset{x\to a}{\lim}g(x)$ and such that $g^{\prime}(a)\ne 0$ For $x$ near $a$, we can write $f(x)\approx f(a)+f^{\prime}(a)(x-a)$ and $g(x)\approx g(a)+g^{\prime}(a)(x-a)$ Therefore, $\dfrac{f(x)}{g(x)}\approx \dfrac{f(a)+f^{\prime}(a)(x-a)}{g(a)+g^{\prime}(a)(x-a)}$ Figure 1. If $\underset{x\to a}{\lim}f(x)=\underset{x\to a}{\lim}g(x)$, then the ratio $f(x)/g(x)$ is approximately equal to the ratio of their linear approximations near $a$. Since $f$ is differentiable at $a$, then $f$ is continuous at $a$, and therefore $f(a)=\underset{x\to a}{\lim}f(x)=0$. Similarly, $g(a)=\underset{x\to a}{\lim}g(x)=0$. If we also assume that $f^{\prime}$ and $g^{\prime}$ are continuous at $x=a$, then $f^{\prime}(a)=\underset{x\to a}{\lim}f^{\prime}(x)$ and $g^{\prime}(a)=\underset{x\to a}{\lim}g^{\prime}(x)$. Using these ideas, we conclude that $\underset{x\to a}{\lim}\dfrac{f(x)}{g(x)}=\underset{x\to a}{\lim}\dfrac{f^{\prime}(x)(x-a)}{g^{\prime}(x)(x-a)}=\underset{x\to a}{\lim}\dfrac{f^{\prime}(x)}{g^{\prime}(x)}$ Note that the assumption that $f^{\prime}$ and $g^{\prime}$ are continuous at $a$ and $g^{\prime}(a)\ne 0$ can be loosened. We state L’Hôpital’s rule formally for the indeterminate form $\frac{0}{0}$. Also note that the notation $\frac{0}{0}$ does not mean we are actually dividing zero by zero. Rather, we are using the notation $\frac{0}{0}$ to represent a quotient of limits, each of which is zero. ### L’Hôpital’s Rule (0/0 Case) Suppose $f$ and $g$ are differentiable functions over an open interval containing $a$, except possibly at $a$. If $\underset{x\to a}{\lim}f(x)=0$ and $\underset{x\to a}{\lim}g(x)=0$, then $\underset{x\to a}{\lim}\dfrac{f(x)}{g(x)}=\underset{x\to a}{\lim}\dfrac{f^{\prime}(x)}{g^{\prime}(x)}$, assuming the limit on the right exists or is $\infty$ or $−\infty$. This result also holds if we are considering one-sided limits, or if $a=\infty$ or $-\infty$. ### Example: Applying L’Hôpital’s Rule (0/0 Case) Evaluate each of the following limits by applying L’Hôpital’s rule. 1. $\underset{x\to 0}{\lim}\dfrac{1- \cos x}{x}$ 2. $\underset{x\to 1}{\lim}\dfrac{\sin (\pi x)}{\ln x}$ 3. $\underset{x\to \infty }{\lim}\dfrac{e^{\frac{1}{x}}-1}{\frac{1}{x}}$ 4. $\underset{x\to 0}{\lim}\dfrac{\sin x-x}{x^2}$ ### Try It Evaluate $\underset{x\to 0}{\lim}\dfrac{x}{\tan x}$. We can also use L’Hôpital’s rule to evaluate limits of quotients $\frac{f(x)}{g(x)}$ in which $f(x)\to \pm \infty$ and $g(x)\to \pm \infty$. Limits of this form are classified as indeterminate forms of type $\infty / \infty$. Again, note that we are not actually dividing $\infty$ by $\infty$. Since $\infty$ is not a real number, that is impossible; rather, $\infty / \infty$ is used to represent a quotient of limits, each of which is $\infty$ or $−\infty$. ### L’Hôpital’s Rule ($\infty / \infty$ Case) Suppose $f$ and $g$ are differentiable functions over an open interval containing $a$, except possibly at $a$. Suppose $\underset{x\to a}{\lim}f(x)=\infty$ (or $−\infty$) and $\underset{x\to a}{\lim}g(x)=\infty$ (or $−\infty$). Then, $\underset{x\to a}{\lim}\dfrac{f(x)}{g(x)}=\underset{x\to a}{\lim}\dfrac{f^{\prime}(x)}{g^{\prime}(x)}$, assuming the limit on the right exists or is $\infty$ or $−\infty$. This result also holds if the limit is infinite, if $a=\infty$ or $−\infty$, or the limit is one-sided. ### Example: Applying L’Hôpital’s Rule ($\infty /\infty$ Case) Evaluate each of the following limits by applying L’Hôpital’s rule. 1. $\underset{x\to \infty }{\lim}\dfrac{3x+5}{2x+1}$ 2. $\underset{x\to 0^+}{\lim}\dfrac{\ln x}{\cot x}$ ### Try It Evaluate $\underset{x\to \infty }{\lim}\dfrac{\ln x}{5x}$
# How do you solve by completing the square a^2+ 8a + 11 = 0? Apr 30, 2018 $a = 4 + \sqrt{5} , 4 - \sqrt{5}$ #### Explanation: ${a}^{2} + 8 a + 11 = {\left(a + 4\right)}^{2} - 16 + 11 = {\left(a + 4\right)}^{2} - 5$ Checking: $\left(a + 4\right) \left(a + 4\right) = {a}^{2} + 8 x + 16 - 5 = {a}^{2} + 8 x + 11$ Solve: ${\left(a - 4\right)}^{2} - 5 = 0$ ${\left(a - 4\right)}^{2} = 5$ $a - 4 = \pm \sqrt{5}$ $a = 4 \pm \sqrt{5}$
## Intermediate Algebra (12th Edition) Published by Pearson # Chapter 4 - Section 4.1 - Integer Exponents and Scientific Notation - 4.1 Exercises - Page 277: 71 #### Answer $\frac{1}{6^{10}}$ #### Work Step by Step According to the quotient rule for exponents, $\frac{a^{m}}{a^{n}}=a^{m-n}$ (where $a\ne0$). Therefore, $\frac{6^{-3}}{6^{7}}=6^{-3-7}=6^{-10}$. Furthermore, the definition of negative exponents tells us that $a^{-n}=\frac{1}{a^{n}}$. Therefore, $6^{-10}=\frac{1}{6^{10}}$. After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback.
• No products in the cart. # Beautiful Math with LaTeX Beautiful Math with LaTeX LaTeX is a powerful markup language for writing complex mathematical equations, formulas, and more. Jetpack combines the power of LaTeX and the simplicity of WordPress to give you the ultimate in math blogging platforms. To enable this module, visit Jetpack → Settings → Writing in your site’s dashboard. Scroll down to the Composing section and toggle on the Use the LaTeX markup language to write mathematical equations and formulas option. Using LaTeX To include \LaTeX code in your post, use the following: $your-latex-code-here$ So, for example, $i\hbar\frac{\partial}{\partial t}\left|\Psi(t)\right>=H\left|\Psi(t)\right>$ produces i\hbar\frac{\partial}{\partial t}\left|\Psi(t)\right>=H\left|\Psi(t)\right> LaTeX Error If your \LaTeX code is broken, instead of the equation you’ll see an ugly yellow and red error message. Sorry, we can’t provide support for \LaTeX syntax, but there are plenty of useful guides elsewhere online. Or a quick post in our forums might find you a solution. One thing to keep in mind is that WordPress puts all of your \LaTeX code inside a \LaTeX math environment. If you try to use \LaTeX that doesn’t work inside the math environment (such as \begin{align} … \end{align}), you will get an error: \LaTeX&s=X LaTeX Size You can change the size of the LaTeX by specifying an s parameter after the \LaTeX code. $\LaTeX&s=X$ Where X goes from -4 to 4 (0 is the default). These sizes correspond to \LaTeX‘s font size commands: s= font size Example -4 \tiny \LaTeX -3 \scriptsize \LaTeX -2 \footnotesize \LaTeX -1 \small \LaTeX 0 \normalsize (12pt) \LaTeX 1 \large \LaTeX 2 \Large \LaTeX 3 \LARGE \LaTeX 4 \huge \LaTeX LaTeX Colors WordPress tries to guess the background and foreground colors of your site and generates the \LaTeX image accordingly. But, you can change the colors. You can specify bg and fg parameters after the \LaTeX code to change the background and foreground colors, respectively. The colors must be in hexadecimal RGB format: ffffff for white, 0000ff for bright blue, etc. $\LaTeX$ \LaTeX LaTeX Packages WordPress.com uses standard \LaTeX with the following packages: amsmath amsfonts amssymb For more information about \LaTeX, you can visit LaTeX documentation site and TeX Resources by American Mathematical Society. Privacy Information This feature is activated by default. While there are no controls for it within the primary Jetpack settings area, it can be deactivated any time by following this guide. Data Used Site Owners / Users None. Site Visitors None. Activity Tracked Site Owners / Users None. Site Visitors None. Data Synced (Read More) Site Owners / Users We sync a single option that identifies whether or not the feature is activated. Site Visitors None. May 20, 2019 ### 0 responses on "Beautiful Math with LaTeX" inPhysic is an online education site which imparts knowledge and skills to million of users worldwide. 280 an dương vương, phường 4 quận 5,  Hồ Chí Minh 0976 905 317 [email protected] | [email protected] | [email protected] january, 2021 No Events
## Nomenclature & Symbols for Engineering, Mathematics, and Science Formula nomenclature is a system of names or terms represented by letters and the Greek alphabet assigned to represent equation physical quantities.  Definition symbols vary widely and do not necessarily represent the information being presented the way an abbreviation does.  These alphabetical lists contain symbols, greek symbols, definitions, US units, metric units, dimensionless numbers, constants, and constant values. ## Formula Nomenclature & Symbols A - B - C - D - E - F - G - H - I - J - K - L - M - N - O - P - Q - R - S - T - U - V - W - X - Y - Z ### Nomenclature & Symbols Display # Title Algebra Symbols Angle and Line Symbols ASCII Characters Basic Math Symbols Bracket Symbols
# A simple L-J potential for simulating atomic interaction (with bug) Hi, I am a student and recently started learning physics simulation with a strong interest. Herein, I would like to show my simple code simulating the atomic interaction with Lennard-Jones (L-J) potential. And would like to ask some questions about the physical model. Firstly, the L-J potential is a potential used to describe the interaction between two atoms and is widely used in molecular dynamics simulations. It has the following form and appearance: L-J势函数是一种用于描述两原子间相互作用的势,被广泛用于分子动力学模拟之中。他有着如下的形式: V(r) = 4\epsilon[(\frac{\sigma}{r})^{12} - (\frac{\sigma}{r})^{6}] in which \sigma measures the location of the point where the potential energy is zero and \epsilon measures the depth of the potential well. I obtained the force field by finding the gradient of this potential function and updated the velocity of each particle by the distance of different particles within a cutoff radius. Here I do not use any high-end operations, but simply a modification of the spring-mass system. \pmb{F}(\pmb{r})=24[2\frac{\sigma^{12}}{r^{13}}- \frac{\sigma^{6}}{r^{7}}]\hat{\pmb{r}} So, without other words, let me show this rough code: import taichi as ti ti.init() max_num_particles = 1024 particle_mass = 1 dt = 1e-3 substeps = 10 sigma = 0.05 sigma2 = sigma**2 sigma6 = sigma2**3 sigma12 = sigma6**2 potential_energy = 0.1 cutoff_r = 0.5 k = 0.001 x = ti.Vector.field(2, dtype=ti.f32, shape=max_num_particles) v = ti.Vector.field(2, dtype=ti.f32, shape=max_num_particles) f = ti.Vector.field(2, dtype=ti.f32, shape=max_num_particles) charge = ti.field(dtype=ti.f32, shape=max_num_particles) num_particles = ti.field(dtype=ti.i32, shape=()) @ti.kernel def add_ion(x_pos: ti.f32, y_pos: ti.f32, pos_neg: ti.f32): new_particle_id = num_particles[None] x[new_particle_id] = ti.Vector([x_pos, y_pos]) charge[new_particle_id] = pos_neg num_particles[None] += 1 @ti.kernel def substep(): n = num_particles[None] for i in range(n): f[i] = ti.Vector([0,0]) for j in range(n): if i != j: x_ij = x[i] - x[j] if x_ij.norm() <= cutoff_r: c_ij = charge[i] * charge[j] delta_x = x_ij.norm() x2 = delta_x**2 x6 = x2**3 x7 = x6 * delta_x x13 = x6**2 * delta_x d = x_ij.normalized() f[i] += (24 * potential_energy * (2 * sigma12/x13 - sigma6/x7)) * d # f[i] += -k * c_ij/ x_ij.norm()**2 * d # Boundary conditions for i in range(n): v[i] += dt * f[i] / particle_mass x[i] += dt * v[i] for d in ti.static(range(2)): if x[i][d] < 0: x[i][d] = 0 v[i][d] = 0 if x[i][d] > 1: x[i][d] = 1 v[i][d] = 0 def main(): gui = ti.GUI("Ionic Lattice Simulation", res=(512,512), background_color=0x262629) while True: for e in gui.get_events(ti.GUI.PRESS): if e.key in (ti.GUI.ESCAPE, ti.GUI.EXIT): exit() elif e.key == ti.GUI.LMB: elif e.key == ti.GUI.RMB: for step in range(substeps): substep() # draw ions X = x.to_numpy() for i in range(num_particles[None]): c = 0x3CC1FF if charge[i] < 0 else 0xFF623F gui.show() if __name__ == '__main__': main() There are actually two types of particles added to this code to model the attraction between positive and negative ions later on, but the Coulomb field is not implemented yet, just coloured differently. The result is as follow But I have a problem with this simulation. When there are a lot of particles, the simulation will suddenly get stuck and not move after a few dozen seconds, and I would like to ask what is happening. Also, I would like to ask if there is any simple way to measure the parameters of the L-J potential function and the duration of the simulation that is realistic in a physical sense? 1 Like I assume that you’re more comfortable with Chinese, if not, let me know. 1. 你的截断半径是sigma的10倍,一般如果使用带有cutoff的LJ势的时候截断半径会在sigma的2.5到4倍之间 2. 体系用了slip boundary condition,热力学上这会使体系的等效温度或能量不断降低,最终趋于接近固体的凝聚态,假如你的目标是分子模拟的话,多数情况下会用周期性边界条件。 ps. taichi真的太香了
# What are association rules as a type of knowledge? Define support and confidence, and use these... What are association rules as a type of knowledge? Define support and confidence, and use these definitions to define an association rule.
IQ Test, Mental Aptitude, Logical Reasoning & Mathematical Tests Test Report Question No 1 A person's present age is two-fifth of the age of his mother. After 8 years, he will be one-half of the age of his mother. What is the present age of the mother? Solution! Let the present age of the person = x Then present age of the mother = 5x/2 Given that , after 8 years, the person will be one-half of the age of his mother. (x + 8) = (1/2)(5x/2 + 8) 2x + 16 = 5x/2 + 8 x/2 = 8 x = 16 Present age of the mother = 5x/2 = 5 * 16/2 = 40. Question No 2 Pointing to a photograph Lata says, "He is the son of the only son of my grandfather." How is the man in the photograph related to Lata? Solution! The man in the photograph is the son of the only son of Lata's grandfather i.e., the man is the son of Lata's father. Hence, the man is the brother of Lata.. Question No 3 0 , 100 , 6 , 94 , 12 , 88 , 18 , 82 , ? , ? Solution! There are two sequences interwoven. Add 6 starting at 0 and deduct 6 starting at 100.. Question No 4 What is the sum of first 5 positive even integers? Solution! 2+4+6+8+10 = 30. Question No 5 If A + B means A is the mother of B; A - B means A is the brother B; A % B means A is the father of B and A x B means A is the sister of B, which of the following shows that P is the maternal uncle of Q? Solution! P - M = P is the brother of M M + N = M is the mother of N N x Q = N is the sister of Q Therefore, P is the maternal uncle of Q.. Question No 6 With an increase of 12% salary of Ali became Rs13440. What was his salary before increase? Solution! Let before increase Ali's salary was x. 12% of x = 12x/100 = 3x/25 Now 13440 - 3x/25 = x 336000 - 3x = 25x 336000 = 28x x = 12000 Hence Ali's salary before increase was Rs. 12000. Question No 8 TWAIN is to CLEMENS as ELIOT is to.. Solution! No answer description available for this question.. Question No 9 What is Antonym of RIBALD ? Solution! No answer description available for this question.. Question No 10 16 men dig a ditch in 10 days. How many men would be required to complete the job in half the time? Solution! Double number of men are required to complete the job in half time.. Question No 11 One year ago, the ratio of Sooraj's and Vimal's age was 6/7 respectively. Four years hence, this ratio would become 7/8. How old is Vimal? Solution! Let take the age of Sooraj and Vimal, 1 year ago as 6x and 7x respectively. Given that, four years hence, this ratio would become 7/8. (6x + 5)/(7x + 5) = 7/8 48x + 40 = 49x + 35 x = 5 Vimal's present age = 7x + 1 = 7*5 + 1 = 36. Question No 12 What is the angle between the hands of a clock at 9 o'clock? Solution! At 9 o'clock the minute hand points to 12 and hour hand points to 9. Angle between 9 and 12 at the clock is 90 degree.. Question No 13 0 , 1 , 4 , 9 , 16 , 25 , 36 , 49 , ? Solution! Add 1, 3, 5, 7, etc.. Question No 14 1 , 1 , 2 , ? , 24 , 120 , 720 Solution! Multiply by 1, 2, 3 and so on.... Question No 15 If A + B means B is the brother of A; A x B means B is the husband of A; A - B means A is the mother of B and A % B means A is the father of B, which of the following relations shows that Q is the grandmother of T? Solution! Q - P = Q is the mother of P P + R = R is the brother of P Hence, Q is the mother of R R % T = R is the father of T. Hence, Q is the grandmother of T.. Question No 16 On 8th Dec, 2007 Saturday falls.What day of the week was it on 8th Dec, 2006? Solution! The year 2006 is an ordinary year. So, it has 1 odd day. So, the day on 8th Dec, 2007 will be 1 day beyond the day on 8th Dec, 2006. But, 8th Dec, 2007 is Saturday. Hence, 8th Dec, 2006 is Friday.. Question No 17 Choose the word which is different from the rest. Solution! All except Quay are parts Of a ship.. Question No 18 A and B are children of D. Who is the father of A? To answer this question which of the statements (1) and (2) is necessary? (1) C is the brother of A and the son of E. (2) F is the mother B. Solution! A and B are children of D. From (1), C is the brother B and son of E. Since, the sex of D and E are not known. Hence (1) is not sufficient to answer the question. From (2). F is the mother of B. Hence, F is also the mother of A. Hence D is the father of A. Thus, (2) is sufficient to answer the question.. Question No 19 230 meters long train crosses a man sitting on the platform in 46 seconds. What is the speed of the train? Solution! Length of the train = L = 230m Time = t = 46s Speed = v = ? V = L/t By putting values v = 230/46 = 5m/s Hence speed of the train is 5m/s.. Question No 20 A and B are married couple. X and Y are brothers. X is the brother of A. How is Y related to B. Solution! X and Y both are brothers of A which means both are Brother-in-law of B.. Question No 21 100 , 96.75 , 93.5 , 90.25 , 87 , ? Solution! Deduct 3.25 each time.. Question No 22 Which one of the following is always in 'Sentiment'? Solution! No answer description available for this question.. A shopkeeper bought an article for $120 and sold at profit of 10%. What is selling price of the article? Select the correct answer Solution! 10% of 120 = (120*10)/100 = 12 Selling price = 120+12 = 132. Question No 24 Find the odd man out. Select the correct answer Solution! Remaining are names of British Kings.. Question No 25 Ten years ago a father was seven times as old as his son, two years hence, twice his age will be equal to five times his son's. What is the present age of son? Select the correct answer Solution! Let present age of son is x and present age of father is y. Given that y-10 = 7(x-10) y-10 = 7x-70 y-7x = -70+10 7x-y = 60 ...(1) and also given that 2(y+2) = 5(x+2) 2y+4 = 5x+10 2y-5x = 6 ...(2)By simultaneously soving equation(1) and equation(2) x = 14 Hence present age of son is 14 years.. Question No 26 Find the odd man out. Select the correct answer Solution! No answer description available for this question.. Question No 27 Optimist is to cheerful as pessimist is to.. Select the correct answer Solution! An optimist is a person whose outlook is cheerful. A pessimist is a person whose outlook is gloomy.. Question No 28 The average age of A and B is 20. If C were to replace A, the average would be 19, and if B was replaced by C, the average would be 21. The age of C is Select the correct answer Solution! Given that (A+B)/2 = 20 A+B = 40 ...(1) and (C+B)/2 = 19 C+B = 38 ...(2) and (A+C)/2 = 21 A+C = 42 ...(3) Put value of A from equation(1) in equation(3) (40-B)+C = 42 C-B = 2 ...(4) By simultaneously solving equation(2) and equation(4) C = 20 Hence C is 20 years old.. Question No 29 By how many degrees does the minute hand move in the same time, in which the hour hand move by 18 degree? Select the correct answer Solution! 18*2 *6 = 216 degree . Question No 30 Choose the word which is different from the rest. Select the correct answer Solution! Swan is the only water bird in the group.. Question No 31 Average of 4 numbers is 24. What is the sum of the four numbers? Select the correct answer Solution! x/4 = 24 x = 24*4 x = 96. Question No 32 Alfred buys an old scooter for Rs. 4700 and spends Rs. 800 on its repairs. If he sells the scooter for Rs. 5800, his gain percent is Select the correct answer Solution! Cost Price (C.P.) = (4700 + 800) = Rs. 5500 Selling Price (S.P.) = Rs. 5800 Gain = (S.P.) - (C.P.) = (5800 - 5500) = Rs. 300 Gain % = (300/5500) * 100 = 60/5 %. Question No 33 paw is to cat as hoof is to.. Select the correct answer Solution! As cat has Paw similarly Horse has Hoof.. Question No 34 What is 45% of 300? Select the correct answer Solution! 45 * (300/100) = 135 Quick way to find answer from options is 50% of 300 is 150 so 45% of 300 should be less than 150 So, it may be 149 or 135 149 is very close to 150 which is not possible Hence, 135 is correct answer.. Question No 35 Yard is to inch as quart is to.. Select the correct answer Solution! A yard is a larger measure than an inch (a yard contains 36 inches). A quart is a larger measure than an ounce (a quart contains 32 ounces).. Question No 36 In a fort there is enough food sufficient for 1200 men for 10 days. How long the food would have last for if there would have been 1000 men. Select the correct answer Solution! Let food will last for x days. As men and days are inversely proportional to each other so x/10 = 1200/1000 x = 12 Hence food will last for 12 days if there were 1000 men.. Question No 37 A 100 meters long train can cross a 150 meters long bridge in 20 seconds. What is the speed of the train? Select the correct answer Solution! Length of the train = L1 = 100m Length of the bridge = L2 = 150m Time = t = 20s Speed = v = ? v = (L1+L2)/t By putting values v = (100+150)/20 v = 250/20 = 12.5m/s Hence speed of train is 12.5m/s.. Question No 38 A train 800 metres long is running at a speed of 78 km/hr. If it crosses a tunnel in 1 minute, then the length of the tunnel (in meters) is Select the correct answer Solution! Speed = 78 * (5/18) = 65/3 m/s Time = 1 minute = 60 seconds Let the length of the tunnel be x metres Then, (800+x)/60 = 65/3 3(800 + x) = 3900 Hence, x = 500. Question No 39 A father said to his son, "I was as old as you are at the present at the time of your birth". If the father's age is 38 years now, the son's age five years back was? Select the correct answer Solution! Let the son's present age be x years. Then, (38 - x) = x 2x = 38 x = 19 Hence, Son's age 5 years back (19 - 5) = 14 years.. Question No 40 Which year has the same calendar as 1700 ? Select the correct answer Solution! Year 1700 has 1 odd day 1701 has 1 odd day 1702 has 1 odd day 1703 has 1 odd day 1704 has 2 odd day 1705 has 1 odd day Hence, 1706 will has same calender.. Question No 41 A train of length 150 metres is running at the speed of 20m/s. In what time will it cross a 130 metre long bridge? Select the correct answer Solution! Length of the train = L1 = 150m Length of bridge = L2 = 130m Speed = v = 20m/s t = (L1 + L2)/v t = (150+130)/20 t = 14 Hence the train will pass the 130 meter long bridge in 14 seconds.. Question No 42 Which number is the odd one out, 8, 27, 64, 100, 125, 216, 343 Select the correct answer Solution! All others are perfect cube of any number but 100 is not the perfect cube of any number.. Question No 43 A cycle costs$170.Jane saves $50 each week.How many weeks will it be before she can buy it? Select the correct answer Solution! 170 / 50 = 3.4 But 3.4 is not in options so nearest Option is 4. Question No 44 The ages of two persons differ by 16 years. 6 years ago, the elder one was 3 times as old as the younger one. What are their present ages of the elder person? Select the correct answer Solution! Let's take the present age of the elder person = x and the present age of the younger person = x - 16 (x - 6) = 3 (x-16-6) x - 6 = 3x - 66 2x = 60 x = 60/2 = 30. Question No 45 'A' is without sons and brothers, but the father of B is A's father's son. What is 'B' to 'A'? Select the correct answer Solution! A's father's son means A himself because A has no brother. So A is father of B. As A has no son, B must be her daughter.. Question No 46 Pride is to lion as school is to.. Select the correct answer Solution! A group of lions is called a pride. A group of fish swim in a school.. Question No 47 A man was accompanying a girl, on being asked who the girl was the man said, "Her father was the only son of my father". What was the girl to man? Select the correct answer Solution! If I am a boy and I am only son of my father it means I am talking about myself. Same is the case here man is father of the girl.. Question No 48 Find the odd man out. Select the correct answer Solution! No answer description available for this question.. Question No 49 The present age of A is 45 yeras. 5 years ago the age of A was 5 times the age of his son B. How old is B now? Select the correct answer Solution! Given that A = 45 and A-5 = 5(B-5) 40 = 5B-25 65 = 5B B = 13 Hence B is 13 years old now.. Question No 50 Cloth is to Mill as Newspaper is to? Select the correct answer Solution! As Cloth is made in a Mill, similarly Newspaper is printed in press.. Question No 51 By selling an article for$200 shopkeeper looses 20%. What is cost price of the article? Solution! Let cost price of the article is x. 20% of x = (x*20)/100 = x/5 Now (x/5) + 200 = x x+1000 = 5x 4x = 1000 x = 250 Hence cost price of the article is 250.. Question No 52 A train speeds past a pole in 15 seconds and a platform 100 m long in 25 seconds. Its length is Solution! Let the length of the train be x metres and its speed by y m/sec. Then, x/y = 15 y = x/15 So, (x+100)/25 = x/15 15(x + 100) = 25x 15x + 1500 = 25x 1500 = 10x x = 150 m. Question No 53 The length of the bridge, which a train 130 metres long and travelling at 45 km/hr can cross in 30 seconds, is Solution! Speed = 45 * (5/18) = 25/2 m/s Time = 30 sec Let the length of bridge be x metres Then, (130 + x)/30 = 25/2 2(130 + x) = 750 Hence, x = 245 m. Question No 54 Choose the word which is different from the rest. Solution! All except Oil are products obtained from milk.. Question No 55 Which number is the odd one out, 10, 25, 45, 54, 60, 75, 80 Solution! Each of the numbers except 54 is multiple of 5.. Question No 56 A train travelling at he speed of 72km/h can cross a platform in 17 seconds. If length of the train is 180 meters, what is the length of the platform? Solution! Speed of train = v = 72Km/h = 72*(5/18) = 20m/s Time = t = 17s Length of the train = L1 = 180m Length of the platform = L2 = ? t = (L1+L2)/v By putting values 17 = (180+L2)/20 340-180 = L2 L2 = 160m Hence length of the platform is 160m.. Question No 57 Reaching a party day before yesterday, I found myself two days late. If day after tomorrow is Friday. On what day was the party scheduled to be held? Solution! As day after tomorrow is Friday so tomorrow is Thursday, today is Wednesday, yesterday was Tuesday and day before yesterday was Monday. Today before the day before yesterday was Saturday. Hence party was scheduled on Saturday.. Question No 58 How many times do the hands of a clock coincide in a day? Solution! The hands of a clock coincide 11 times in every 12 hours (Since between 11 and 1, they coincide only once, i.e., at 12 o'clock). The hands overlap about every 65 minutes, not every 60 minutes. So, the hands coincide 22 times in a day.. Question No 59 Which of the following is the least number which is divisible by both 12 and 18?
• Evaluation of indicators for desertification risk assessment in part of Thar Desert Region of Rajasthan using geospatial techniques • # Fulltext https://www.ias.ac.in/article/fulltext/jess/127/08/0116 • # Keywords Desertification indicators; desertification risk assessment; environmental sensitive areas to desertification; stepwise multiple regression model. • # Abstract Desertification has emerged as a major economic, social and environmental problem in the western part of India. The best way of dealing with desertification is to take appropriate measures to arrest land degradation, especially in areas prone to desertification. This requires an early warning system for desertification based on scientific inputs. Hence, in the present study, an attempt has been made to develop a comprehensive model for the assessment of desertification risk in the Jodhpur district of Rajasthan, India, using 23 desertification indicators. Indicators including soil, climate, vegetation and socio-economic parameters were integrated into a GIS environment to get environmental sensitive areas (ESAs) to desertification. Desertification risk index (DRI) was calculated based on ESAs to desertification, the degree of land degradation and significant desertification indicators obtained from the stepwise multiple regression model. DRI was validated by using independent indicators such as soil organic matter content and cation exchange capacity. Multiple regression analysis shows that 16 indicators out of 23 were found to be significant for assessing desertification risk at a 99% confidence interval with $R^{2}$=0.83. The proposed methodology provides a series of effective indicators that would help to identify where desertification is a current or potential problem, and what could be the actions to alleviate the problem over time. • # Author Affiliations 1. Regional Remote Sensing Centre – West, NRSC/ISRO, Jodhpur, India. 2. National Remote Sensing Centre, ISRO, Hyderabad, India. • # Journal of Earth System Science Current Issue Volume 128 | Issue 5 July 2019
# Tag Info 6 As a hint, try to show that: $$\frac{1}{e^{E/T}+1} = \frac{1}{e^{E/T}-1} - \frac{2}{e^{2E/T}-1}.$$ 5 "Tastes" is the name for the additional fermions produced by fermion doubling when putting actions with fermions on a lattice. "Taste symmetry" is a symmetry exchanging these additional fermions with each other. These "tastes" are unphysical and purely an artifact of the lattice theory - they have no relation to flavor except ... 3 I think the confusion arises out of a misunderstanding of "complex conjugation" in the Grassmann formalism. Grassmann "variables" are simply elements of an exterior algebra. If $V$ is a vector space, and $\xi_1,...,\xi_n$ some vectors, then $\xi_1 \cdots \xi_n$ is simply a way of writing the exterior product $\xi_1 \wedge \cdots \wedge \... 2 So I guess I'm ultimately asking why the formula for the partition function of N non interacting indistinguishable particles isn't working for bosons and fermions. What am I missing? You're not missing anything.$N!$is a naive overcounting factor which is wrong for precisely the reasons you've worked out, both for bosons and for fermions. For fermions,$N!... 2 Experiments in high energy physics use statistical definitions in describing their data, and deciding how a fit of data to a hypothesis predicted can be compared with the theory. The usual standard deviation measure for all statistical quantities. This is usually done using monte carlo methods: generating a large number of artificial events that fit the ... 2 The masses come from, and are proportional to, the yukawa couplings to the Higgs field. So part of the question is, how to obtain yukawas that differ by so many orders of magnitude? I may have overlooked something, but all the examples I can think of, fall into one of two categories. Either the yukawa is an exponential function of something else - in which ... 1 If we swap particles $a$ and $b$ so that $a$ is in state $\psi_a ( \mathbf{r}_2)$ and $b$ is in state $\psi_b (\mathbf{r}_1)$ then the system of two particles is now in state $\psi_a (\mathbf{r}_2) \psi_b (\mathbf{r}_1)$ Since in general, $\psi_a$ and $\psi_b$ are different functions of $\mathbf{r}$, then $\psi_a (\mathbf{r}_2) \psi_b (\mathbf{r}_1) \ne \... 1 Allow me to use an analogy. Let me draw two imaginary chess boards. In the first, two pawns of the same color occupy opposite corners, whereas in the second one, it's one white and one black pawn. In the second board, the pawns clearly have an "identity". One is black, the other is white. You will not mix them at any time. If they switch positions, ... 1 The energy$E$is related to the moment$p$in the following way (with$c=1$) $$E=\sqrt{p^2+m^2}\rightarrow p=\sqrt{E^2-m^2}\tag{1}.$$$d^3p$is a infinitesimal volume in the$p$-space. Since$f(p)$depends only in the magnitude of$\vec{p}\$, we can use spherical coordinates, so $$d^3p=p^2\sin\theta \,dp\,d\theta\,d\phi$$ $$n=\frac{g}{2\pi^3}\int f(p)d^3p=\... 1 If we take, assuming isotropy on p:$$d^{3}p = 4\pi p^{2}dp, $$and taking c = 1 in the energy-momentum relation:$$E^{2} = p^{2} + m^{2}. $$We have:$$|\frac{dp}{dE}| = \frac{E}{\sqrt{E^{2} - m^{2}}}. $$Then:$$d^{3}p = 4\pi p^{2}dp = 4\pi \frac{(E^{2} - m^{2})}{\sqrt{E^{2} - m^{2}}}EdE.$$So, finally:$$d^{3}p = 4\pi E\sqrt{E^{2} - m^{2}}dE.$$1 1. Are these state eigenstates? How can i calculate their energy? Act Hamiltonian on the states \vert 00,00\rangle and \vert 11,11\rangle . The 4 number denotes occupation for site 1 spin-up, site 1 spin-down, site 2 spin-up, and site 2 spin-down:$$ H=\sum_{\sigma=\uparrow,\downarrow}[\epsilon_1 c_{1\sigma}^\dagger\,c_{1\sigma} + \epsilon_2 c_{2\... 1 You are not alone in this. I tend to regard the Grassmann integral as tool for combinatorics, but if you want a deeper view and a discussion of analytic subtleties you might like to read Martin R. Zirnbauer, Riemannian symmetric superspaces and their origin in random-matrix theory, J. Math. Phys.\ 37 (1996) 4986; arXiv:math-ph/9808012. 1 Term decay is rather rare in condensed matter physics, so I will assume that what you really mean is the finite lifetime. Condensed matter studies complex many-paryicle systems. For example, description of a crystal in terms of electron Bloch waves is valid only under assumption of a a rigid lattice and absence of Coulomb interaction. Once the interactions ... Only top voted, non community-wiki answers of a minimum length are eligible
Check the correctness of a gradient function by comparing it against a (forward) finite-difference approximation of the gradient. Parameters: func : callable func(x0, *args) Function whose derivative is to be checked. grad : callable grad(x0, *args) Gradient of func. x0 : ndarray Points to check grad against forward difference approximation of grad using func. args : *args, optional Extra arguments passed to func and grad. epsilon : float, optional Step size used for the finite difference approximation. It defaults to sqrt(numpy.finfo(float).eps), which is approximately 1.49e-08. err : float The square root of the sum of squares (i.e. the 2-norm) of the difference between grad(x0, *args) and the finite difference approximation of grad using func at the points x0. approx_fprime Examples >>> def func(x): ... return x[0]**2 - 0.5 * x[1]**3
Find all School-related info fast with the new School-Specific MBA Forum It is currently 04 May 2015, 15:26 ### GMAT Club Daily Prep #### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email. Customized for You we will pick new questions that match your level based on your Timer History Track every week, we’ll send you an estimated GMAT score based on your performance Practice Pays we will pick new questions that match your level based on your Timer History # Events & Promotions ###### Events & Promotions in June Open Detailed Calendar # If r and s are integers and rs + r is odd, which of the following must Author Message TAGS: Intern Joined: 06 Apr 2007 Posts: 9 Followers: 0 Kudos [?]: 1 [0], given: 0 If r and s are integers and rs + r is odd, which of the following must [#permalink]  01 Jun 2007, 23:57 1 This post was BOOKMARKED 00:00 Difficulty: 35% (medium) Question Stats: 69% (01:56) correct 31% (01:22) wrong based on 95 sessions If r and s are integers and rs + r is odd, which of the following must be even ? A. R B. s C. r + s D. rs - r E. r^2 + s [Reveal] Spoiler: OA Last edited by Bunuel on 04 Oct 2014, 14:08, edited 1 time in total. Renamed the topic, edited the question and added the OA. VP Joined: 08 Jun 2005 Posts: 1147 Followers: 6 Kudos [?]: 128 [0], given: 0 Re: If r and s are integers and rs + r is odd, which of the following must [#permalink]  02 Jun 2007, 11:55 Himalayan is right ! according to OG , zero is even R*S+R = e=even o=odd we have two options: (1) e*o+e = e (2) o*e+o = o since the stem states that R*S+R =odd then we will choose (2) and S must be even. CEO Joined: 17 May 2007 Posts: 2994 Followers: 59 Kudos [?]: 467 [1] , given: 210 Re: If r and s are integers and rs + r is odd, which of the following must [#permalink]  02 Jun 2007, 15:43 1 KUDOS RS+R = R(S+1) --> which is ODD Now this implies that R is odd AND S+1 is odd. which means S is even Ans B. Intern Status: Open Joined: 30 Aug 2014 Posts: 2 Location: India Minakshi: Gurani Concentration: Marketing, General Management GPA: 4 WE: Brand Management (Other) Followers: 0 Kudos [?]: 0 [0], given: 0 Re: If r and s are integers and rs + r is odd, which of the following must [#permalink]  04 Oct 2014, 13:02 bsd_lover wrote: RS+R = R(S+1) --> which is ODD Now this implies that R is odd AND S+1 is odd. which means S is even Ans B. I am not sure if i got this one .. i mean if r is odd what is the possibility of s being odd or even ?? Math Expert Joined: 02 Sep 2009 Posts: 27215 Followers: 4228 Kudos [?]: 41011 [0], given: 5654 Re: If r and s are integers and rs + r is odd, which of the following must [#permalink]  04 Oct 2014, 14:18 Expert's post 1 This post was BOOKMARKED minakshigurani wrote: bsd_lover wrote: RS+R = R(S+1) --> which is ODD Now this implies that R is odd AND S+1 is odd. which means S is even Ans B. I am not sure if i got this one .. i mean if r is odd what is the possibility of s being odd or even ?? If r and s are integers and rs + r is odd, which of the following must be even ? A. r B. s C. r + s D. rs - r E. r^2 + s Given that rs + r is odd --> $$rs + r =r*(s+1)odd$$. For the product of two integers, r and s+1, to be odd both must be odd. Theretofore, r and s+1 are odd, which means that r is odd and s is even. _________________ Re: If r and s are integers and rs + r is odd, which of the following must   [#permalink] 04 Oct 2014, 14:18 Similar topics Replies Last post Similar Topics: 2 If r and s are positive integers and r-s=6, which of the following has 5 11 Mar 2015, 21:57 If n is an odd integer, which of the following must also be 3 18 Nov 2013, 11:01 2 If x is even integer, which of the following must be an odd 7 25 Jul 2012, 06:57 1 If r and s are positive integers, is r/s an integer? (1) 3 12 May 2007, 14:52 If r and s are positive integers, is r/s an integer? 1) 10 10 Aug 2005, 15:26 Display posts from previous: Sort by
# zbMATH — the first resource for mathematics Weyl type theorems for bounded linear operators. (English) Zbl 1050.47014 The aim of the paper under review is to show that, from the point of view of Weyl type theorems, the notion of B-Weyl spectrum (see M. Berkani [Integral Equations Oper. Theory 34, 244-249 (1999; Zbl 0939.47010)]) generalizes the notion of Weyl spectrum, as in the case of normal operators on Hilbert spaces. To explain this, we need some notation. Let $$T$$ be a bounded operator on a Banach space. One defines the following sets: the Weyl spectrum $$\sigma_W(T)$$ is the set of $$\lambda\in {\mathbb C}$$ such that $$T-\lambda I$$ is not a Fredholm operator of index $$0$$, the B-Weyl spectrum $$\sigma_{BW}(T)$$ is the set of $$\lambda\in {\mathbb C}$$ such that $$T-\lambda I$$ is not a B-Weyl operator of index $$0$$, $$\sigma_{{SF_{+}^{-}}}(T)$$ is the set of $$\lambda\in {\mathbb C}$$ such that $$T-\lambda I$$ is not an upper semi-Fredholm operator of negative index, and $$\sigma_{{SBF_{+}^{-}}}(T)$$ is the set of $$\lambda\in {\mathbb C}$$ such that $$T-\lambda I$$ is not an upper semi-B-Fredholm operator of negative index. We also need the following important sets of eigenvalues: $$E_0(T)$$ is the set of all eigenvalues of $$T$$ of finite multiplicity isolated in the spectrum of $$T$$, $$E(T)$$ is the set of all eigenvalues of $$T$$ isolated in the spectrum of $$T$$, and similarly, $$E_0^a(T)$$ is the set of all eigenvalues of $$T$$ of finite multiplicity isolated in the approximate point spectrum of $$T$$, and $$E_0^a(T)$$ is the set of all eigenvalues of $$T$$ isolated in the approximate point spectrum of $$T$$. Then one says that $$T$$ satisfies the generalized $$a$$-Weyl’s theorem, the $$a$$-Weyl’s theorem, the generalized Weyl’s theorem, or Weyl’s theorem, if $$\sigma_{{SBF_{+}^{-}}}(T)=\sigma^a (T)\setminus E^a(T)$$, $$\sigma_{{SF_{+}^{-}}}(T)=\sigma^a (T)\setminus E^a_0(T)$$, $$\sigma_{BW}(T)=\sigma (T)\setminus E(T)$$, or $$\sigma_{W}(T)=\sigma (T)\setminus E_0(T)$$, respectively. The author proves that if $$T$$ satisfies the generalized $$a$$-Weyl’s theorem, then $$T$$ satisfies the $$a$$-Weyl’s theorem and the generalized Weyl’s theorem. Also, $$T$$ satisfies Weyl’s theorem provided that $$T$$ obeys the generalized Weyl’s theorem. Similar results are proved with respect to Browder’s theorem. ##### MSC: 47A53 (Semi-) Fredholm operators; index theories 47A10 Spectrum, resolvent 47A55 Perturbation theory of linear operators 47A25 Spectral sets of linear operators
Version ID# 974 by 198.51.100.18 Press the "Improve" button to call for a new round of election and submit a challenging revision. #### Summary Thermal radiation is electromagnetic radiation generated by the thermal motion of charged particles in matter. All matter with a temperature greater than absolute zero emits thermal radiation. When the temperature of the body is greater than absolute zero, interatomic collisions cause the kinetic energy of the atoms or molecules to change. This results in charge-acceleration and/or dipole oscillation which produces electromagnetic radiation, and the wide spectrum of radiation reflects the wide spectrum of energies and accelerations that occur even at a single temperature. #### Details Thermal radiation power of a black body per unit area of radiating surface per unit of solid angle and per unit frequency \nu is given by Planck's law as: u(\nu,T)=\fracc^2}\cdot\frac1lt;/math> or in terms of wavelength u(\lambda,T)=\frac\lambda^5}\cdot\frac1lt;/math> where \beta is a constant. This formula mathematically follows from calculation of spectral distribution of energy in quantized electromagnetic field which is in complete thermal equilibrium with the radiating object. The equation is derived as an infinite sum over all possible frequencies. The energy, E=h \nu, of each photon is multiplied by the number of states available at that frequency, and the probability that each of those states will be occupied. Integrating the above equation over \nu the power output given by the Stefan–Boltzmann law is obtained, as: P = \sigma \cdot A \cdot T^4 where the constant of proportionality \sigma is the Stefan–Boltzmann constant and A is the radiating surface area. Further, the wavelength \lambda \,, for which the emission intensity is highest, is given by Wien's displacement law as: \lambda_= \fracT} For surfaces which are not black bodies, one has to consider the (generally frequency dependent) emissivity factor \epsilon(\nu). This factor has to be multiplied with the radiation spectrum formula before integration. If it is taken as a constant, the resulting formula for the power output can be written in a way that contains \epsilon as a factor: P = \epsilon \cdot \sigma \cdot A \cdot T^4 This type of theoretical model, with frequency-independent emissivity lower than that of a perfect black body, is often known as a grey body. For frequency-dependent emissivity, the solution for the integrated power depends on the functional form of the dependence, though in general there is no simple expression for it. Practically speaking, if the emissivity of the body is roughly constant around the peak emission wavelength, the gray body model tends to work fairly well since the weight of the curve around the peak emission tends to dominate the integral.
# Write the area A of square as a function of its perimeter P. The question aims to represent the area of a square in terms of its perimeter P. The area of a square is defined as the measure of the space it covered. The area of the square is found by its sides, because all the sides of a square are equal to the area of the square. Square meters, square feet, square inches, and square inches are typical units for measuring square area. The perimeter of the square is basically the total length around its boundary. The perimeter of the square is represented by P. The term perimeter of a square is calculated by the summation of all of its sides. Inches, yards, millimeters, centimeters, and meters are typical units for measuring perimeter. The length of the side of the square is given as $a$. All the sides of the square are equal. The formula of the area of the square is given by the square of its sides: $A=a^2$ The perimeter $P$ is given by the sum of all the sides of the square: $P=a+a+a+a=4a$ Step 1: Solve $a$ for the formula of the perimeter. Take the value of the side from the perimeter formula and plug it into the formula of the area of the square. $P=4a$ $a=\dfrac{P}{4}$ Step 2: Substitute $a$ from step 1 from the formula of the perimeter to the formula of the area. $A=a^2$ $a=\dfrac{P}{4}$ $A=(\dfrac{P}{4})^2$ $A=\dfrac{P^2}{4^2}$ $A=\dfrac{P^2}{16}$ The formula of the area of the square in form of its perimeter is represented by: $A=\dfrac{P^2}{16}$ ## Numerical Result The formula of the area of the square in form of its perimeter is represented by: $A=\dfrac{P^2}{16}$ ## Example Find the area of the square if the perimeter is $4cm$. Solution: The formula for the area of the square is shown as: $A=a^2$ where $a$ represents the side of the square. The formula for the perimeter of the square is shown as: $P=4a$ First, write the area of the square in terms of its perimeter and then plug the value of the perimeter. Step 1: Solve $a$ for the formula of the perimeter. $P=4a$ $a=\dfrac{P}{4}$ Step 2: Substitute $a$ from step 1 from the formula of the perimeter to the formula of the area. $A=a^2$ $a=\dfrac{P}{4}$ $A=(\dfrac{P}{4})^2$ $A=\dfrac{P^2}{4^2}$ $A=\dfrac{P^2}{16}$ The expression for the area of the square in terms of its perimeter is represented by: $A=\dfrac{P^2}{16}$ Now plug the value of the perimeter into the formula: $A=\dfrac{4^2}{16}$ $A=1cm^2$ The result of the area of the square is $1cm^2$ when the perimeter of the square is $4cm$.
# pm4py.evaluation.generalization package ## pm4py.evaluation.generalization.evaluator module PM4Py is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. PM4Py is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with PM4Py. If not, see <https://www.gnu.org/licenses/>. class pm4py.evaluation.generalization.evaluator.Variants(value)[source] Bases: enum.Enum An enumeration. GENERALIZATION_TOKEN = <module 'pm4py.evaluation.generalization.variants.token_based' from 'C:\\Users\\berti\\pm4py-core\\pm4py\\evaluation\\generalization\\variants\\token_based.py'> pm4py.evaluation.generalization.evaluator.apply(log, petri_net, initial_marking, final_marking, parameters=None, variant=Variants.GENERALIZATION_TOKEN)[source] Deprecated since version 2.2.5: This will be removed in 3.0. Use the pm4py.algo.evaluation.generalization package ## pm4py.evaluation.generalization.parameters module PM4Py is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. PM4Py is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with PM4Py. If not, see <https://www.gnu.org/licenses/>. class pm4py.evaluation.generalization.parameters.Parameters(value)[source] Bases: enum.Enum An enumeration. ACTIVITY_KEY = 'pm4py:param:activity_key'
OBJECTIVE— The objective of this study was to estimate the cost of productivity losses in the U.S. attributable to diabetes, with regard to specific demographic and disease-related characteristics in the U.S. RESEARCH DESIGN AND METHODS— We used the 1989 National Health Interview Survey, a random survey of individuals in the U.S. that included a diabetes supplement. Data on individuals were obtained for labor force participation, hours of work, demographic and occupational characteristics,self-reported health status, and several variables that indicated the presence, duration, and severity (complications) of diabetes. Using multivariate regression analyses, we estimated the association of independent variables (e.g., demographics, health, and diabetes status) with labor force participation, hours of work lost, and the economic value of lost work attributable to diabetes and its complications and duration. RESULTS— In general, the presence of diabetes and complications were found to be related to workforce participation variables. The magnitude of the lost-productivity costs depended on personal characteristics and on the presence and status of diabetes. In general, the loss of yearly earnings amounted to about a one-third reduction in earnings and ranged from $3,700 to$8,700 per annum. CONCLUSIONS— Diabetes has a considerable net effect on earnings, and the complications and duration of diabetes have compound effects. Our findings have implications for the cost-effectiveness of diabetes control; the presence of complicating factors is the single most important predictive factor in lost productivity costs attributable to diabetes, and thus the avoidance or retardation of complications will have an impact on indirect health-related costs. The loss of productivity caused by illness has been a prominent topic in general health policy for several decades(1,2,3). Nationwide estimates have also been conducted for people with diabetes(4,5). The American Diabetes Association estimates that diabetes accounted for $27 billion in direct medical costs and$32 billion in indirect or lost-productivity costs in 1997(4). In recent years, there has been a growing recognition that, for many reasons, the costs of diabetes should be expressed as an “excess cost” figure; this excess cost statistic has been estimated in the U.S. for people with diabetes for direct medical expenditures(6,7,8),and it has been estimated for both direct and lost-productivity costs in Sweden (9). To date, in these studies, diabetes-related lost-productivity costs have been expressed for entire groups and have been categorized for only a few selected population characteristics, such as age. We know very little about the determinants of productivity losses. It is very important to gain an understanding of what factors affect them, as well as of their overall magnitude. We have conducted an analysis of lost productivity attributable to the prevalence and severity of diabetes in the U.S. Using data from the National Health Interview Survey (NHIS) and its diabetes supplement, we developed estimates of how the onset and progression of diabetes influences the workplace behavior of individuals. We focused on two components: participation in the labor force and actual hours of work. Data were used from the 1989 NHIS, which included a diabetes supplement, published by the U.S. Department of Health and Human Services National Center for Health Statistics. The NHIS is a personal-interview household survey of a nationwide sample of the civilian noninstitutionalized population of the U.S. It contains questions on personal and demographic characteristics, illnesses, injuries, impairment,chronic conditions, and use of health resources. The diabetes supplement includes extensive survey questions on the prevalence of diabetes and specific diabetes complications. Subjects of this study were individuals aged between 18 and 65 years, both with and without diabetes, and included those who were and were not in the labor force. ### Employment status To address the effect of diabetes on the employment status of an individual, we adopted the standard probit estimation in analyzing an individual's probability of being in the labor force, applied to the entire working-age population. The dependent variable takes the value of zero or one,with the latter meaning “in the labor force.” According to the standard theory of labor supply(10,11,12,13,14,15,16,17),the decision to be in the labor force is determined by sex, age, race, marital status, educational level, regional factors, family size, and the health status of the individual. Regional factors were approximated by residence in an urban area and the region of residence. A self-reported health status measure was also included. Finally, a dummy variable indicating whether an individual has diabetes was included. Given that diabetes is a lifelong disease and that one's health deteriorates as he/she ages, the severity of the disease has an impact on the likelihood of one being in the labor force. To address this issue, the above probit equation was modified and reestimated for the diabetic group only. This was done by replacing the variable that indicates the presence or absence of diabetes with one that indicates whether the individual had any complications of diabetes, defined as the reporting of any of the following conditions:affected retina, high blood pressure or hypertension, angina, stroke, heart disease, cataracts, kidney disease, foot/ankle sores (peripheral vascular disease), retinopathy, glaucoma, proteinuria, gum disease, autonomic neuropathy (bladder control), amputations, and peripheral neuropathy. The literature has generally regarded the impact of type 1 versus type 2 diabetes on indirect costs as being different. This may be true because of the clinical and demographic differences between the two conditions (e.g., earlier age of onset, longer duration of diabetes, required use of insulin, and risk of hypoglycemia). Accordingly, a dummy variable (TYPE1) indicating whether an individual has type 1 diabetes or not was included in the regression analysis to test the hypothesis that productivity losses differ between type 1 and type 2 diabetes. Type 1 diabetes was indicated by age at onset <30 years, use of insulin, and body weight(18). ### Loss of productivity The second target variable was work-loss days for diabetic and nondiabetic groups who were employed. The loss of productivity was hypothesized to be affected by sex, age, race, marital status, educational level, a self-reported health status measure and, finally, the occupation of the individual(19,20,21,22,23). According to the information provided by the NHIS, occupations were grouped into 13 categories. To examine the influence of the occupational effect on the productivity loss, the 13 categories were further grouped into 5 broad categories. Farming, forestry, and fishing occupations were used as the reference group. Because the work-loss days for the previous 2 weeks (the dependent variable) is left censored with a value of zero, we used a tobit regression technique(24,25,26). To test for the severity of diabetes disease affecting one's productivity in the workplace, “the presence of diabetes” in the above-specified tobit equation was replaced by “the presence of diabetes with complications.” Similar to the employment status issue, the differentiation between types 1 and 2 diabetes was taken into account by having the dummy variable TYPE1 included in the regression. To derive the productivity loss associated with diabetes, we first derive the daily earnings for full-time full-year workers, disaggregated by race,sex, and age-group. Based on the Statistical Abstract of the U.S.(27), average yearly earnings for white males, white females, black males, and black females by three age-groups (age <25, 25-54, and ≥55 years) can be used to calculate the daily earnings with the assumption that full-time full-year workers are employed 5 days per week, 52 weeks per year. The estimated number of work-loss days for the past 2 weeks (i.e., the estimated coefficient) is then multiplied by the derived daily earnings. This amount of productivity loss is then projected back to yearly estimates. All prices were expressed in 1989 dollars. ### Sample characteristics In total, there were 84,572 individuals in the NHIS, 71,325 of which were between the ages of 18 and 65 years. There were 2,405 people (2.8% of the total sample) with diabetes, and of these there were 1,401 (2%) in the relevant age range. For the purpose of regression analysis, the value of variables can be neither missing nor in an irrelevant range. The construction of the working-age sample therefore has to exclude subjects with missing values on key variables. With these deletions, a total sample of 68,634 individuals remained; 1,351 (2%) were people with diabetes, and within the diabetes group, 715 (53%) of these were in the labor force. We used Student's t test on the sample statistics for the selected 1,351 subjects and those statistics from the original sample with subjects containing missing values for key explanatory variables. Student's t test showed that we could not reject the null hypothesis that there was statistically no difference between samples. Therefore, the exclusion of subjects with missing information on certain variables would not introduce a major bias in the results. In Table 1 we present the relevant characteristics for the full sample (working and nonworking) and the working sample. For the full sample, people with diabetes were older, more likely to be black, and less likely to be single. As expected, only 53% of people with diabetes were in the labor force, compared with 77% of those people without diabetes. With respect to self-reported quality-of-life by group, ∼20% of the individuals with diabetes were in poor health, whereas∼2% of the nondiabetic group ranked themselves as being in poor health. For the working sample, except for sex, age, and health status, the demographic characteristics by group was very similar to those of the full sample. We found that <19% of people with diabetes were type 1, whereas>80% of them had diabetes complications. Table 1 ### Employment status In Table 2 we provide the estimates of the probability of working. In column 1 we show the results of the regression equation for the full sample when using the dummy variable that indicated the presence or absence of diabetes. Employment was associated with sex (male subjects had 21% higher employment rates [95% CI 21-22%]), education(better-educated subjects had 3-4% higher rates [2-5%]), and race (nonwhites had 2-6% lower rates [2-7%]). Age, marital status (single), and family size were associated with lower employment rates. Individuals with poor health were 45% (43-46%) less likely to be in the labor force and individuals with fair health were 20% (19-21%) less likely, compared with individuals with excellent health. Individuals with diabetes were 4% (2-4%) less likely to be in the workforce, whereas the employment probability differs between types 1 and 2 diabetes. Table 2 In column 2 of Table 2, we present the estimates for the inclusion of the dichotomous variable indicating whether people with diabetes had any complications, using only the sample for people with diabetes. Unlike in the full sample, only sex, age, and educational effects are found to be important factors determining the workforce decision. People with diabetes who had complications were ∼12%(5-19%) less likely to be in the workforce than those who did not have complications. Again, the dependency on insulin did not enhance the working probability of people with diabetes. ### Lost work days In Table 3 we present the results of regression equations indicating the effect on work-loss days of variables including having diabetes during the prior 2 weeks and the severity of diabetes (results are in columns 1 and 2, respectively). In column 1, we present estimates for the sample of working individuals. We found that being male and older reduced the number of work-loss days. For the self-reported health status measure, there was a progressive increase in work-loss days as health status fell. No statistically significant effect was found for self-reported diabetes. In column 2 we address the loss of productivity for people in the diabetes group. Using this regression equation, the effect of having diabetes complications (compared with not having them) increased the number of work-loss days by 3.2 days(1,2,3,4,5,6)within a 2-week period. In either specification, the effect of having type 1 diabetes is found to be significant. Table 3 Because the presence of diabetes was not statistically significant in the full sample for the tobit regression (Table 3), in Table 4 we present lost-productivity cost estimates in relation to the diabetes group with complications. Our reference value for yearly earnings are shown in column 1. For example, a fully employed white male subject aged <25 years earned $14,339 annually in 1989. For white men with diabetes with complications who were aged between 25 and 54 years, the yearly earnings loss was be$8,616 (column 2). For each demographic group (by race and sex), such losses increase with age and peak at the prime age-group (age 25-54 years). Among the groups, nonwhite female subjects generally suffered the least compared with either their white female or male (white and nonwhite)counterparts. Table 4 Diabetes has a considerable impact on economic behavior in the labor force. Controlling for variables such as age, sex, and health status, the presence of diabetes itself reduced employment by 3.5%, and the presence of complications reduced employment by 12% compared with the absence of complications. For those individuals who were employed, having diabetes did not have a significant overall effect on hours worked; however,those who had complicated diabetes worked 3.2 days less every 2 weeks than those whose diabetes were without complications. The type of diabetes had no impact. Kahn (17) used both 1989 NHIS and the 1992 Health and Retirement Survey (HRS) to study labor market outcomes for people with diabetes. Kahn's results relating to employment indicated that people without diabetes had participation rates ∼12% above those with diabetes, but he only examined the 50- to 60-year age-group. In addition, Kahn's results indicated that people with 5 years' duration of diabetes had 3% lower employment rates. In his study, Kahn focused on the issue of changes in labor markets over time, rather than on the productivity losses associated with diabetes. He did not include the complications variable, which has important implications for economic evaluations. He also did not include an analysis of days of work lost. In his analysis of the HRS sample, he found that the earnings of men with diabetes was 69% of those without diabetes; there was no difference between female groups. However, the HRS analysis was confined to people between the ages of 51 and 62 years(roughly one-half of our sample of people with diabetes). It should be noted that the analysis could not distinguish between long-and short-term disability and productivity losses associated with diabetes. This distinction would be important in identifying lost-work time attributable to short-term complications of diabetes (e.g., hyper- or hypoglycemia)—which would likely be more prevalent in type 1 diabetes—from lost productivity attributable to longer-term complications, which could occur in both conditions. Given the limitations of the self-reported data from population surveys, we were not able to incorporate productivity losses caused by premature mortality associated with diabetes. We should point out that the data that we have obtained were self-reported,and such data are often subject to errors related to recall. However, the variables that we examined are less likely to be subject to recall because of the short period of data collection. There could be errors related to the misreporting of diabetes because individuals might not have known they had diabetes or complications; however, given the seriousness of the condition,this should not be a significant problem. Further, given the nature of the data from this cross-sectional survey, the observed relationships must be viewed as associations and not necessarily as causal relationships. In terms of model setup, one may argue that diabetes complications may be endogenous instead of exogenous. Facing the constraint of data availability and the theme of the study, the endogeneity issue is beyond the scope of the present analysis. Costs that can be tied to specific interventions, such as diabetes control,can yield very valuable information. Several studies, in the U.S.(28,29)and the U.K. (30) have shown that aggressive interventions for diabetes can retard the development of complications. Our results indicate that the net productivity costs of preventing complications once an individual has diabetes can be very significant, amounting to >\$3,700-8,700 per person per year, depending on the demographic group. These costs are of the same order of magnitude as annual medical costs due to diabetes(8). Abbreviations: HRS, Health and Retirement Survey; NHIS, National Health Interview Survey. A table elsewhere in this issue shows conventional and Système International (SI) units and conversion factors for many substances. J.A.J. is a Population Health Investigator funded by the Alberta Heritage Foundation for Medical Research (AHFMR). An earlier version of this article was presented and discussed at a meeting of the Alliance for Canadian Health Outcomes Research in Diabetes (ACHORD)Investigators. The authors would like to acknowledge the input of these colleagues. 1. Mushkin SJ, Collings F: Economic cost of disease and injury. Public Health Rep 14 : 795 -809, 1959 2. Weisbrod BA: Economics of Public Health . Philadelphia, University of Pennsylvania Press, 1961 3. Rice DP, Hodgson TA, Kopstein AN: The economic costs of illness:replication and update. Health Care Financing Review 7 : 61 -80, 1985 4. American Diabetes Association: Economic consequences of diabetes mellitus in the U.S. in 1997. Diabetes Care 21 : 296 -309, 1998 5. Songer TJ: Studies on the Cost of Diabetes . Atlanta, GA, Centers for Disease Control and Prevention, 1998 6. Selby JV, Ray GT, Zhang D, Colby CJ: Excess costs of medical care for patients with diabetes in a managed care population. Diabetes Care 20 : 1396 -1402, 1997 7. Huse DM, Oster G, Killen AR, Lacey MJ, Colditz GA: The economic costs of noninsulin-dependent diabetes mellitus. JAMA 262 : 2708 -2713, 1989 8. Rubin RJ, Altman WM, Mendelson DN: Health care expenditures for people with diabetes mellitus. J Clin Endocrinol Metab 78 : 809A -809F, 1994 9. Olsson J, Persson U, Tollin C, Nilsson S, Melander A: Comparison of excess costs of care and production losses because of morbidity in diabetic patients. Diabetes Care 17 : 1257 -1263, 1994 10. Bartel A, Taubman P: Health and labor market success: the role of various diseases. Rev Econ Stat 61 : 1 -8, 1979 11. Lambrinos J: Health: a source of bias in labor supply models. Rev Econ Stat 63 : 206 -212, 1981 12. Anderson KH, Burkhauser RV: The importance of the measure of health in empirical estimates of the labor supply of older men. Econ Lett 16 : 375 -380, 1984 13. Haveman R, Wolfe B, Kreider B, Stone M: Market work, wages and men's health. J Health Econ 13 : 163 -182, 1994 14. Bound J, Schoenbaum M, Waldmann T: Race and education differences in disability status and labor force attachment in the health and retirement survey. J Human Res 30 (Suppl.): S227 -S267, 1995 15. Costa DL: Health and labor force participation of older men,1990-1991. J Econ History 56 : 62 -89, 1996 16. Wang W: Semi-parametric estimation of the effect of health on labor force participation of married women. Applied Economics 29 : 325 -329, 1997 17. Kahn ME: Health and labor market performance: the case of diabetes. J Labor Econ 16 : 878 -899, 1998 18. Fertig BJ, Simmons DA, Martin DB: Therapy for diabetes. In National Diabetes Data Group: Diabetes in America . 2nd ed. Bethesda, MD, National Institutes of Health, National Institute of Diabetes and Digestive and Kidney Diseases, 1995 (NIH publ. no. 95-1468) 19. Becker GS: A theory of the allocation of time. Econ J 75 : 493 -517, 1965 20. Ashenfelter O, Heckman J: Estimating labor supply functions. In Income Maintenance and Labor Supply . Cain G, Watts H,Eds. Chicago, Markham, 1973 , p. 265 -278 21. Gronau R: Leisure, home production and work: the theory of the allocation of time revisited. J Political Economy 85 : 1090 -1123, 1977 22. Killingsworth MR, Heckman JJ: Female labor supply: a survey. In Handbook of Labor Economics . Vol. 1 . Ashenfelter O, Layard R, Eds. Amsterdam,North-Holland, 1986 , p. 103 -204 23. Heckman JJ: What has been learned about labor supply in the past twenty years? Am Econ Rev 83 : 116 -121, 1993 24. Maddla GS: Limited Dependent and Qualitative Variables in Econometrics . New York, Cambridge University Press, 1983 25. Amemiya T: Censored or truncated regression models, symposium. J Econometrics 14 : 1 -222, 1984 26. Greene W: Econometric Analysis . 2nd Ed. New York, Macmillan, 1993 27. U.S. Bureau of the Census: Statistical Abstract of the United States: 1991 . Washington, DC, U.S. Govt. Printing Office, 1991 28. Diabetes Control and Complications Trial Research Group: Lifetime benefits and costs of intensive therapy as practiced in the Diabetes Control and Complications Trial. JAMA 276 : 1409 -1415, 1996 29. Eastman RC, Javitt JC, Herman WH, Dasbach EJ, Copley-Merriman C,Maier W, Dong F, Manninen D, Zbrozek AS, Kotsanos J, Garfield SA, Harris M:Model of complications of NIDDM. II. Analysis of the health benefits and cost-effectiveness of treating NIDDM with the goal of normoglycemia. Diabetes Care 20 : 735 -744, 1997 30. U.K. Prospective Diabetes Study Group: Effect of intensive blood-glucose control with metformin on complications in over-weight patients with type 2 diabetes. Lancet 352 : 854 -865, 1998
# How do you differentiate (1 / (4-3t)) + ( (3t )/ ((4-3t)^2))? Jun 12, 2015 Rewrite the expression first. #### Explanation: $\frac{1}{4 - 3 t} + \frac{3 t}{{\left(4 - 3 t\right)}^{2}} = \frac{4 - 3 t}{4 - 3 t} ^ 2 + \frac{3 t}{{\left(4 - 3 t\right)}^{2}} = \frac{4}{4 - 3 t} ^ 2$ We can use the quotient rule at this point. The derivative is: $\frac{\left(0\right) {\left(4 - 3 t\right)}^{2} - \left(4\right) \left[2 \left(4 - 3 t\right) \cdot \left(- 3\right)\right]}{{\left(4 - 3 t\right)}^{2}} ^ 2 = \frac{0 + 24 \left(4 - 3 t\right)}{4 - 3 t} ^ 4$ $= \frac{24}{4 - 3 t} ^ 3$ Alternative If (when) you've learned the chain rule, you may agree that it is easier to continue rewriting to get: $\frac{4}{4 - 3 t} ^ 2 = 4 {\left(4 - 3 t\right)}^{- 2}$ The derivative is may be found by using the power rule and the chain rule. $- 8 {\left(4 - 3 t\right)}^{- 3} \cdot \left(- 3\right) = 24 {\left(4 - 3 t\right)}^{- 3}$
# C++ bitset and its application ? Server Side ProgrammingProgramming A bitset is a dataset that stores multiple boolean values but takes lesser memory space as compared to other data sets that can store a sequence of bits like a boolean array or boolean vector. Bitsets stores the binary bits in a form that takes less memory space, it stores them in compressed from. Accessing any element is same as others i.e. by using its index value i.e. bitset_name[index]. But the indexing of elements in a bitset is reverse. Let’s take an example, for bitset {01101001} the element at 0th index is 1 and so on. So 0’s are at index 1, 2, 4,7. And 1’s are at index 0,3,5,6. Let’s create a program that will use all functions of the bitset − ## Example Live Demo #include <bits/stdc++.h> using namespace std; #define setSize 32 int main() { bitset<setSize> bset1; // value is 00000000000000000000000000000000 bitset<setSize> bset2(20); //value is 00000000000000000000000000010100 bitset<setSize> bset3(string("1100")); // value is 00000000000000000000000000001100 cout<<"The values of bitsets are :\n" ; cout<<"bitset 1 : "<<bset1<<endl; cout<<"bitset 2 : "<<bset2<<endl; cout<<"bitset 3 : "<<bset3<<endl; cout << endl; bitset<8> bset4; // value is 00000000 bset4[1] = 1; cout<<"value after changing a bit :"<<bset4<<endl; bset4[4] = bset4[1]; cout <<"changing value using other method :"<<bset4<<endl; int numberofone = bset4.count(); int numberofzero = bset4.size() - numberofone; cout<<"The set"<<bset4<<"has"<<numberofone<<"ones and"<<numberofzero<<"zeros\n"; cout << "bool representation of " << bset4 << " : "; for (int i = 0; i < bset4.size(); i++) cout << bset4.test(i) << " "; cout << endl; if (!bset1.none()) cout << "bset1 has some bit set\n"; cout <<".set() method sets all bits, bset4.set() = "<< bset4.set() << endl; cout<<"changing a specific bit(4) to 0 "<<bset4.set(4, 0)<<endl; cout<<"changing a specific bit(4) to 1 "<<bset4.set(4)<<endl; cout<<"Resetting bit at position 2 :"<<bset4.reset(2)<<endl; cout<<"Resetting bits of full bitset : "<<bset4.reset()<<endl; cout<<"Flipping bit at position 2 : "<< bset4.flip(2) << endl; cout<<"Flipping bit of array : "<<bset4.flip() << endl; int num = 100; cout << "\nDecimal number: " << num << " Binary equivalent: " << bitset<8>(num); return 0; } ## Output The values of bitsets are : bitset 1 : 00000000000000000000000000000000 bitset 2 : 00000000000000000000000000010100 bitset 3 : 00000000000000000000000000001100 value after changing a bit :00000010 changing value using other method :00010010 The set00010010has2ones and6zeros bool representation of 00010010 : 0 1 0 0 1 0 0 0 .set() method sets all bits, bset4.set() = 11111111 changing a specific bit(4) to 0 11101111 changing a specific bit(4) to 1 11111111 Resetting bit at position 2 :11111011 Resetting bits of full bitset : 00000000 Flipping bit at position 2 : 00000100 Flipping bit of array : 11111011 Decimal number: 100 Binary equivalent: 01100100 Published on 04-Oct-2019 07:04:35
The widespread Transport Layer Security protocol (TLS), a successor of the better-known SSL protocol, includes a very thoughtful mechanism to add an additional functionality to the protocol. Using the so-called extensions we can extend the TLS capabilities as we desire without actually modifying the protocol itself. The extensions were firstly introduced in the RFC 3456 and later fully incorporated into the TLS itself. The client states which extensions it supports during the handshake (in the ClientHello message) and the server chooses which extensions to use at its sole discretion. Many extensions are commonly used in the wild. In this article, I'm picking just a few out of many to introduce the concept and describe one of the most popular extensions today. #### Server Name Indication [server_name] This extension enables browsers to send the destination hostname as a part of a TLS handshake. Because of the shortage of IPv4 addresses, it's very common that multiple hosts are on the same IP address. In plain HTTP this is solved via the Host request header, which includes the same thing - the hostname. Let's assume you want to make a HTTPS connection to www.susanka.eu and this website sits at IP 88.90.30.48 and as does www.tsusanka.cz. The handshake starts and the server needs to provide a certificate, which is bound to the domain name. Since both tsusanka.cz and susanka.eu are found at the same machine with the same IP, how can it know which certificate to present? And that's exactly where the Server Name Indication comes in handy. The client simply includes the hostname in the TLS handshake and the server knows right away which certificate to present. #### Elliptic Curves capabilities [elliptic_curves] This extension helps the client to specify what elliptic curves it supports. For example, the Elliptic Curves Key Exchange (ECDH) operates on some particular elliptic curve, where the mathematical voodoo occurs. Let's spin up a TLS connection inside Firefox by simply visiting a HTTPS website. When inspected in Wireshark you can see that the ClientHello includes this extension and specifies which ECs it supports. We can read from the image that our browser supports four elliptic curves. Let's dig a little bit more into the first one. The ecdh_x25519 stands for the widely used Curve25519. On wikipedia and in the official RFC 7748 we can find that the curve is specified by the formula: $$y_2 = x_3 + 486662 x_2 + x$$ That it is a Montgomery curve, over the quadratic extension of the prime field defined by the prime number $$2^{255} − 19$$ and it uses the base point $$x=9$$ I always had a little bit of trouble understanding why it is this arbitrary curve and not another one. Why this prime number? Why multiply by 486662 and not another number that fits as well? Well, the answer to that is somewhat unsatisfactory. The choice of those constants and the form of the curve is simply to allow some performance tricks to achieve faster computations. Researchers simply found out that we can do some very specific tricks on this curve which makes it a nice fast curve to be used for cryptography. That's one main reason, the other one is security – in other words, it is hard to use it in an unsecure way. Feel free to dive into the official paper or have a look on the SafeCurves project, which does a nice comparison of elliptic curves used for cryptography. #### Session tickets [session_ticket] The TLS handshake is a costly operation. It requires two round-trips and on top of that, the cryptographic operations are CPU-exhaustive. TLS itself incorporates a mechanism called session resumption to abbreviate the handshake. The server assigns the session a unique ID and both the client and the server store the session details under such ID. During the next handshake, both sides simply state the ID and the connection is resumed using the stored data. This mechanism, however, contains one big caveat – the server needs to store the session details for every client. That can add up to a lot of data! For that reason session tickets extension delegates the data-storing exclusively to the client. The server sends all the session data (encrypted) to the client and the client stores it and sends it back to the server on resumption. No server-side storage is then needed whatsoever. This was just a brief introduction into the extensions mechanism. I wanted to demonstrate that using the extensions you can extend or modify the TLS protocol as you desire. Like it? Tip it?
Riesz representation theorem This article describes a theorem concerning the dual of a Hilbert space. For the theorems relating linear functionals to measures, see Riesz–Markov–Kakutani representation theorem. Riesz representation theorem, sometimes called Riesz–Fréchet representation theorem, named after Frigyes Riesz and Maurice René Fréchet, establishes an important connection between a Hilbert space and its continuous dual space. If the underlying field is the real numbers, the two are isometrically isomorphic; if the underlying field is the complex numbers, the two are isometrically anti-isomorphic. The (anti-) isomorphism is a particular natural one as will be described next; a natural isomorphism. Preliminaries and notation Let ${\displaystyle H}$ be a Hilbert space over a field ${\displaystyle \mathbb {F} ,}$ where ${\displaystyle \mathbb {F} }$ is either the real numbers ${\displaystyle \mathbb {R} }$ or the complex numbers ${\displaystyle \mathbb {C} .}$ If ${\displaystyle \mathbb {F} =\mathbb {C} }$ (resp. if ${\displaystyle \mathbb {F} =\mathbb {R} }$) then ${\displaystyle H}$ is called a complex Hilbert space (resp. a real Hilbert space). Every real Hilbert space can be extended to be a dense subset of a unique (up to bijective isometry) complex Hilbert space, called its complexification, which is why Hilbert spaces are often automatically assumed to be complex. Real and complex Hilbert spaces have in common many, but by no means all, properties and results/theorems. This article is intended for both mathematicians and physicists and will describe the theorem for both. In both mathematics and physics, if a Hilbert space is assumed to be real (i.e. if ${\displaystyle \mathbb {F} =\mathbb {R} }$) then this will usually be made clear. Often in mathematics, and especially in physics, unless indicated otherwise, "Hilbert space" is usually automatically assumed to mean "complex Hilbert space." Depending on the author, in mathematics, "Hilbert space" usually means either (1) a complex Hilbert space, or (2) a real or complex Hilbert space. Linear and antilinear maps By definition, an antilinear map (also called a conjugate-linear map) ${\displaystyle f:H\to Y}$ is a map between vector spaces that is additive: ${\displaystyle f(x+y)=f(x)+f(y)}$      for all ${\displaystyle x,y\in H,}$ and antilinear (also called conjugate-linear or conjugate-homogeneous): ${\displaystyle f(cx)={\overline {c}}f(x)}$      for all ${\displaystyle x\in H}$ and all scalar ${\displaystyle c\in \mathbb {F} .}$ In contrast, a map ${\displaystyle f:H\to Y}$ is linear if it is additive and homogeneous: ${\displaystyle f(cx)=cf(x)}$      for all ${\displaystyle x\in H}$ and all scalar ${\displaystyle c\in \mathbb {F} .}$ Every constant 0 map is always both linear and antilinear. If ${\displaystyle \mathbb {F} =\mathbb {R} }$ then the definitions of linear maps and antilinear maps are completely identical. A linear map from a Hilbert space into a Banach space (or more generally, from any Banach space into any topological vector space) is continuous if and only if it is bounded; the same is true of antilinear maps. The inverse of any antilinear (resp. linear) bijection is again an antilinear (resp. linear) bijection. The composition of two antilinear maps is a linear map. Continuous dual and anti-dual spaces A functional on ${\displaystyle H}$ is a function ${\displaystyle H\to \mathbb {F} }$ whose codomain is the underlying scalar field ${\displaystyle \mathbb {F} .}$ Denote by ${\displaystyle H^{*}}$ (resp. by ${\displaystyle {\overline {H}}^{*})}$ the set of all continuous linear (resp. continuous antilinear) functionals on ${\displaystyle H,}$ which is called the (continuous) dual space (resp. the (continuous) anti-dual space) of ${\displaystyle H.}$[1] If ${\displaystyle \mathbb {F} =\mathbb {R} }$ then linear functionals on ${\displaystyle H}$ are the same as antilinear functionals and consequently, the same is true for such continuous maps: that is, ${\displaystyle H^{*}={\overline {H}}^{*}.}$ One-to-one correspondence between linear and antilinear functionals Given any functional ${\displaystyle f~:~H\to \mathbb {F} ,}$ the conjugate of f is the functional denoted by ${\displaystyle {\overline {f}}~:~H\to \mathbb {F} }$      and defined by      ${\displaystyle h\mapsto {\overline {f(h)}}.}$ This assignment is most useful when ${\displaystyle \mathbb {F} =\mathbb {C} }$ because if ${\displaystyle \mathbb {F} =\mathbb {R} }$ then ${\displaystyle f={\overline {f}}}$ and the assignment ${\displaystyle f\mapsto {\overline {f}}}$ reduces down to the identity map. The assignment ${\displaystyle f\mapsto {\overline {f}}}$ defines an antilinear bijective correspondence from the set of all functionals (resp. all linear functionals, all continuous linear functionals ${\displaystyle H^{*}}$) on ${\displaystyle H,}$ onto the set of all functionals (resp. all antilinear functionals, all continuous antilinear functionals ${\displaystyle {\overline {H}}^{*}}$) on ${\displaystyle H.}$ Mathematics vs. physics notations and definitions of inner product The Hilbert space ${\displaystyle H}$ has an associated inner product, which is a map ${\displaystyle H\times H\to \mathbb {F} }$ valued in H's underlying field ${\displaystyle \mathbb {F} ,}$ which is linear in one coordinate and antilinear in the other (as described in detail below). If ${\displaystyle H}$ is a complex Hilbert space (meaning, if ${\displaystyle \mathbb {F} =\mathbb {C} }$), which is very often the case, then which coordinate is antilinear and which is linear becomes a very important technicality. However, if ${\displaystyle \mathbb {F} =\mathbb {R} }$ then the inner product a symmetric map that is simultaneously linear in each coordinate (i.e. bilinear) and antilinear in each coordinate. Consequently, the question of which coordinate is linear and which is antilinear is irrelevant for real Hilbert spaces. Notation for the inner product In mathematics, the inner product on a Hilbert space ${\displaystyle H}$ is often denoted by ${\displaystyle \left\langle \cdot ,\cdot \right\rangle }$ or ${\displaystyle \left\langle \cdot ,\cdot \right\rangle _{H}}$ while in physics, the bra-ket notation ${\displaystyle \left\langle \cdot |\cdot \right\rangle }$ or ${\displaystyle \left\langle \cdot |\cdot \right\rangle _{H}}$ is typically used instead. In this article, these two notations will be related by the equality: ${\displaystyle \left\langle x,y\right\rangle :=\left\langle y|x\right\rangle }$      for all ${\displaystyle x,y\in H.}$ Completing definitions of the inner product The maps ${\displaystyle \left\langle \cdot ,\cdot \right\rangle }$ and ${\displaystyle \left\langle \cdot |\cdot \right\rangle }$ are assumed to have the following two properties: 1. The map ${\displaystyle \left\langle \cdot ,\cdot \right\rangle }$ is linear in its first coordinate; equivalently, the map ${\displaystyle \left\langle \cdot |\cdot \right\rangle }$ is linear in its second coordinate. Explicitly, this means that for every fixed ${\displaystyle y\in H,}$ the map that is denoted by y | • ⟩ = ⟨ •, y ⟩ : H → 𝔽 and defined by h   ↦   y | h = ⟨ h, y      for all ${\displaystyle h\in H}$ is a linear functional on ${\displaystyle H.}$ • In fact, this linear functional is continuous, so y | • ⟩ = ⟨ •, y ⟩ ∈ H*. 2. The map ${\displaystyle \left\langle \cdot ,\cdot \right\rangle }$ is antilinear in its second coordinate; equivalently, the map ${\displaystyle \left\langle \cdot |\cdot \right\rangle }$ is antilinear in its first coordinate. Explicitly, this means that for every fixed ${\displaystyle y\in H,}$ the map that is denoted by ⟨ • | y = ⟨ y, • ⟩ : H → 𝔽 and defined by h   ↦   h | y = ⟨ y, h      for all ${\displaystyle h\in H}$ is an antilinear functional on H • In fact, this antilinear functional is continuous, so ⟨ • | y = ⟨ y, • ⟩ ∈ ${\displaystyle {\overline {H}}^{*}.}$ In mathematics, the prevailing convention (i.e. the definition of an inner product) is that the inner product is linear in the first coordinate and antilinear in the other coordinate. In physics, the convention/definition is unfortunately the opposite, meaning that the inner product is linear in the second coordinate and antilinear in the other coordinate. This article will not chose one definition over the other. Instead, the assumptions made above make it so that the mathematics notation ${\displaystyle \left\langle \cdot ,\cdot \right\rangle }$ satisfies the mathematical convention/definition for the inner product (i.e. linear in the first coordinate and antilinear in the other), while the physics bra-ket notation ${\displaystyle \left\langle \cdot |\cdot \right\rangle }$ satisfies the physics convention/definition for the inner product (i.e. linear in the second coordinate and antilinear in the other). Consequently, the above two assumptions makes the notation used in each field consistent with that field's convention/definition for which coordinate is linear and which is antilinear. Canonical norm and inner product on the dual space and anti-dual space If ${\displaystyle x=y}$ then x | x = ⟨ x, x is a non-negative real number and the map ${\displaystyle \left\|x\right\|:={\sqrt {\left\langle x,x\right\rangle }}={\sqrt {\left\langle x|x\right\rangle }}}$ defines a canonical norm on ${\displaystyle H}$ that makes ${\displaystyle H}$ into a Banach space.[1] As with all Banach spaces, the (continuous) dual space ${\displaystyle H^{*}}$ carries a canonical norm, called the dual norm, that is defined by[1] ${\displaystyle \|f\|_{H^{*}}~:=~\sup _{\|x\|\leq 1,x\in H}|f(x)|\,}$      for every ${\displaystyle f\in H^{*}.}$ The canonical norm on the (continuous) anti-dual space ${\displaystyle {\overline {H}}^{*},}$ denoted by ${\displaystyle \|f\|_{{\overline {H}}^{*}},}$ is defined by using this same equation:[1] ${\displaystyle \|f\|_{{\overline {H}}^{*}}~:=~\sup _{\|x\|\leq 1,x\in H}|f(x)|\,}$      for every ${\displaystyle f\in {\overline {H}}^{*}.}$ This canonical norm on ${\displaystyle H^{*}}$ satisfies the parallelogram law, which means that the polarization identity can be used to define a canonical inner product on ${\displaystyle H^{*},}$ which this article will denote by the notions ${\displaystyle \left\langle f,g\right\rangle _{H^{*}}:=\left\langle g|f\right\rangle _{H^{*}},}$ where this inner product turns ${\displaystyle H^{*}}$ into a Hilbert space. Moreover, the canonical norm induced by this inner product (i.e. the norm defined by ${\displaystyle f\mapsto {\sqrt {\left\langle f,f\right\rangle _{H^{*}}}}}$) is consistent with the dual norm (i.e. as defined above by the supremum over the unit ball); explicitly, this means that the following holds for every ${\displaystyle f\in H^{*}}$: ${\displaystyle \sup _{\|x\|\leq 1,x\in H}|f(x)|=\|f\|_{H^{*}}~=~{\sqrt {\langle f,f\rangle _{H^{*}}}}~=~{\sqrt {\langle f|f\rangle _{H^{*}}}}.}$ As will be described later, the Riesz representation theorem can be used to give an equivalent definition of the canonical norm and the canonical inner product on ${\displaystyle H^{*}.}$ The same equations that were used above can also be used to define a norm and inner product on ${\displaystyle H}$'s anti-dual space ${\displaystyle {\overline {H}}^{*}.}$[1] Canonical isometry between the dual and antidual The complex conjugate ${\displaystyle {\overline {f}}}$ of a functional ${\displaystyle f,}$ which was defined above, satisfies ${\displaystyle \left\|f\right\|_{H^{*}}~=~\left\|{\overline {f}}\right\|_{{\overline {H}}^{*}}}$      and,      ${\displaystyle \left\|{\overline {g}}\right\|_{H^{*}}~=~\left\|g\right\|_{{\overline {H}}^{*}}}$ for every ${\displaystyle f\in H^{*}}$ and every ${\displaystyle g\in {\overline {H}}^{*}.}$ This says exactly that that the canonical antilinear bijection defined by ${\displaystyle \operatorname {Cong} ~:~H^{*}\to {\overline {H}}^{*}}$      where      ${\displaystyle \operatorname {Cong} (f):={\overline {f}}}$ as well as its inverse ${\displaystyle \operatorname {Cong} ^{-1}~:~{\overline {H}}^{*}\to H^{*}}$ are antilinear isometries and consequently also homeomorphisms. If ${\displaystyle \mathbb {F} =\mathbb {R} }$ then ${\displaystyle H^{*}={\overline {H}}^{*}}$ and this canonical map ${\displaystyle \operatorname {Cong} :H^{*}\to {\overline {H}}^{*}}$ reduces down to the identity map. Riesz representation theorem Theorem — Let ${\displaystyle H}$ be a Hilbert space whose inner product ${\displaystyle \left\langle x,y\right\rangle }$ is linear in its first argument and antilinear in its second argument (the notation ${\displaystyle \left\langle y|x\right\rangle :=\left\langle x,y\right\rangle }$ is used in physics). For every continuous linear functional ${\displaystyle \varphi \in H^{*},}$ there exists a unique ${\displaystyle f_{\varphi }\in H}$ such that ${\displaystyle \varphi (x)=\left\langle f_{\varphi }|x\right\rangle =\left\langle x,f_{\varphi }\right\rangle }$      for all ${\displaystyle x\in H,}$ and moreover, ${\displaystyle \left\|f_{\varphi }\right\|_{H}=\|\varphi \|_{H^{*}}.}$ • Importantly for complex Hilbert spaces, note that the vector ${\displaystyle f_{\varphi }\in H}$ is always located in the antilinear coordinate of the inner product (no matter which notation is used).[note 1] Consequently, the map ${\displaystyle H^{*}\to H}$ defined by ${\displaystyle \varphi \mapsto f_{\varphi }}$ is a bijective antilinear isometry whose inverse is the antilinear isometry ${\displaystyle \Phi :H\to H^{*}}$ defined by ${\displaystyle y\mapsto \left\langle \bullet ,y\right\rangle =\left\langle y|\bullet \right\rangle .}$ For ${\displaystyle y\in H,}$ the physics notation for the functional ${\displaystyle \Phi (y)\in H^{*}}$ is the bra ${\displaystyle \left\langle y\right|,}$ where explicitly this means that ${\displaystyle \left\langle y\right|:=\Phi (y),}$ which complements the ket notation ${\displaystyle \left|y\right\rangle }$ defined by ${\displaystyle \left|y\right\rangle :=y.}$ Proof — Let ${\displaystyle M:=\operatorname {ker} \varphi :=\{u\in H\ |\ \varphi (u)=0\}.}$ Then ${\displaystyle M}$ is closed subspace of ${\displaystyle H.}$ If ${\displaystyle M=H}$ (or equivalently, if φ = 0) then we take ${\displaystyle f_{\varphi }:=0}$ and we're done. So assume ${\displaystyle M\neq H.}$ It is first shown that ${\displaystyle M^{\perp }=\{v\in H~:~\langle m,v\rangle =0~{\text{ for all }}m\in M\}}$ is one-dimensional. Using Zorn's lemma or the well-ordering theorem it can be shown that there exists some non-zero vector ${\displaystyle v}$ in ${\displaystyle M^{\perp }}$ — proving this is left as an exercise to the reader. We continue: Let ${\displaystyle v_{1},}$ and ${\displaystyle v_{2}}$ be nonzero vectors in ${\displaystyle M^{\perp }.}$ Then ${\displaystyle \varphi (v_{1})\neq 0,}$ and ${\displaystyle \varphi (v_{2})\neq 0,}$ and there must exist a nonzero real number ${\displaystyle \lambda \neq 0}$ such that ${\displaystyle \lambda \varphi (v_{1})=\varphi (v_{2}).}$ This implies that ${\displaystyle \lambda v_{1}-v_{2}\in M^{\perp }}$ and ${\displaystyle \varphi (\lambda v_{1}-v_{2})=0,}$ so ${\displaystyle \lambda v_{1}-v_{2}\in M.}$ Since ${\displaystyle M^{\perp }\cap M=\{0\},}$ this implies that ${\displaystyle \lambda v_{1}-v_{2}=0,}$ as desired. Now let ${\displaystyle g}$ be a unit vector in ${\displaystyle M^{\perp }.}$ For arbitrary ${\displaystyle x\in H,}$ let ${\displaystyle v}$ be the orthogonal projection of ${\displaystyle x}$ onto ${\displaystyle M^{\perp }.}$ Then ${\displaystyle v=\langle x,g\rangle g}$ and ${\displaystyle \langle g,x-v\rangle =0}$ (from the properties of orthogonal projections), so that ${\displaystyle x-v\in M}$ and ${\displaystyle \langle x,g\rangle =\langle v,g\rangle .}$ Thus ${\displaystyle \varphi (x)=\varphi (v+x-v)=\varphi (\langle x,g\rangle g)+\varphi (x-v)=\langle x,g\rangle \varphi (g)+0=\langle x,g\rangle \varphi (g)=\left\langle x,{\overline {\varphi (g)}}g\right\rangle .}$ Because of this, we take ${\displaystyle f_{\varphi }:={\overline {\varphi (g)}}g.}$ We also see that ${\displaystyle \left\|f_{\varphi }\right\|_{H}=|\varphi (g)|.}$ From the Cauchy-Bunyakovsky-Schwarz inequality ${\displaystyle |\varphi (x)|\leq \|x\||\varphi (g)|\|g\|=\|x\||\varphi (g)|,}$ and so if ${\displaystyle x}$ has unit norm then ${\displaystyle \|\varphi \|_{H^{*}}\leq |\varphi (g)|.}$ Since ${\displaystyle g}$ has unit norm, we have ${\displaystyle \|\varphi \|_{H*}=|\varphi (g)|.}$ Observations: • ${\displaystyle \varphi \left(f_{\varphi }\right)=\left\langle f_{\varphi },f_{\varphi }\right\rangle =\left\|f_{\varphi }\right\|^{2}=\left\|\varphi \right\|^{2}.}$ So in particular, we always have ${\displaystyle \varphi \left(f_{\varphi }\right)\geq 0}$ is real, where ${\displaystyle \varphi \left(f_{\varphi }\right)=0}$ fφ = 0 φ = 0. • Showing that there is a non-zero vector ${\displaystyle v}$ in ${\displaystyle M^{\perp }}$ relies on the continuity of ${\displaystyle \phi }$ and the Cauchy completeness of ${\displaystyle H}$. This is the only place in the proof in which these properties are used. Constructions Using the notation from the theorem above, we now provide ways of constructing ${\displaystyle f_{\varphi }}$ from ${\displaystyle \varphi \in H^{*}.}$ • If φ = 0 then fφ := 0 and otherwise ${\displaystyle f_{\varphi }:={\frac {{\overline {\varphi (g)}}g}{\|g\|^{2}}}}$ for any ${\displaystyle 0\neq g\in \left(\operatorname {ker} \varphi \right)^{\perp }.}$ • If ${\displaystyle g\in \left(\operatorname {ker} \varphi \right)^{\perp }}$ is a unit vector then ${\displaystyle f_{\varphi }:={\overline {\varphi (g)}}g.}$ • If g is a unit vector satisfying the above condition then the same is true of -g (the only other unit vector in ${\displaystyle \left(\operatorname {ker} \varphi \right)^{\perp }}$). However, ${\displaystyle {\overline {\varphi (-g)}}(-g)={\overline {\varphi (g)}}g=f_{\varphi }}$ so both these vectors result in the same ${\displaystyle f_{\varphi }.}$ • If φ(x) ≠ 0 and xK is the orthogonal projection of ${\displaystyle x}$ onto ker φ, then ${\displaystyle f_{\varphi }={\frac {\|\varphi \|^{2}}{\varphi (x)}}(x-x_{K}).}$[note 2] • Suppose φ ≠ 0 and let ${\displaystyle M_{\mathbb {R} }:=(\operatorname {Re} \varphi )^{-1}\left(0\right)=\varphi ^{-1}\left(i\mathbb {R} \right)}$ where note that ${\displaystyle f_{\varphi }\not \in M_{\mathbb {R} }}$ since ${\displaystyle \varphi \left(f_{\varphi }\right)=\left\|\varphi \right\|^{2}\neq 0}$ is real and ${\displaystyle \operatorname {ker} \varphi }$ is a proper subset of ${\displaystyle M_{\mathbb {R} }.}$ If we reinterpret ${\displaystyle H}$ as a real Hilbert space H (with the usual real-valued inner product defined by ${\displaystyle \left\langle x,y\right\rangle _{\mathbb {R} }:=\operatorname {Re} \left\langle x,y\right\rangle }$), then ${\displaystyle \operatorname {ker} \varphi }$ has real codimension 1 in ${\displaystyle M_{\mathbb {R} },}$ where ${\displaystyle M_{\mathbb {R} }}$ has real codimension 1 in H, and ${\displaystyle \left\langle f_{\varphi },M_{\mathbb {R} }\right\rangle _{\mathbb {R} }=0}$ (i.e. ${\displaystyle f_{\varphi }}$ is perpendicular to ${\displaystyle M_{\mathbb {R} }}$ with respect to ${\displaystyle \left\langle \cdot ,\cdot \right\rangle _{\mathbb {R} }}$). • In the theorem and constructions above, if we replace ${\displaystyle H}$ with its real Hilbert space counterpart H and if we replace φ with Re φ then ${\displaystyle f_{\varphi }=f_{\operatorname {Re} \varphi },}$ meaning that we will obtain the exact same vector ${\displaystyle f_{\varphi }}$ by using (H, ⟨⋅, ⋅⟩) and the real linear functional Re φ as we did with the origin complex Hilbert space (H, ⟨⋅, ⋅⟩) and original complex linear functional φ (with identical norm values as well). • Given any continuous linear functional ${\displaystyle \varphi \in H^{*},}$ the corresponding element ${\displaystyle f_{\varphi }\in H}$ can be constructed uniquely by ${\displaystyle f_{\varphi }=\varphi (e_{1})e_{1}+\varphi (e_{2})e_{2}+...,}$ where ${\displaystyle \{e_{i}\}}$ is an orthonormal basis of H, and the value of ${\displaystyle f_{\varphi }}$ does not vary by choice of basis. Thus, if ${\displaystyle y\in H,y=a_{1}e_{1}+a_{2}e_{2}+...,}$ then ${\displaystyle \varphi (y)=a_{1}\varphi (e_{1})+a_{2}\varphi (e_{2})+...=\langle f_{\varphi },y\rangle .}$ Canonical injection from a Hilbert space to its dual and anti-dual For every ${\displaystyle y\in H,}$ the inner product on ${\displaystyle H}$ can be used to define two continuous (i.e. bounded) canonical maps: • The map defined by placing ${\displaystyle y}$ into the antilinear coordinate of the inner product and letting the variable ${\displaystyle h\in H}$ vary over the linear coordinate results in a linear functional on H: φy = y | • ⟩ = ⟨ •, y ⟩ : H → 𝔽       defined by       hy | h = ⟨ h, y This map is an element of ${\displaystyle H^{*},}$ which is the continuous dual space of ${\displaystyle H.}$ The canonical map from ${\displaystyle H}$ into its dual ${\displaystyle H^{*}}$[1] is the antilinear operator ${\displaystyle \Phi :=\operatorname {In} _{H}^{H^{*}}~:~H\to H^{*}}$       defined by       y ↦ φy = ⟨ • | y = ⟨ y, • ⟩ which is also an injective isometry.[1] The Riesz representation theorem states that this map is surjective (and thus bijective). Consequently, every continuous linear functional on ${\displaystyle H}$ can be written (uniquely) in this form.[1] • The map defined by placing ${\displaystyle y}$ into the linear coordinate of the inner product and letting the variable ${\displaystyle h\in H}$ vary over the antilinear coordinate results in an antilinear functional: ⟨ • | y = ⟨ y, • ⟩ : H → 𝔽       defined by       hh | y = ⟨ y, h, This map is an element of ${\displaystyle {\overline {H}}^{*},}$ which is the continuous anti-dual space of ${\displaystyle H.}$ The canonical map from ${\displaystyle H}$ into its anti-dual ${\displaystyle {\overline {H}}^{*}}$[1] is the linear operator ${\displaystyle \operatorname {In} _{H}^{{\overline {H}}^{*}}~:~H\to {\overline {H}}^{*}}$       defined by       y⟨ • | y = ⟨ y, • ⟩ which is also an injective isometry.[1] The Fundamental theorem of Hilbert spaces, which is related to Riesz representation theorem, states that this map is surjective (and thus bijective). Consequently, every antilinear functional on ${\displaystyle H}$ can be written (uniquely) in this form.[1] If ${\displaystyle \operatorname {Cong} :H^{*}\to {\overline {H}}^{*}}$ is the canonical antilinear bijective isometry ${\displaystyle f\mapsto {\overline {f}}}$ that was defined above, then the following equality holds: ${\displaystyle \operatorname {Cong} ~\circ ~\operatorname {In} _{H}^{H^{*}}~=~\operatorname {In} _{H}^{{\overline {H}}^{*}}.}$ Let ${\displaystyle A:H\to Z}$ be a continuous linear operator between Hilbert spaces ${\displaystyle \left(H,\langle \cdot ,\cdot \rangle _{H}\right)}$ and ${\displaystyle \left(Z,\langle \cdot ,\cdot \rangle _{Z}\right).}$ As before, let ${\displaystyle \langle y|x\rangle _{H}:=\langle x,y\rangle _{H}}$ and ${\displaystyle \langle y|x\rangle _{Z}:=\langle x,y\rangle _{Z}.}$ The adjoint of ${\displaystyle A:H\to Z}$ is the linear operator ${\displaystyle A^{*}:Z\to H}$ defined by the condition: ${\displaystyle \left\langle z|Ah\right\rangle _{Z}=\left\langle A^{*}z|h\right\rangle _{H},}$      for all ${\displaystyle h\in H}$ and all ${\displaystyle z\in Z.}$ It is also possible to define the transpose of ${\displaystyle A:H\to Z,}$ which is the map ${\displaystyle {}^{t}A:Z^{*}\to H^{*}}$ defined by sending a continuous linear functionals ${\displaystyle \psi \in Z^{*}}$ to ${\displaystyle {}^{t}A(\psi ):=\psi \circ A}$ The adjoint ${\displaystyle A^{*}:Z\to H}$ is actually just to the transpose ${\displaystyle {}^{t}A:Z^{*}\to H^{*}}$ when the Riesz representation theorem is used to identify ${\displaystyle Z}$ with ${\displaystyle Z^{*}}$ and ${\displaystyle H}$ with ${\displaystyle H^{*}.}$ To make this explicit, let ${\displaystyle \Phi _{H}~:~H\to H^{*}}$ and ${\displaystyle \Phi _{Z}~:~Z\to Z^{*}}$ be the bijective antilinear isometries defined respectively by gg | • ⟩H = ⟨ •, gH      and      zz | • ⟩Z = ⟨ •, zZ so that by definition ${\displaystyle (\Phi _{H}g)h=\langle g|h\rangle _{H}=\langle h,g\rangle _{H}}$ for all ${\displaystyle g,h\in H}$      and      ${\displaystyle (\Phi _{Z}z)y=\langle z|y\rangle _{Z}=\langle y,z\rangle _{Z}}$ for all ${\displaystyle y,z\in Z.}$ The relationship between the adjoint and transpose can be shown (see footnote for proof)[note 3] to be: ${\displaystyle {}^{t}A~\circ ~\Phi _{Z}~=~\Phi _{H}~\circ ~A^{*}}$ which can be rewritten as: ${\displaystyle A^{*}~=~\Phi _{H}^{-1}~\circ ~{}^{t}A~\circ ~\Phi _{Z}}$      and      ${\displaystyle {}^{t}A~=~\Phi _{H}~\circ ~A^{*}~\circ ~\Phi _{Z}^{-1}.}$ Extending the bra-ket notation to bras and kets Let ${\displaystyle \left(H,\langle \cdot ,\cdot \rangle _{H}\right)}$ be a Hilbert space and as before, let ${\displaystyle \langle y|x\rangle _{H}:=\langle x,y\rangle _{H}.}$ Let ${\displaystyle \Phi ~:~H\to H^{*}}$ be the bijective antilinear isometry defined by gg | • ⟩H = ⟨ •, gH so that by definition ${\displaystyle (\Phi h)g=\langle h|g\rangle _{H}=\langle g,h\rangle _{H}}$      for all ${\displaystyle g,h\in H.}$ Bras Given a vector ${\displaystyle h\in H,}$ let ${\displaystyle \langle h|}$ denote the continuous linear functional ${\displaystyle \Phi h}$; that is, ${\displaystyle \langle h|~:=~\Phi h.}$ The resulting of plugging some given ${\displaystyle g\in H}$ into the functional ${\displaystyle \langle h|}$ is the scalar ${\displaystyle \langle h|g\rangle _{H}=\langle g,h\rangle _{H},}$ where ${\displaystyle \langle h|g\rangle }$ is the notation that is used instead of ${\displaystyle \langle h|(g)}$ or ${\displaystyle \langle h|g.}$ The assignment ${\displaystyle h\mapsto \langle h|}$ is just the isometric antilinear isomorphism ${\displaystyle \Phi ~:~H\to H^{*}}$ so ${\displaystyle ~\langle cg+h|~=~{\overline {c}}\langle g|~+~\langle h|~}$ holds for all ${\displaystyle g,h\in H}$ and all scalars ${\displaystyle c.}$ Given a continuous linear functional ${\displaystyle \psi \in H^{*},}$ let ${\displaystyle \langle \psi |}$ denote the vector ${\displaystyle \Phi ^{-1}\psi }$; that is, ${\displaystyle \langle \psi |~:=~\Phi ^{-1}\psi .}$ The defining condition of the vector ${\displaystyle \langle \psi |\in H}$ is the technically correct but unsightly equality ${\displaystyle \left\langle \,\langle \psi |\,|g\right\rangle _{H}~=~\psi g}$      for all ${\displaystyle g\in H,}$ which is why the notation ${\displaystyle \left\langle \psi |g\right\rangle }$ is used in place of ${\displaystyle \left\langle \,\langle \psi |\,|g\right\rangle _{H}=\left\langle g,\,\langle \psi |\,\right\rangle _{H}.}$ The defining condition becomes ${\displaystyle \left\langle \psi |g\right\rangle ~=~\psi g}$      for all ${\displaystyle g\in H.}$ The assignment ${\displaystyle \psi \mapsto \langle \psi |}$ is just the isometric antilinear isomorphism ${\displaystyle \Phi ^{-1}~:~H^{*}\to H}$ so ${\displaystyle ~\langle c\psi +\phi |~=~{\overline {c}}\langle \psi |~+~\langle \phi |~}$ holds for all ${\displaystyle \phi ,\psi \in H^{*}}$ and all scalars ${\displaystyle c.}$ Kets For any given vector ${\displaystyle g\in H,}$ the notation ${\displaystyle |g\rangle }$ is used to denote ${\displaystyle g}$; that is, ${\displaystyle |g\rangle :=g.}$ The notation ${\displaystyle \langle h|g\rangle }$ and ${\displaystyle \langle \psi |g\rangle }$ is used in place of ${\displaystyle \left\langle h|\,|g\rangle \,\right\rangle _{H}~=~\left\langle \,|g\rangle ,h\right\rangle _{H}}$ and ${\displaystyle \left\langle \psi |\,|g\rangle \,\right\rangle _{H}~=~\left\langle g,\,\langle \psi |\,\right\rangle _{H},}$ respectively. As expected, ${\displaystyle ~\langle \psi |g\rangle ~=~\psi g~}$ and ${\displaystyle ~\langle h|g\rangle ~}$ really is just the scalar ${\displaystyle ~\langle h|g\rangle _{H}~=~\langle g,h\rangle _{H}.}$ Properties of induced antilinear map The mapping ${\displaystyle \Phi }$: HH* defined by ${\displaystyle \Phi (x)}$ = ${\displaystyle \varphi _{x}}$ is an isometric antilinear isomorphism, meaning that: • ${\displaystyle \Phi }$ is bijective. • The norms of ${\displaystyle x}$ and ${\displaystyle \varphi _{x}}$ agree: ${\displaystyle \Vert x\Vert =\Vert \Phi (x)\Vert .}$ • Using this fact, this map could be used to give an equivalent definition of the canonical dual norm of ${\displaystyle H^{*}.}$ The canonical inner product on ${\displaystyle H^{*}}$ could be defined similarly. • ${\displaystyle \Phi }$ is additive: ${\displaystyle \Phi (x_{1}+x_{2})=\Phi (x_{1})+\Phi (x_{2}).}$ • If the base field is ${\displaystyle \mathbb {R} ,}$ then ${\displaystyle \Phi (\lambda x)=\lambda \Phi (x)}$ for all real numbers λ. • If the base field is ${\displaystyle \mathbb {C} ,}$ then ${\displaystyle \Phi (\lambda x)={\bar {\lambda }}\Phi (x)}$ for all complex numbers λ, where ${\displaystyle {\bar {\lambda }}}$ denotes the complex conjugation of ${\displaystyle \lambda .}$ The inverse map of ${\displaystyle \Phi }$ can be described as follows. Given a non-zero element ${\displaystyle \varphi }$ of H*, the orthogonal complement of the kernel of ${\displaystyle \varphi }$ is a one-dimensional subspace of H. Take a non-zero element z in that subspace, and set ${\displaystyle x={\overline {\varphi (z)}}\cdot z/{\left\Vert z\right\Vert }^{2}.}$ Then ${\displaystyle \Phi (x)}$ = ${\displaystyle \varphi .}$ Alternatively, the assignment ${\displaystyle x\mapsto \varphi _{x}}$ can be viewed as a bijective linear isometry ${\displaystyle H\to {\overline {H}}^{*}}$ into the anti-dual space of ${\displaystyle H,}$[1] which is the complex conjugate vector space of the continuous dual space H*. Historically, the theorem is often attributed simultaneously to Riesz and Fréchet in 1907 (see references). In the mathematical treatment of quantum mechanics, the theorem can be seen as a justification for the popular bra–ket notation. The theorem says that, every bra ${\displaystyle \langle \psi |}$ has a corresponding ket ${\displaystyle |\psi \rangle ,}$ and the latter is unique. Notes 1. Trèves 2006, pp. 112-123. 1. ^ If ${\displaystyle \mathbb {F} =\mathbb {R} }$ then the inner product will be symmetric so it doesn't matter which coordinate of the inner product the element ${\displaystyle y}$ is placed into because the same map will result. But if ${\displaystyle \mathbb {F} =\mathbb {C} }$ then except for the constant 0 map, antilinear functionals on ${\displaystyle H}$ are completely distinct from linear functionals on ${\displaystyle H,}$ which makes the coordinate that ${\displaystyle y}$ is placed into is very important. For a non-zero ${\displaystyle y\in H}$ to induce a linear functional (rather than an antilinear functional), ${\displaystyle y}$ must be placed into the antilinear coordinate of the inner product. If it is incorrectly placed into the linear coordinate instead of the antilinear coordinate then the resulting map will be the antilinear map ${\displaystyle h\mapsto \left\langle y,h\right\rangle =\langle h|y\rangle ,}$ which is not a linear functional on ${\displaystyle H}$ and so it will not be an element of the continuous dual space ${\displaystyle H^{*}.}$ 2. ^ Since we must have ${\displaystyle x_{K}=x-{\frac {\left\langle x,f_{\varphi }\right\rangle }{\left\|f_{\varphi }\right\|^{2}}}f_{\varphi }.}$ Now use ${\displaystyle \left\|f_{\varphi }\right\|^{2}=\left\|\varphi \right\|^{2}}$ and ${\displaystyle \left\langle x,f_{\varphi }\right\rangle =\varphi \left(x\right)}$ and solve for ${\displaystyle f_{\varphi }.}$ 3. ^ To show that ${\displaystyle {}^{t}A~\circ ~\Phi _{Z}~=~\Phi _{H}~\circ ~A^{*},}$ fix ${\displaystyle z\in Z.}$ The definition of ${\displaystyle {}^{t}A}$ implies ${\displaystyle \left({}^{t}A\circ \Phi _{Z}\right)z=\left({}^{t}A(\Phi _{Z}z)\right)=\left(\Phi _{Z}z\right)\circ A}$ so it remains to show that ${\displaystyle \left(\Phi _{Z}z\right)\circ A=\Phi _{H}\left(A^{*}z\right).}$ If ${\displaystyle h\in H}$ then ${\displaystyle \left(\left(\Phi _{Z}z\right)\circ A\right)h=\left(\Phi _{Z}z\right)(Ah)=\langle z|Ah\rangle _{Z}=\langle A^{*}z|h\rangle _{H}=\left(\Phi _{H}\left(A^{*}z\right)\right)h,}$ as desired. ◼ References • Fréchet, M. (1907). "Sur les ensembles de fonctions et les opérations linéaires". Les Comptes rendus de l'Académie des sciences (in French). 144: 1414–1416. • P. Halmos Measure Theory, D. van Nostrand and Co., 1950. • P. Halmos, A Hilbert Space Problem Book, Springer, New York 1982 (problem 3 contains version for vector spaces with coordinate systems). • Riesz, F. (1907). "Sur une espèce de géométrie analytique des systèmes de fonctions sommables". Comptes rendus de l'Académie des Sciences (in French). 144: 1409–1411. • Riesz, F. (1909). "Sur les opérations fonctionnelles linéaires". Comptes rendus de l'Académie des Sciences (in French). 149: 974–977. • Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. 8 (Second ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 978-0-07-054236-5. OCLC 21163277. • Walter Rudin, Real and Complex Analysis, McGraw-Hill, 1966, ISBN 0-07-100276-6. • Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322.
# Spherical Harmonics In solving the 3D hydrogen atom, we obtain a spherical harmonic, Y such that, $$Y_{lm}(\theta,\phi) = \epsilon\sqrt{\frac{(2l+1)}{(4\pi)}}\sqrt{\frac{(l-|m|)!}{(l+|m|)!}}e^{im\phi}P^m_l(cos \theta)$$ where $$\epsilon = (-1)^m$$ for $$m \geq 0$$ and $$\epsilon = 1$$ for $$m \leq 0$$. In quantum, m = -l, -l+1, ..., l-1, l. But according to the formula above, when m = l, we should have zero and not a finite value, since $$l - |m| = 0$$. Which means the wavefunction should be zero when m = l. Where did I go wrong? Last edited:
# C# Strings and Unicode - Solved ## Recommended Posts beebs1    398 Hiya, I'm writing a simple utility in C# to write out unicode files that will be read by a C++ program. I'm using the C# objects String, FileStream and BinaryWriter to do this. I've noticed though, that if the string contains only ASCII characters, it is written out in 8-bit ASCII format, and if it contains Unicode it is written out in UTF-16 format. Can anyone tell me how to force it to use UTF-16, even if no Unicode characters are present? My code is pretty much like this: try { FileStream stream = new FileStream( "test.txt", FileMode.Create, FileAccess.Write); BinaryWriter writer = new BinaryWriter(stream); string s = txtEntry.Text; // could be ASCII or Unicode writer.Write(s.ToCharArray()); writer.Close(); } // catch, etc... Many thanks for any help [smile] James. Solved - there is an overloaded BinaryWriter constructor that takes an encoding to be used. [Edited by - beebs1 on August 2, 2007 10:48:36 AM]
Kate Scripting: Indentation Kate Part in KDE4 supports the ECMAScript (JavaScript) language by using kjs. In KDE3 we had several hard-coded indenters in C++, the idea is to let scripts do all the indentation in KDE4. How does it work? It is similar to vim: You simply create a script in the directory \$KDEDIR/share/apps/katepart/jscript. An indentation script has to follow several rules: 1. it must have a valid script header (the first line must include the string kate-script and indentation scripts must have the type: indentation) 2. it must define some variables and functions Whenever the user types a character, the flow in Kate Part works like this 1. check the indentation script’s trigger characters, i.e. whether the script wants to indent code for the typed character 2. if yes, call the indentation function 3. the return value of the indentation function is an integer value representing the new indentation depth in spaces. In the 3rd step there are 2 special cases for the return value: 1. return value = -1: Kate keeps the indentation, that is it searches for the last non-empty line and uses its indentation for the current line 2. return value < -1 tells Kate to do nothing So how does a script look like exactly? The name does not really matter, so let’s call it foobar.js: /* kate-script * name: MyLanguage Indenter * author: Foo Bar * version: 1 * kate-version: 3.0 * type: indentation * * optional bla bla here */ // specifies the characters which should trigger indent() beside the default '\n' triggerCharacters = "{}"; // called for the triggerCharacters {} and function indent(line, indentWidth, typedChar) { // do calculations here // if typedChar is an empty string, the user hit enter/return // todo: Implement your indentation algorithms here. return -1; // keep indentation } • name [required]: the name will appear in Kate’s menus • license [optional]: not visible in gui, but should be specified in js-files. it is always better to have a defined license • author [optional]: name • version [optional]: recommended. an integer • kate-version [required]: the minimum required kate-version (e.g. for api changes) • type [required]: must be set to indentation The only missing part is the API which Kate Part exports to access the document and the view. Right now, there is no API documentation, so you have to look at the code: You will find, that the current document exports functions like • document.fileName() • document.isModified() • document.charAt(line, column) • etc… The view exports functions like • view.cursorPosition() • view.hasSelection() • view.clearSelection() That’s the boring part of this blog. The interesting one is unfortunately shorter: we are looking for contributors who want to write scripts or help in the C++ implementation :)
# Precise determination of the mass of the Higgs boson and tests of compatibility of its couplings with the standard model predictions using proton collisions at 7 and 8 TeV This is a condensed description with plots for the analysis CMS-HIG-14-009. ## Abstract Properties of the Higgs boson with mass near 125 GeV are measured in proton-proton collisions with the CMS experiment at the LHC. Comprehensive sets of production and decay measurements are combined. The decay channels include $\gamma\gamma$, ZZ, WW, $\tau\tau$, bb, and $\mu\mu$ pairs. The data samples were collected in 2011 and 2012 and correspond to integrated luminosities of up to 5.1 fb$^{-1}$ at 7 TeV and up to 19.7 fb$^{-1}$ at 8 TeV. From the high-resolution $\gamma\gamma$ and ZZ channels, the mass of the Higgs boson is determined to be $125.02 \,^{+0.26}_{-0.27}\,\text{(stat.)} \,^{+0.14}_{-0.15}\,\text{(syst.)}$ GeV. For this mass value, the event yields obtained in the different analyses tagging specific decay channels and production mechanisms are consistent with those expected for the standard model Higgs boson. The combined best-fit signal relative to the standard model expectation is $1.00 \,\pm0.09\,\text{(stat.)} \,^{+0.08}_{-0.07}\,\text{(theo.)} \,\pm0.07\,\text{(syst.)}$ at the measured mass. The couplings of the Higgs boson are probed for deviations in magnitude from the standard model predictions in multiple ways, including searches for invisible and undetected decays. No significant deviations are found. $\def\MX{\mathrm{m}_\mathrm{H}} \def\MH{\mathrm{m}_\mathrm{H}} \def\hgg{\mathrm{H}\rightarrow\mathrm{\gamma\gamma}} \def\hzzllll{\mathrm{H}\rightarrow\mathrm{ZZ}\rightarrow\mathrm{4\ell}} \def\ggh{\mathrm{ggH}} \def\tth{\mathrm{ttH}} \def\vh{\mathrm{VH}} \def\vbf{\mathrm{VBF}} \def\to{\rightarrow} \def\gg{\mathrm{\gamma\gamma}} \def\PH{\mathrm{H}} \def\GSM{\mathrm{\Gamma}_{\mathrm{SM}}} \def\GammaObsComb{1.7} \def\GammaExpComb{2.3} \def\muggh{\mu_{\mathrm{ggH}}} \def\mutth{\mu_{\mathrm{ttH}}} \def\muvh{\mu_{\mathrm{VH}}} \def\muvbf{\mu_{\mathrm{VBF}}}$ ## Mass measurement and direct limits on the natural width Plot Caption The 68% CL confidence regions for the signal strength $\sigma /\sigma_{\text{SM}}$ versus the mass of the boson $\MH$ for the $\hgg$ and $\hzzllll$ final states, and their combination. The symbol $\sigma / \sigma_{\text{SM}}$ denotes the production cross section times the relevant branching fractions, relative to the SM expectation. In this combination, the relative signal strength for the two decay modes is set to the expectation for the SM Higgs boson. Scan of the test statistic $q(\MX)=-2 \, \Delta \ln \mathcal{L}$ versus the mass of the boson $\MX$ for the $\hgg$ and $\hzzllll$ final states separately and for their combination. Three independent signal strengths, $(\ggh,\tth)\to \gg$, $(\vbf,\vh)\to \gg$, and $\text{pp}\to\hzzllll$, are profiled together with all other nuisance parameters. The solid curve is obtained by profiling all nuisance parameters and thus includes both statistical and systematic uncertainties. The dashed curve is obtained by fixing all nuisance parameters to their best-fit values, except for those related to the $\hgg$ background description, thus including only statistical uncertainties. The crossings with the thick (thin) horizontal lines define the 68% (95%) CL interval for the measured mass. Scan of the test statistic $q(m_{\PH}^{\gg} - m_{\PH}^{4\ell})$ versus the difference between two individual mass measurements for the same model of signal strengths used in the left panel. Likelihood scan as a function of the width of the boson. The continuous (dashed) lines show the observed (expected) results for the $\hgg$ analysis, the $\hzzllll$ analysis, and their combination. The data are consistent with $\GSM \sim$ 4 MeV and for the combination of the two channels the observed (expected) upper limit on the width at the 95% CL is $\GammaObsComb (\GammaExpComb)$ GeV. ## Significances of the observations in data Significance (mH = 125.0 GeV) Combination Expected (post-fit) H→ZZ tagged 6.3 σ 6.5 σ H→γγ tagged 5.3 σ 5.6 σ H→WW tagged 5.4 σ 4.7 σ H→ττ tagged 3.9 σ 3.8 σ H→bb tagged 2.6 σ 2.0 σ H→μμ tagged <0.1 σ 0.4 σ The observed and median expected significances of the excesses for each decay mode group, assuming $\MX=125.0$ GeV. ## Compatibility of the observed yields with the SM Higgs boson hypothesis ### Grouping by predominant decay mode and/or production tag Plot Caption Values of the best-fit σ/σSM for the combination (solid vertical line) and for subcombinations by predominant decay mode and additional tags targeting a particular production mechanism. The vertical band shows the overall σ/σSM uncertainty. The σ/σSM ratio denotes the production cross section times the relevant branching fractions, relative to the SM expectation. The horizontal bars indicate the ±1 standard deviation uncertainties in the best-fit σ/σSM values for the individual modes; they include both statistical and systematic uncertainties. Values of the best-fit σ/σSM for the combination (solid vertical line) and for subcombinations by predominant decay mode. The vertical band shows the overall σ/σSM uncertainty. The σ/σSM ratio denotes the production cross section times the relevant branching fractions, relative to the SM expectation. The horizontal bars indicate the ±1 standard deviation uncertainties in the best-fit σ/σSM values for the individual modes; they include both statistical and systematic uncertainties. Values of the best-fit σ/σSM for the combination (solid vertical line) and for subcombinations by analysis tags targeting individual production mechanisms. The vertical band shows the overall σ/σSM uncertainty. The σ/σSM ratio denotes the production cross section times the relevant branching fractions, relative to the SM expectation. The horizontal bars indicate the ±1 standard deviation uncertainties in the best-fit σ/σSM values for the individual modes; they include both statistical and systematic uncertainties. #### Numeric values The tables below contain the same information that is shown in Figure 4 of the HIG-14-009 paper. Grouping μ̂ = σ/σSM (mH = 125.0 GeV) by production tag and predominant decay mode value uncertainty H → ZZ (2 jets) 1.549 -0.661/+0.954 H → ZZ (0/1 jet) 0.883 -0.272/+0.337 H → ττ (ttH tag) -1.325 -3.600/+6.079 H → ττ (VH tag) 0.868 -0.884/+1.000 H → ττ (VBF tag) 0.949 -0.381/+0.432 H → ττ (0/1 jet) 0.845 -0.384/+0.423 H → WW (ttH tag) 3.939 -1.437/+1.704 H → WW (VH tag) 0.800 -0.934/+1.089 H → WW (VBF tag) 0.623 -0.479/+0.594 H → WW (0/1 jet) 0.766 -0.206/+0.229 H → γγ (ttH tag) 2.675 -1.729/+2.402 H → γγ (VH tag) 0.575 -0.807/+0.934 H → γγ (VBF tag) 1.514 -0.475/+0.545 H → γγ (untagged) 1.000 -0.257/+0.286 H → bb (ttH tag) 0.650 -1.809/+1.850 H → bb (VH tag) 0.890 -0.441/+0.469 by predominant decay mode value uncertainty H → ZZ tagged 1.003 -0.264/+0.318 H → WW tagged 0.832 -0.201/+0.226 H → γγ tagged 1.121 -0.224/+0.251 H → ττ tagged 0.913 -0.264/+0.289 H → bb tagged 0.836 -0.429/+0.454 by production tag value uncertainty ttH tagged 2.750 -0.911/+1.061 VH tagged 0.833 -0.345/+0.361 VBF tagged 1.150 -0.248/+0.287 Untagged 0.869 -0.146/+0.173 ### Fermion- and boson-mediated production processes and their ratio Plot Caption The 68% CL regions (bounded by the solid curves) for signal strength of the ggH and ttH, and of the VBF and VH production mechanisms: μggH,ttH and μ VBF,VH, respectively. The different colors show the results obtained by combining data from each of the five analyzed decay mode groups: γγ (green), WW (blue), ZZ(red), ττ (violet), bb (cyan). The crosses indicate the best-fit values. The diamond at (1,1) indicates the expected values for the SM Higgs boson. 1D test statistics q(μVBF,VHggH,ttH) scan vs the ratio of signal strength modifiers μVBF,VHggH,ttH combined for all channels. The solid curve represents the observed result in data while the dashed curve indicates the expected median result in the presence of the SM Higgs boson. Crossings with the horizontal thick and thin red lines denote the 68% CL and 95% CL confidence intervals.The cross-section ratios σVBFVH and σggHttH assumed to be as in the SM. #### Numerical values The best-fit values for the signal strength at mH = 125.0 GeV, of the VBF and VH, and of the ggH and ttH production mechanisms, μVBF,VH and μggH,ttH, respectively. The channels are grouped by decay mode tag. The observed and median expected results for the ratio of μVBF,VH to μggH,ttH together with their uncertainties are also given for the full combination. Channel grouping Best fit (μggH,ttH, μVBF,VH) H → γγ tagged $(1.07,1.24)$ H → ZZ tagged $(0.88,1.75)$ H → WW tagged $(0.87,0.66)$ H → ττ tagged $(0.52,1.21)$ H → bb tagged $(0.55,0.85)$ Combined best fit μVBF,VH/μggH,ttH Observed (expected) $1.25_{-0.44}^{+0.62}$ ($1.00_{-0.35}^{+0.49}$) ### Individual production modes Plot Caption Likelihood scan results for $\muggh, \muvbf, \muvh$, and $\mutth$. The inner bars represent the 68% CL confidence intervals while the outer bars represent the 95% CL confidence intervals. When scanning each individual parameter, the three other parameters are profiled. The SM values of the relative branching fractions are assumed for the different decay modes. #### Numerical values The best-fit results for independent signal strengths corresponding to the four main production processes, ggH, VBF, VH, and ttH; the expected sensitivities and observed significances with respect to background-only hypothesis ($\mu = 0$), and the pull of the observation with respect to the SM hypothesis ($\mu=1$). These results assume that the relative values of the branching fractions are those predicted by the SM. Parameter Best fit result (68% CL) for full combination Observed significance ($\sigma$) Expected sensitivity ($\sigma$) Pull to SM hypothesis ($\sigma$) μggH $0.85_{-0.16}^{+0.19}$ 6.6 7.4 -0.8 μVBF $1.16_{-0.34}^{+0.37}$ 3.7 3.3 +0.4 μVH $0.92_{-0.36}^{+0.38}$ 2.7 2.9 -0.2 μttH $2.90_{-0.94}^{+1.08}$ 3.5 1.2 +2.2 Parameter Best fit result (68% CL) for 7 TeV data Best fit result (68% CL) for 8 TeV data μggH $1.03^{+0.37}_{-0.33}$ $0.79^{+0.19}_{-0.17}$ μVBF $1.77^{+0.99}_{-0.90}$ $1.02^{+0.39}_{-0.36}$ μVH $0.68^{+0.99}_{-0.68}$ $0.96^{+0.41}_{-0.39}$ μttH < 2.19 $3.27^{+1.20}_{-1.04}$ ### Ratios between decay modes $\def\muf{\mu_\mathrm{ggH,ttH}} \def\muv{\mu_\mathrm{VBF,VH}}$ Parameterization used to scale the expected SM Higgs boson yields of the different production and decay modes when obtaining the results presented in next table. The $\muf$ and $\muv$ parameters are introduced to reduce the dependency of the results on the SM expectation \begin{array}{c|ccccc} \hline \text{Best-fit} ~\lambda_\text{col,row} & \hgg & \hzz & \hww & \htt & \hbb \\\hline \hgg & 1 & \DRzzgg & \DRwwgg & \DRttgg & \DRbbgg \\\hzz & \DRggzz & 1 & \DRwwzz & \DRttzz & \DRbbzz \\\hww & \DRggww & \DRzzww & 1 & \DRttww & \DRbbww \\\htt & \DRggtt & \DRzztt & \DRwwtt & 1 & \DRbbtt \\\hbb & \DRggbb & \DRzzbb & \DRwwbb & \DRttbb & 1 \\\hline \end{array} The best-fit results and 68% CL confidence intervals for signal strength ratios of the decay mode in each column and the decay mode in each row, as modelled by the parameterization in previous table. When the likelihood of the data is scanned as a function of each individual parameter, the three other parameters in the same row, as well the production cross sections modifiers $\muf$ and $\muv$, are profiled. Since each row corresponds to an independent fit to data, the relation $\lambda_{yy,xx}=1/\lambda_{xx,yy}$ is only approximately satisfied. ### Search for mass-degenerate states with different coupling structures Plot Caption Distribution of the profile likelihood ratio $q_\lambda$ between different assumptions for the structure of the matrix of signal strengths for the production processes and decay modes both for pseudo-data samples generated under the SM hypothesis and the value observed in data. The likelihood in the numerator is that for the data under a model of a general rank 1 matrix, expected if the observations are due to a single particle and of which the SM is a particular case. The likelihood in the denominator is that for the data under a "saturated model'' with as many parameters as there are matrix elements. The arrow represents the observed value in data, $q_\lambda^\text{obs}$. Under the SM hypothesis, the probability to find a value of $q_\lambda \geq q_\lambda^\text{obs}$ is $(7.9\pm0.3)\%$, where the uncertainty reflects only the finite number of pseudo-data samples generated. ## Compatibility of the observed data with the SM Higgs boson couplings ### Relation between the coupling to the W and Z bosons #### Using only untagged pp → H → WW and pp → H → ZZ events and assuming SM couplings to fermions Plot Caption 1D test statistics q($\lambda_{\mathrm{WZ}}$) scan vs $\lambda_{\mathrm{WZ}}$, the ratio of the couplings to W and Z bosons, profiling the coupling modifier κZ and all other nuisances. The coupling to fermions is taken to be the SM one (κF = 1). #### Using all channels, and without assumption on the couplings to fermions (except their universality) Plot Caption 1D test statistics q($\lambda_{\mathrm{WZ}}$) scan vs $\lambda_{\mathrm{WZ}}$, the ratio of the couplings to W and Z bosons, profiling the coupling modifiers κZ and κF and all other nuisances. ### Test of the couplings to massive vector bosons and fermions Plot Caption 2D test statistics q(κV, κF) scan. The cross indicates the best-fit values. The solid, dashed, and dotted contours show the 68%, 95%, and 99.7% CL regions, respectively. The yellow diamond shows the SM point (κV, κf) = (1, 1). The left plot shows the likelihood scan in two quadrants, $(+, +)$ and $(+,-)$. The right plot shows the likelihood scan constrained to the $(+, +)$ quadrant. 2D test statistics q(κV, κF) scan for individual channels (colored swaths) and for the overall combination (thick curve). The cross indicates the global best-fit values. The dashed contour bounds the 95% CL region for the combination. The yellow diamond shows the SM point (κV, κf) = (1, 1). The left plot shows the likelihood scan in two quadrants, $(+, +)$ and $(+,-)$. The right plot shows the positive quadrant only. ### Test for asymmetries in the couplings to fermions #### Up-type vs Down-type Fermions Plot Caption 1D test statistics q(λdu) scan vs the coupling modifier ratio λdu, profiling the coupling modifiers κu and κV and all other nuisances. #### Leptons vs Quarks Plot Caption 1D test statistics q(λlq) scan vs the coupling modifier ratio λlq, profiling the coupling modifiers κq and κV and all other nuisances. ### Test of the scaling of couplings with the masses of SM particles The coupling scale factors to fermions and vector bosons are expressed in terms of a mass scaling parameter $\epsilon$ and a \u201cvacuum expectation value\u201d parameter $M$, described in arXiv:1207.1693. The coupling scale factors to fermions are $\kappa_{f,i}=v\cdot m_{f,i}^{\epsilon} / M^{1+\epsilon}$ and the coupling scale factors to vector bosons are $\kappa_{V,j}=v\cdot m_{V,j}^{2\epsilon} / M^{1+2\epsilon}$, where $v\approx246$ GeV is the SM vacuum expectation value, $m_{f,i}$ are the fermion masses, and $m_{V,i}$ are the vector boson masses. The SM expectation of $\kappa_{f,i}=\kappa_{V,i}=1$ is recovered in the double limit of $\epsilon=0$ and $M=v$. Plot Caption Summary of the fits for deviations in the coupling for the generic five-parameter model not effective loop couplings. In this model, loop-induced couplings are assumed to follow the SM structure as in arXiv:1307.1347. The best fit values of the parameters are shown, with the corresponding 68% and 95% CL intervals. 2D test statistics q(M, ε) scan. The cross indicates the best-fit values. The solid, dashed, and dotted contours show the 68%, 95%, and 99.7% CL confidence regions, respectively. The diamond represents the SM expectation, (M,ε) = (v,0), where v is the SM Higgs vacuum expectation value, v = 246.22 GeV. Summary of the fits for deviations in the coupling for the generic five-parameter model not effective loop couplings, expressed as function of the particle mass. For the fermions, the values of the fitted yukawa couplings hff are shown, while for vector bosons the square-root of the coupling for the hVV vertex divided by twice the vacuum expectation value of the Higgs boson field. Particle masses for leptons and weak boson, and the vacuum expectation value of the Higgs boson are taken from the PDG. For the top quark the same mass used in theoretical calculations is used (172.5 GeV) and for the bottom quark the running mass mb(mH=125.0 GeV)=2.76 GeV is used. In this model, loop-induced couplings are assumed to follow the SM structure as in arXiv:1307.1347. The solid black line with 68% and 95% CL bands are taken from the fit to data with the model $(M,\epsilon)$. ### Test for the presence of BSM particles in loops Plot Caption 2D test statistics q(κg, κγ) scan, assuming that ΓBSM = 0. The cross indicates the best-fit values. The solid, dashed, and dotted contours show the 68%, 95%, and 99.7% CL regions, respectively. The yellow diamond represents the SM expectation, (κγ, κg) = (1, 1). The partial widths associated with the tree-level production processes and decay modes are assumed to be unaltered (κ = 1). 1D test statistics q(BRBSM) scan, profiling the modifier to the effective coupling to photons and gluons κγ, κg. The solid curve represents the observation and the dashed curve indicates the expected median results in the presence of the SM Higgs boson. The partial widths associated with the tree-level production processes and decay modes are assumed to be unaltered (κ = 1). 1D test statistics q(BRinv) scan, profiling the modifier to the effective coupling to photons and gluons κγ, κg, also combining with data from the H(inv) searches, thus assuming that BRBSM = BRinv, i.e. that there are no undetected decays, BRundet = 0.The solid curve represents the observation and the dashed curve indicates the expected median results in the presence of the SM Higgs boson. The partial widths associated with the tree-level production processes and decay modes are assumed to be unaltered (κ = 1). 1D test statistics q(BRinv) scan, further assuming the modifier to the effective coupling to photons and gluons κγ = κg = 1 and combining with the data from the H(inv) searches. The solid curve represents the observation and the dashed curve indicates the expected median results in the presence of the SM Higgs boson. The partial widths associated with the tree-level production processes and decay modes are assumed to be unaltered (κ = 1). ### Test of a model with scaling factors for SM particles The search is performed with six independent coupling modifiers: κV, κb, κτ, κt, κg, κγ. Plot Caption Likelihood scans for parameters in a model with coupling scaling factors for the SM particles, one coupling at a time while profiling the remaining five together with all other nuisance parameters; from top to bottom: $\kappa_{V}$ (W and Z bosons), $\kappa_{b}$ (bottom quarks), $\kappa_{\tau}$ (tau leptons), $\kappa_{t}$ (top quarks), $\kappa_{g}$ (gluons; effective coupling), and $\kappa_{\gamma}$ (photons; effective coupling). The inner bars represent the 68% CL confidence intervals while the outer bars represent the 95% CL confidence intervals. ### Test of a general model without assumptions on the total width The search is performed with following 7 free parameters: κgZ (= κκZH), λγZ (= κγZ), λWZ (= κWZ), λbZ (= κbZ), λτZ (= κτZ), λZg (= κZg), λtg (= κtg), allowing all gauge and third generation fermion couplings to float and allowing for invisible or undetectable widths. Plot Caption Summary of the fits for deviations in the coupling modifier ratios for the general seven-parameter model with effective loop couplings. The best fit of the parameters are shown, with the corresponding 68% and 95% CL intervals. ### Constraints on $\text{BR}_{\text{BSM}}$ in a scenario with free couplings Plot Caption The likelihood scan versus BRBSM = ΓBSMtot. The solid curve is the data and the dashed line indicates the expected median results in the presence of the SM Higgs boson. The modifiers for both the tree-level and loop-induced couplings are profiled, but the couplings to the electroweak bosons are assumed to be bounded by the SM expectation (κV ≤ 1). The likelihood scan versus BRinv = Γinvtot, also combining with data from the H(inv) searches, thus assuming that BRBSM = BRinv, i.e. BRundet = 0. The solid curve is the data and the dashed line indicates the expected median results in the presence of the SM Higgs boson. The modifiers for both the tree-level and loop-induced couplings are profiled, but the couplings to the electroweak bosons are assumed to be bounded by the SM expectation (κV ≤ 1). The 2D likelihood scan for the BRinv and BRundet parameters for a combined analysis of the H(inv) search data and visible decay channels. The cross indicates the best-fit values. The solid, dashed, and dotted contours show the 68%, 95%, and 99.7% CL confidence regions, respectively. The diamond represents the SM expectation, (BRinv, BRundet) = (0, 0). The likelihood scan versus BRundet = Γundettot. The solid curve is the data and the dashed line indicates the expected median results in the presence of the SM Higgs boson. BRinv is constrained by the data from the H(inv) searches and modifiers for both the tree-level and loop-induced couplings are profiled, but the couplings to the electroweak bosons are assumed to be bounded by the SM expectation (κV ≤ 1). ### Summary of tests of the compatibility of the data with the SM Higgs boson couplings Plot Caption Summary plot of likelihood scan results for the different parameters of interest in benchmark models from the LHC XS WG recommendations (arXiv:1307.1347) separated by dotted lines. The BRBSM value at the bottom is obtained for the model with three parameters (κgγ,BRBSM). The inner bars represent the 68% CL confidence intervals while the outer bars represent the 95% CL confidence intervals. The list of parameters for each model and the numerical values of the intervals are also provided in Table 12 of the paper. ## Additional plots (not in paper) ### Mass measurements Plot Caption Values of the best-fit $\MH$ for the combination (solid vertical line) and for H→γγ and H→ZZ→4ℓ final states separately. The vertical band shows the combined uncertainty. The horizontal bars indicate the ±1 standard deviation uncertainties in the best-fit $\MH$ values for the individual channels. ### Signal strengths by predominant decay mode and production tag Plot Caption Values of the best-fit σ/σSM for the combination (solid vertical line) and for subcombinations by predominant decay mode and additional tags targeting a particular production mechanism. The vertical band shows the overall σ/σSM uncertainty. The σ/σSM ratio denotes the production cross section times the relevant branching fractions, relative to the SM expectation. The horizontal bars indicate the ±1 standard deviation uncertainties in the best-fit σ/σSM values for the individual modes; they include both statistical and systematic uncertainties. ### Ratio of fermion- and boson-mediated production processes for different decay channels Plot Caption 1D test statistics q(μVBF,VHggH,ttH) scan vs the ratio of signal strength modifiers μVBF,VHggH,ttH, profiling all other nuisances, for the different decay channels considered and their combination. The cross-section ratios σVBFVH and σggHttH assumed to be as in the SM. ### Individual production modes for 7 and 8 TeV separately Plot Caption Summary of the fits to the 7 TeV data assuming independent signal strengths for each of the four production modes, while the decay branching fractions are assumed to be as in the SM. The best fit of the signal strengths are shown, with the corresponding 68% and 95% CL intervals. Summary of the fits to the 8 TeV data assuming independent signal strengths for each of the four production modes, while the decay branching fractions are assumed to be as in the SM. The best fit of the signal strengths are shown, with the corresponding 68% and 95% CL intervals. ### Likelihood scans for individual parameters in the test of a model with scaling factors for SM particles Plot Caption 1D test statistics q(κV) scan, profiling the other five coupling modifiers. 1D test statistics q(κb) scan, profiling the other five coupling modifiers. 1D test statistics q(κτ) scan, profiling the other five coupling modifiers. 1D test statistics q(κt) scan, profiling the other five coupling modifiers. 1D test statistics q(κγ) scan, profiling the other five coupling modifiers. 1D test statistics q(κg) scan, profiling the other five coupling modifiers. ### Likelihood scans for individual parameters in the test of a general model without assumptions on the total width Plot Caption 1D test statistics q(κgZ) scan, profiling the other 6 parameters. 1D test statistics q(κgZ) scan, profiling the other 6 parameters. 1D test statistics q(κgZ) scan, profiling the other 6 parameters. 1D test statistics q(κgZ) scan, profiling the other 6 parameters. 1D test statistics q(κgZ) scan, profiling the other 6 parameters. 1D test statistics q(κgZ) scan, profiling the other 6 parameters. 1D test statistics q(κgZ) scan, profiling the other 6 parameters. Edit | Watch | Print version | History: r2 < r1 | Backlinks | Raw View | More topic actions Topic revision: r2 - 2015-02-23 - GuillelmoCeballos Create a LeftBar Cern Search TWiki Search Google Search CMSPublic All webs Copyright &© 2008-2020 by the contributing authors. All material on this collaboration platform is the property of the contributing authors. Ideas, requests, problems regarding TWiki? Send feedback
# G305 500hz Vs 1000hz Filter Your Results. All Results 1 - 16 of 113. G305 Lightspeed Wireless The G305 Lightspeed Wireless by Logitech uses an Optical sensor, specifically the Hero, providing a DPI/CPI range of 200 - 12,000 DPI. Code to add this calci to your website. CORRECTION: It is 125hz not 128hz, not a big difference though smh. 400dpi, 1000Hz. 5 GHz 500E to the test against the 0. 01 % for the sampling frequency. 6mm (H) x 62. Kilohertz to hertz formula. By age the upper limit for many is reduced to 12-13. Logitech G invented LIGHTSPEED wireless technology to deliver the ultimate in high-performance wireless gaming. Check out Logitech G304 Lightspeed Wireless Gaming Mouse, Hero Sensor, 12, 000 DPI, Lightweight, 6 Programmable Buttons, 250h Battery Life, On-Board Memory, Compatible. So with this page you can raise that rate in the order of 125hz, 250hz, 500hz, & 1000hz. I can definitely agree with the people who claim they feel a difference 500hz vs 1000hz. Now I am stuck between these two models. Kyle “Bugha” Giersdorf was born on December 30, 2002 and is a professional Fortnite player for Sentinels. (FWIW, I couldn't discern any difference between Microsoft's 500Hz polling and Razer's 1000Hz in actual gaming sessions. I have a mild hearing loss in both ears, mainly in the low frequencies and have found a lot of benefit from trialling Phonak Audeo Q50s. counou no med scout speed dmixes 63 pope_abibe 36 RobTheAwsm TF2 Tier 8 Tease (1 Level Away) - Come Join Us! !twitter !discord !steam 31 deepspacepotatoes BONK!! Playing TF2 with viewers!! 3 subs to go to our goal of 25! Let's get those emojis! :D !discord !steam 31 Rahmed casual gamer gaming 21 ParadoxPowerPlays Mann VS Machine [MvM] - Two Cities Tour #900 with Dr. This is surprising as both mice feel the same when tested. The section for the polling rate allows the user to change from the default 1000Hz setting. Polling rate is measured in Hz. This results in a tracking speed of 400IPS, and a maximum acceleration of 40 G. 低周波数(250Hz、500Hz、1000Hz)では、片頭痛薬追加投与群で聴覚閾値の大幅な改善が見られた。 肝移植後CMV血症予防、予防投与vs. A 1000Hz polling rate is claptrap; 12,000 DPI is claptrap on a cosmic level. there are computer/mouse systems where 1000hz is absolutely better in every way than 500hz. Bugha uses edit confirm on release! This site uses Akismet to reduce spam. CC1608_Fire Systems Design Guide_Update1_Layout 1 11/03/2010 09:57 Page 8 In corridors less than 2m wide the horizontal spacing of detectors can be increased, the area of coverage need not overlap as in the case of a room. 83v/m @500Hz : 5Ω : 22″ × 71″ × 2″ MG-2. Mack Foxx, Pug and Friends!. Choosing a capacitor C and frequency f is best. This can be set in different ranges from 125Hz, to 500Hz to 1000Hz. 1000Hz feels great. 1000hz unless it makes a program you use unstable or harder to use. (54) methods and apparatus for pulsed-dc dielectric barrier discharge plasma actuator and circuit ----- 500hz, 2. Pressing the same button combination will change the Polling Rate back to 1000Hz. plastic on the VSonic. jbl charge 3 vs jbl charge 4. NOTE: The higher the NRC rating, the more sound the material can absorb. It's compact and great for gaming-on-the-go, providing a slot for its wireless. At once I noticed that my sensitivity almost felt lower on wrist movements but higher on arm movements. 500Hz: 25db(65 gain) 750Hz: 40db(65 gain) 1000Hz: 50db(60 gain) 1500Hz: 70db(40 gain) 2000Hz: 80db(35 gain) No response at 3000Hz and up. G305 Lightspeed Wireless The G305 Lightspeed Wireless by Logitech uses an Optical sensor, specifically the Hero, providing a DPI/CPI range of 200 - 12,000 DPI. Ayman Elezabi A thesis submitted in partial ful llment of the requirements for the degree of Master of Science in the Electronics and Communications Engineering July 2015. I am planning on getting a Samsung LED HDTV. The crossover frequency was changed from 1000Hz to 1500Hz and this would have had the effect of relieving the tweeter from having to reproduce that lower extra 500Hz so, yes, in theory it would add to the longevity of the tweeter. Inductive Reactance: Inductive Reactance: Frequency: Inductance: where, X L = Inductive Reactance, f = Frequency, L = Inductance. However, my mouse hand has been aching a bit lately. Up in the rarified atmosphere of higher frequencies, say 2,000Hz+, the speaker can be as small as 1 foot (30cm) square. @crypticSlave - I honestly think I've seen it before, and have done the exact same thing. Some reviewers have said they didn't notice any difference and that 500Hz is even fine for competition gaming for them (these were competitive gamers). I'm trying 1000Hz on the second battery to see what difference there will be in battery life. The Alinco has a 1000hz cw crystal filter, in addition to a 500hz audio filter for cw ops. 1 second / 1000hz = 0. 400, 800, 1600, and 3200 DPI. 5" R8 on wall 0. Large memory structure with alpha-tagging. From the documentation, I understand that there are High, Medium, and Low detail levels. To combat this, several companies have produced left-handed variants of their popular right-handed models or ambidextrous. Logitech G305 wireless gaming mouse is for those who want an inexpensive gaming mouse. A lot of bargain mice have a 125Hz polling rate, while gaming mice — both optical and laser — often have a polling rate of 1000Hz. which one of these trans. The frequency f in kilohertz (kHz) is equal to the frequency f in hertz (Hz) divided by 1000:. Sound level, in dB, is plotted on the left side of the graph and ranges from very faint sounds (-10. info czeka na Ciebie szeroki wybór rabatów. With a solid and sturdy build quality, programmable buttons and varying polling rates, CORSAIR offers you many options to work with without sacrificing much despite the budget. ProsodyPro *---- A Praat script for large-scale systematic analysis of continuous prosodic events (Version 5. The audiogram is a graph showing the results of a pure-tone hearing test. There are certain cases where you are willing to give up just a small bit of input lag by enabling strobing, in the certain situations (certain gameplay tactics in. Sensors This is down to Laser vs Optic and for now Laser is winning. But beyond the technical jargon, the Salmosa is an incredibly swift performing mouse for its price. That's an issue with the Titan, not the Apex and I don't get it at 125Hz or 250Hz. The gain at the edge of the pass-band (500Hz?) The gain at the edge of the reject band (500Hz?) Compare the simulated results vs. Originally Posted by Skylit I do. The Beyma and Eminence PX passive crossovers available here. Aug 17, 2015 @ 6:46am I heard 1000 isn`t that good, because some mice aren. Center Frequency. Make the Most of your warranty. G4X - Link ECU's Latest Platform. PS/2 is an analog interface, so whenever you press a button it sends a command to the computer and it does it, right away. 7 kHz to hz = 7000 hz. Moderate hearing loss: At this level, you are asking people to repeat themselves a lot during conversations - in person and on the telephone. 7x ingame sens. FM modulation for 10m repeaters. The audiogram in Figure 1 is plausible. 7 kHz to hz = 7000 hz. Capacitive reactance calculator X C • Reactance of a capacitor • Calculate the reactance X C. There are no FAQs for this Product. canon t7i vs nikon d5600. Right now I am wearing a pair of Phonak analog hearing aids. counou no med scout speed dmixes 63 pope_abibe 36 RobTheAwsm TF2 Tier 8 Tease (1 Level Away) - Come Join Us! !twitter !discord !steam 31 deepspacepotatoes BONK!! Playing TF2 with viewers!! 3 subs to go to our goal of 25! Let's get those emojis! :D !discord !steam 31 Rahmed casual gamer gaming 21 ParadoxPowerPlays Mann VS Machine [MvM] - Two Cities Tour #900 with Dr. T2N 3P7 Ph 403-452-2263. NOTE: The higher the NRC rating, the more sound the material can absorb. Set the I/P frequency at 500hz. You can see this by enabling motion blur reduction on your 120Hz. you all should ignore any technical advice finalmouse gives. pg278qr vs pg279q. With a 1 ms report rate and end-to-end optimized wireless connectivity, G703 delivers incredible responsiveness for competition grade performance. A call to analogWrite () is on a scale of 0 - 255, such that analogWrite(255) requests a 100% duty cycle (always on), and analogWrite(127) is a 50% duty cycle (on half the. There are a number of variations in the type and manufacture of instruments as well as the ability of different performers. Some programs update in response to mouse movements. jbl flip 5 vs jbl flip 4. Xorel Artform panels deliver highly effective sound absorption and echo reduction while offering tremendous creative design opportunities. Code to add this calci to your website. The Roccat Kone Aimo’s LED system gives it the potential to be brash, but the level of control means it doesn’t have to be. M c Squared System Design Group, Inc. Alatawneh, P. 400Hz output is also available. I went back to Raw input, but changed the mouse polling to 500hz instead of 1000hz, and it seems to have removed some of the "smoothing" sensation I was feeling so I've left it there as it actually feels snappier than 1000hz, again, the opposite of what you might expect. At once I noticed that my sensitivity almost felt lower on wrist movements but higher on arm movements. Capacitive reactance calculator X C • Reactance of a capacitor • Calculate the reactance X C. Optic sensors are becoming more uncommon as their Laser counterparts are more accurate and have faster response times. Mark Hasegawa-Johnson [email protected] $2595 : 2-Way Planar-Magnetic w/ True-Ribbon Tweeter. 1kHz CPU stress is not negligible! On full load it ranges from ~15% on a 4,5GHz 4Core Intel CPU's up to ~30% on some Phenom II x4 Systems. Addendum: Photos of 125Hz vs 500Hz vs 1000Hz Originally created in a Blur Busters Forums thread , and now a part of this guide, this is a photo comparision of 125Hz versus 500Hz versus 1000Hz mouse poll rates. The CORSAIR IRONCLAW RGB Gaming Mouse combines a performance 18,000 DPI precision optical sensor with a 105g lightweight body and contoured shape that’s sculpted specifically for palm-grips and larger hands. I use zowie ec2-a (the sensor should be 3310, default rate: 1000hz), cs:go game (raw_input 1, acc off, m_mousespeed 1 (default value)), i7-6700hq procesor, usb 3. FacebookTwitterGoogle+หลายๆคนคงเคยเห็นตาม Config ของโปรเพลเยอร์ที่จะเขียนไว้ว่า 1000Hz 500Hz ไอ้เจ้าตัว Hz ของเมาส์เนี่ยแหละ เค้าจะเรียกอย่างเป็นทางกา. It uses a USB connection with a 1. pdf - Free download as PDF File (. In the question"What is the best gaming mouse?"Logitech G502 Proteus Spectrum is ranked 1st while Logitech G Pro Gaming Mouse is ranked 25th. You can take out. The Tt eSports Ventus X RGB features a 2:1 ratio, with a length at roughly 5 inches and a grip area at around 2. Hp Reverb Fixed. This thread is archived. The Tt eSports Ventus X RGB is actually made for those with medium to large hands; offering good support for claw and palm type grips. Vinson 12018-25dBM-20dBM-15dBM-12dBM-10dBM 200Hz 250Hz 300Hz 500Hz 1000Hz 1500Hz 2000Hz 2500Hz 3000Hz 3500Hz 3600Hz 3700Hz 3800Hz-10dBM Input Level at all Frequencies Frequency Response Curve Input vs. You have to part with some extra cash if you want to go for higher polling rates. Capacitive reactance calculator X C • Reactance of a capacitor • Calculate the reactance X C. 125hz for doing slow but very accurate work, 500hz is in the. 500hz vs 1000hz Mouse Polling Rate so on my Corsair Raptor M40 I have been using 1000 hz polling rate till recently I decided to switch to 500 to see if there is a notable difference. Low-end mice come with polling rate of 125Hz, and mid-range mice offer 500Hz. You can see the gaming mice sorted by sensors by clicking here = Gaming Mouse Sensors. To combat this, several companies have produced left-handed variants of their popular right-handed models or ambidextrous. So my question in my case i should use 500hz or 1000hz?. If you turn the volume too loud at first, you may damage your speakers or get a really loud surprise when the next frequency starts. 1,000hz is equal to 1 ms of delay by the way. A machined aluminum frame is designed to be lightweight and rugged providing a long service life in harsh industrial environments. EXhilaration Starts Here! Link's latest platform, G4X, is headlined by a faster microprocessor, high speed communications chip and 512 Megabytes of data logging across the entire product range. Gaming mice have the highest polling rate, with most rocking 1000Hz for optimal performance. 500Hz versus 1000Hz is more clearly human-eye visible when enabling blur reduction strobing (e. , if the reflex threshold was 80dB at 1000Hz, you would test at 90dB at 1000Hz). I'm trying 1000Hz on the second battery to see what difference there will be in battery life. Aim for a mouse with a polling rate of 500Hz or 1000Hz. Audiologists, by virtue of academic degree, clinical training, and license to practice, are qualified to provide guidance, development, implementation, and oversight of hearing screening programs. Check out Logitech G304 Lightspeed Wireless Gaming Mouse, Hero Sensor, 12, 000 DPI, Lightweight, 6 Programmable Buttons, 250h Battery Life, On-Board Memory, Compatible. However, if you feel that your guitar lacks presence, you can pull it to the front of the mix by boosting in the 3 kHz area. Professional Reference articles are designed for health professionals to use. Despite my heavy computer use, I rarely experience hand or wrist pain. Klipsch Reference R-15m vs. jbl flip 4 vs jbl flip 5. jbl charge 3 vs jbl charge 4. 0) plug and play gaming mouse. However Windows MT63 softwares accept +/-0. The Para driver is fully sweepable from 160hz to 3khz and you can choose anything inbetween. The audio spectrum is the audible frequency range at which humans can hear. We put the 0. pg278qr vs pg279q. Random Vibration is a varying waveform. This mouse is great for gaming. ProsodyPro *---- A Praat script for large-scale systematic analysis of continuous prosodic events (Version 5. Fourteen volunteers, seven males, 20 to 44 years old, with normal hearing were fitted with a standard Telephonics headset containing TDH-50P earphones and a model 51 cushion only or the model 51 cushion enclosed in Amplivox Audiocups headphone or Peltor AudioMate headphone. I have worn analog hearing aids for my entire life. 6 kHz to hz = 6000 hz. With this approach, researchers can also modify the low-pass frequency filters used by the hearing aid (filters that remove frequencies above 250Hz, 500Hz, 750Hz, 1000Hz, and 1500Hz) to determine which filters might best complement a cochlear implant. Extension feels about the same with both buds able to provide a visceral rumble you feel more than hear, just the HE 150Pro just it a hint better. 1000dpi 1000hz , 1000dpin 500hz, 800dpi 1000hz and 800dpi 500hz. 1000Hz improves animation in high-refresh displays. 443) were for the proportion of energy in the two lowest frequency bands, 100-500Hz, and 500-1000Hz. Most talks about ps/2 vs usb and polling rates are concerned with latency. Selectable CW filters; 2600Hz, 1500Hz, 1000Hz, 500Hz, 300Hz and 100Hz plus 4 CW peak filters applied after 100Hz filter to obtain an overall filter of 20Hz bandwidth. Right now I am wearing a pair of Phonak analog hearing aids. The wavelength of a 500Hz sound wave would then be (A) 500 m. Optic sensors are becoming more uncommon as their Laser counterparts are more accurate and have faster response times. Otol, 3, 2, 1981; Margolis, R. 125hz,500hz or 1000hz? Which one should i use? < > Showing 1-15 of 23 comments. The higher polling rate used, the more you use the computer's performance and the higher the configuration requirements. Posted by 1 year ago. I tried BF2, COD4, COD:WAW and none of them provided any difference in how well I aim or am able to get the cross hair/iron sight to where I wanted to aim at. This mice uses Lightspeed wireless technology and HERO(High-Efficiency Rated Optical)Sensors. canon eos rebel t6 vs nikon d340. You could change it, I play at a high sens on csgo right now I have it on 500hz, hell I know coldzera with a zowie uses 500hz but most use 1000hz as far as pros go, I don't think it really makes a difference I imagine when you are making the small precise movements you might not even track at 1000hz (like if you aren't moving the mouse much) so I really don't think it matters at all it's. While I am burning the midnight oil I would be really grateful if someone has the R values for Wickes 50mm General Purpose Slab Insulation and Wickes 30mm Heavy Density Slab Insulation. , if the reflex threshold was 80dB at 1000Hz, you would test at 90dB at 1000Hz). In its full power mode, it has a 250 hour operational life, from a single AA battery. I cant feel the 500hz vs 1000hz difference, but im still dismayed that Im troubleshooting my very expensive gaming mouse. The MT63 specifications require a precision of +/- 0. Extension feels about the same with both buds able to provide a visceral rumble you feel more than hear, just the HE 150Pro just it a hint better. However, if you feel that your guitar lacks presence, you can pull it to the front of the mix by boosting in the 3 kHz area. I personally have burned out a usb port on two motherboards running @1000hz so I now only use 500hz which is sufficient in my opinion but chose whichever you want. When 1000 oscillations occur in one second, the frequency is 1000 Hz, or 1. monitor speaker description 500hz 400hz 315hz 210° 180° 150° 120° 90° 60° 30° 10 0-10-20-30-40-30-20-10 0 10 270° 0° 330° 300° 240° 1000hz 800hz 630hz. Underhill, and T. g603 vs g703. Re: Logitech G Pro Wireless being EXTREMELY inconsistent 1000hz « Reply #8 on: 08:10 AM - 04/23/19 » Played with 500hz last night and xim apex ran with only 0-5 smoothing needed (sync off) and it was more consistent, however still not up to its potential. Some programs update in response to mouse movements. pg278qr vs pg279q. Sine and random vibration testing cannot be equated. Visiting New Arrivals on Banggood, you can find what you want at a fairly low price as there are many products available here for you. l ⭐ Subwoofer test 100Hz, 200Hz, 500Hz, 1KHz and 2KHz Tones files for testing your subwoofer and speakers in Stereo FLAC. 500Hz Vs 1000 is a long-lasting debate, and there is very little difference between the two only a robot could tell the difference between 500 and 1000Hz. By age the upper limit for many is reduced to 12-13. Addendum: Photos of 125Hz vs 500Hz vs 1000Hz Originally created in a Blur Busters Forums thread , and now a part of this guide, this is a photo comparision of 125Hz versus 500Hz versus 1000Hz mouse poll rates. Some reviewers have said they didn't notice any difference and that 500Hz is even fine for competition gaming for them (these were competitive gamers). This can be set in different ranges from 125Hz, to 500Hz to 1000Hz. 500Hz 1000Hz 2000Hz 4000Hz 8000Hz 40 dB SPL 40 dB SPL 47 dB SPL 57 dB SPL 62 dB SPL As a practical matter, it is not difficult to meet these numbers in a reasonably quiet and distraction-free room. View ET115W7AG00117014. d5500 vs d7200. Yes, I hear you there. 1000Hz feels great. ) What is the time constant of Vo for a step change between these two frequencies? Solution (a. Hey guys, as the topic states i need some help with my USB polling rate. Credit: BenQ. Github Audio Noise Reduction. If Ko = 2π(1kHz/Volt), Kv = 500 (sec-1) and ωo = 1000π rads/sec (fo = 500Hz) for the FM demodulator on the previous slide, (a. Logitech G305 wireless gaming mouse is for those who want an inexpensive gaming mouse. 1000hz vs 500hz am i the only one prefering 500 over 1000? it feels just snappier on 500 and too smooth on 1000. Both designs. Logitech G invented LIGHTSPEED wireless technology to deliver the ultimate in high-performance wireless gaming. M c Squared System Design Group, Inc. I personally don't have a problem at 1000RR/1000Hz, but use 500RR/ 1000Hz for most games. no doubt due to the handoff between woofer and midrange. 1000 Hz provides additional cues of manner, nasal consonants, back and central vowels, noise bursts of most plosives and semi-vowels. Does anyone know if there is a significant improvement in switching over to digital hearing. But that's just me. When 1000 oscillations occur in one second, the frequency is 1000 Hz, or 1. The overall NRC rating is the calculated average of frequencies 250Hz, 500Hz, 1000Hz, and 2000Hz, which is then rounded to the nearest multiple of 0. The Para clearly gives you more midrange options compared to the BDDI. Home Speaker Components Crossover Components Assembled Passive Crossovers. Gamers usually need a higher rate of: 250Hz, 500Hz or 1000Hz. Redragon Invader M719 RGB mouse can be shifted with 7 lighting effects: Breathing, Rainbow, Full lighted, Wave, Go without Trace, Reactive, Flash. If 1000hz works for you that's great. 6 m) behind the seated patient (to prevent lip-reading. Kinzu V1, 500Hz. nikon d3200 vs d3300. Selectable AM filters from 2500Hz to 6000Hz with 500Hz step. It's that efficient that Logitech claims you can get 250 hours from a single AA battery which is a lot and it's a compact size and low weight is ideal for traveling. Kyle “Bugha” Giersdorf was born on December 30, 2002 and is a professional Fortnite player for Sentinels. 4GHz wireless technology, a pro-grade wireless solution, providing instant wireless connection with 1ms ultra low latency and 1000Hz polling rate. イヤーマフ遮音…500Hz:25dB、1000Hz:29dB、2000Hz:33dB、4000Hz:36dB 総重量…約680g 頭?顔?耳を保護します。 刈り払い作業やチェーンソー使用時にお使いください。 フェイスガードにはゴミよけシートが付いています。. This is combined with a 32-bit ARM. 1000Hz feels great. f (Hz) = f (kHz) × 1000. The Logitech G305 brings an esports-friendly design to a$60 wireless gaming mouse. 185MHz S1 and S2 ON Modulation: 2,4kHz and 500Hz: L45 L48 L49: Adjust the cores to obtain max. 1000Hzなら1秒間に1000回。 数値が高いほど頻度が細かく、素早くマウスの動きが反映される。ポインタの位置が更新される頻度も細かくなるのでマウスの動きもなめらかになる。 が、1000Hzで1ms。500Hzで2msなので、1msの違いを体感できる人はいるのだろうか。. So that is why people debated over 500hz vs 1000hz. In cabover style emergency vehicles, roof or bumper mounted sirens can produce almost identical sound levels in the driver's seat. 10 kHz to hz = 10000 hz. The overall NRC rating is the calculated average of frequencies 250Hz, 500Hz, 1000Hz and 2000Hz. Frequency Impedance vs. You could change it, I play at a high sens on csgo right now I have it on 500hz, hell I know coldzera with a zowie uses 500hz but most use 1000hz as far as pros go, I don't think it really makes a difference I imagine when you are making the small precise movements you might not even track at 1000hz (like if you aren't moving the mouse much) so I really don't think it matters at all it's. – Hearing loss no greater than 24dB (decibels) for the average of frequencies 500Hz, 1000Hz, 2000Hz, and 3000Hz in the better ear, unaided (without a hearing aid) or aided (with a hearing aid). For the results of the sound absorption coefficients presented on the standardized frequency bands, Figure 5b, it can be noticed that the both samples have similar values on the average frequency of 1000Hz, but a substantial difference is recorded on the 500Hz frequency where the shiv sample presents less than 30% sound absorption comparative. Bugha is also a content creator on YouTube and can be found streaming on Twitch. A simple and accurate test for detecting hearing impairment. 4GHz wireless technology, a pro-grade wireless solution, providing instant wireless connection with 1ms ultra low latency and 1000Hz polling rate. razer deathadder vs razer mamba. You could change it, I play at a high sens on csgo right now I have it on 500hz, hell I know coldzera with a zowie uses 500hz but most use 1000hz as far as pros go, I don't think it really makes a difference I imagine when you are making the small precise movements you might not even track at 1000hz (like if you aren't moving the mouse much) so I really don't think it matters at all it's. pc1869_cs1e. So no, you will not have any "lag" with that mouse. If you're not sure which tone you want, 1kHz is a safe bet. 500hz vs 1000hz Mouse Polling Rate so on my Corsair Raptor M40 I have been using 1000 hz polling rate till recently I decided to switch to 500 to see if there is a notable difference. Logitech Connection Utility Software. Register Your Product FIle a Warranty Claim. 500Hz to 1000Hz +5dB(A) @ 500Hz to 1000Hz Bac k ground noise The minimum sound level of a sounder device should be 65dB(A) or 5dB(A) above a background noise that lasts more than 30 seconds and not less than 75dB(A) at the beadhead if required to rouse people from sleep. The 500Hz versus 1000Hz is human-eye visible during motion blur reduction strobing (e. Capacitive reactance calculator X C • Reactance of a capacitor • Calculate the reactance X C. Cette page existe aussi en Français. Code to add this calci to your website. Hertz to kilohertz formula. I personally don't have a problem at 1000RR/1000Hz, but use 500RR/ 1000Hz for most games. It also does it with more texture, slightly better control, and more kick. Logitech is known for their top-notch build quality, this mouse is no different. But Logitech G305(this one will cost you $60) is a budget gaming mouse. The difference between 500hz and 1000hz is so insignificant is not likely going to make a difference. Large memory structure with alpha-tagging. Have you ever noticed where during some videos on the internet you see this weird stripe pattern in the room or over the picture? That is PWM dimming! Although the human eye might not be able to see or even perceive frequencies around 400Hz or 500Hz a camera sensor still can. Capacitive reactance (symbol X C) is a measure of a capacitor's opposition to AC (alternating current). I assume you could use this mouse for different grips depending on the size of your hand, but ultimately this is a claw grip mouse. Currently I'm conducting an experiment keeping the sampling rate 1000Hz. Key Issues. razer deathadder vs razer mamba. Mark Hasegawa-Johnson [email protected] Image via dvdyourmemories. VERDICT:The Harpoon gaming mouse is intended for the budget-friendly gamers who do not want to miss out on any high-end features. Figure 2 shows evoked responses from one representative control, with response to the 1000Hz tone shown in the (a) NEUROREPORT. Hello fellow techies! I am in quite a conundrum atm. But with so many options also comes a lot of confusion about what the best mouse will be. Agility X 2. Have you ever noticed where during some videos on the internet you see this weird stripe pattern in the room or over the picture? That is PWM dimming! Although the human eye might not be able to see or even perceive frequencies around 400Hz or 500Hz a camera sensor still can. txt) or read online for free. But with so many options also comes a lot of confusion about what the best mouse will be. The overall NRC rating is the calculated average of frequencies 250Hz, 500Hz, 1000Hz and 2000Hz. Quando falamos sobre ergonomia, ele é um mouse Muito confortável. Polling rate 500hz VS 1000hz In our interview with professional players, we found that everyone will choose a mouse polling rate of 500Hz or 1000Hz. The graph to the left represents a blank audiogram illustrates the degrees of hearing loss listed above. The CW filters come in two flavours 500Hz and 300Hz. 0, 8 Buttons, 1000Hz, Black. The end result is Agility X 2. The CPU usage at 1000hz is 15%. jbl flip 4 vs jbl flip 5. This is combined with a 32-bit ARM. When 1000 oscillations occur in one second, the frequency is 1000 Hz, or 1. Originally, I had a BG RD40 planar from 800hz-4kz. Moderate hearing loss: At this level, you are asking people to repeat themselves a lot during conversations - in person and on the telephone. g533 vs g933. The audio spectrum is the audible frequency range at which humans can hear. Hello fellow techies! I am in quite a conundrum atm. 2 kHz to hz = 2000 hz. In cabover style emergency vehicles, roof or bumper mounted sirens can produce almost identical sound levels in the driver's seat. One bad accessory can spoil an entire game, but a full set of high-end peripherals can cost hundreds of dollars. Buy Logitech G304 Lightspeed Wireless Gaming Mouse, Hero Sensor, 12, 000 DPI, Lightweight, 6 Programmable Buttons, 250h Battery Life, On-Board Memory, Compatible with PC/Mac - Black online at low price in India on Amazon. Great for speaker crossover replacement or upgrade. Logitech's best gaming sensor, 250 hours of battery life, and a body that weighs only. 500Hz recommended which adds 2ms input lag with a decent amout of CPU stress. Learn vocabulary, terms, and more with flashcards, games, and other study tools. 400, 800, 1600, and 3200 DPI. How do polling rates of 125, 250, 500 or 1000Hz affect the input lag of a modern keyboard? And has the PS/2 connection still less lag than USB? These are some of the questions that I will answer. Redragon Invader M719 RGB mouse can be shifted with 7 lighting effects: Breathing, Rainbow, Full lighted, Wave, Go without Trace, Reactive, Flash. 5mm and angle snapping is DISABLED, so tracking wise. What would happen if fire were exposed to your acoustic foam? Our acoustic foam will smolder and smoke, but it will not burst into flames. Period is the seconds/cycle. 125hz,500hz or 1000hz? Which one should i use? < > Showing 1-15 of 23 comments. Dies wird vom Fahrzeug erkannt und verarbeitet. It's next-generation wireless gaming, now ready for any and every gamer. 1kHz CPU stress is not negligible! On full load it ranges from ~15% on a 4,5GHz 4Core Intel CPU's up to ~30% on some Phenom II x4 Systems. Soundproofing - Roxul S n' S vs regular Roxul. They come with a high-performance optical sensor and has a fast polling rate of 1000Hz or 1ms. This thread is archived. ) Find Vo for fi = 250Hz and 1000Hz. Polling rate 500hz VS 1000hz. iPhone se vs iPhone 6s. Is is not feasible to operate it on low or at high frequency if you do this, the transformer will damage because at higher frequency the losses will become high and the vibration become high which can damage insulation and the mechanical structure and at lower frequency the output will be highly deviated from the rated and heating also takes place which can damage the insulation. Find helpful customer reviews and review ratings for Cooler Master mm711 60G Glossy White Gaming Mouse with Lightweight Honeycomb Shell, Ultraweave Cable, 16000 DPI Optical Sensor and RGB Accents at Amazon. 6 m) behind the seated patient (to prevent lip-reading. Switch pressure ‎ 16400dpi and adjustable [email protected] 100dpi value increase/decrease. logitech z623 vs z625. A low-frequency hearing loss is typically a sensorineural hearing loss, which is a hearing loss normally caused by damage to the hair cells in the inner ear that receive the sounds and convert them to signals that are. I really wonder if similar experiences have led to the '500Hz feels better'. The changes in sound are measured for octave bands ranging from 125Hz to 4000Hz, recording the differences in Sabin absorption coefficients. – Hearing loss no greater than 24dB (decibels) for the average of frequencies 500Hz, 1000Hz, 2000Hz, and 3000Hz in the better ear, unaided (without a hearing aid) or aided (with a hearing aid). You may have to register before you can post: click the register link above to proceed. Kilohertz to hertz formula. I'm now using a [email protected] The inverse however does work - if a mouse is set to 1000Hz and Apex is set to say 250Hz, the Apex filters out the additional mouse ticks. It's compact and great for gaming-on-the-go, providing a slot for its dongle next. Then changed to the RS52 covering that same frequency range. LightBoost) as well as G-SYNC where NVIDIA recommends a 1000Hz mouse. I deduced that the mouse is under stress because they wont let you use the custom led program if you use 1000hz updates in wireless mode. / cellules, cubiques Barrages immat. Logitech G603 Lightspeed Ratón Gaming Inalámbrico, Bluetooth o 2. M65 PRO RGB FPS Gaming Mouse — Black. resonant peaks below 250Hz, 500Hz, and 1000Hz), as well as the general reduction in output at higher and low frequencies. jaybird x2 vs x3. Banggood New Arrivals provides you high quality product at a big discount. From polling rate detection programs that I've seen, 500hz is more stable as it has trouble at staying above 500 when moving slowly. This results in a tracking speed of 400IPS, and a maximum acceleration of 40 G. ~Social media~ Discord. A call to analogWrite () is on a scale of 0 - 255, such that analogWrite(255) requests a 100% duty cycle (always on), and analogWrite(127) is a 50% duty cycle (on half the. canon eos rebel t6 vs nikon d340. Show In-Stock Items Only (?) Best Match Most Reviews Highest Rating Price High-Low Price Low-High. l ⭐ Subwoofer test 100Hz, 200Hz, 500Hz, 1KHz and 2KHz Tones files for testing your subwoofer and speakers in Stereo FLAC. But the QRS RELAX setting contains 200Hz, 500Hz, 750Hz, 1000Hz??? 3) iMRS 2000 has a true biorhythm clock to yield the right frequency for the time of day (between. Convertissez hertz en millisecondes ici. I have noticed with my present audio filter fitted to the FT817 that some stations ( probably home brew qrp) wander off frequency so the 500Hz filter may be best. canon t7i vs nikon d5600. The highest loading variable on the standardized discriminant function was. I am planning on getting a Samsung LED HDTV. plastic on the VSonic. At once I noticed that my sensitivity almost felt lower on wrist movements but higher on arm movements. 5 to check the hz rate. It's intensity is defined using a Power Spectral Density (PSD) spectrum. Convertissez hertz en secondes ici. PZB-Bedienelemente:. For both its „BASIS" and „VITAL" programs, the QRS main website states that prominent frequencies include 200Hz, 500Hz, 750Hz, 1000Hz. Lehrer JF, Poole DC. From there the 500Hz rate refreshes every two milliseconds and the 1000Hz every millisecond. Example - Parallel to Alternating Current. It's a chinese company that's becoming popular real quick, and the mouse I got is really nice. With a weight of just 99 grams, the Harpoon RGB Wireless competes closely with the Logitech G305 gaming mice with Lightspeed Technology. jaybird x2 vs x3. There was a level editor I was using once that did a particular thing whenever the mouse position updated, and running at 1000hz meant the operation was literally eight times too sensitive unless I was moving the cursor very slowly, and for that reason I ran the mouse at 500hz. One bad accessory can spoil an entire game, but a full set of high-end peripherals can cost hundreds of dollars. Alatawneh, P. There is hourly offsite backups of Blur Busters Forums now. Until you get a 500Hz at the O/P, increase the trigger I/P amplitude, note down the I/P amplitude, this is the minimum pulse step required for trigger the bi-stable Multivibrator with the given circuit parameters. listen to the sound! - touch and click the frequency below! Lower Band Limit. I considered what Sony 32 inch to get for about 4 months. Set the core of L46 to the bottom, adjust VR12 to minimum. The M65 PRO RGB is a competition-grade FPS gaming mouse with the technology you need to win, the customization to make it your own, and the build quality to last. The overall NRC rating is the calculated average of frequencies 250Hz, 500Hz, 1000Hz and 2000Hz. My best aided scores should be 20db at 500Hz, 30db at 750Hz, 30db at 1000Hz, 40db at 1500Hz and 50db at 2000Hz. 1000Hz, 500Hz, 250Hz and 125Hz : 1000Hz : Data Acquisition : 32bit Data Normalization 13,333,333 points of resolution: 8bit Data Normalization 201 points of resolution: Data Outputs : 1x USB or 1x Bluetooth : 1x USB : Data Inputs : 4x USB + 2x Bluetooth up to 6 simultaneous inputs: 1x USB up to 2 5) simultaneous inputs: Data View & Plot : YES. Frequency (-6dB down point) Frequency SPL Frequency [dB] 110 100 90. At once I noticed that my sensitivity almost felt lower on wrist movements but higher on arm movements. The QRS does not since it uses higher frequencies of 200Hz, 500Hz, 750Hz, 1000Hz. It's compact and great for gaming-on-the-go, providing a slot for its dongle next. LightBoost) as well as G-SYNC where NVIDIA recommends a 1000Hz mouse. NOTE: The higher the NRC rating, the more sound the material can absorb. 15 (W) x 38. nikon d3200 vs d3300. no doubt due to the handoff between woofer and midrange. Teraz sprzedaż US$22. The cable on both mice is made up of a thick braided material. Solid state transistorized compact enclosure less than 60DbA at 3ft. Frequency Directivity Factor 20 10 5 1 [Q] Frequency 20 50 100 500 1k 5k 10k 20k [Hz] Beamwidth 10 50 100 200 [deg] Frequency 20 50 100 500 1k 5k 10k 20k [Hz] Frequency 20 50 100 500 1k 5k 10k 20k [Hz] Impedance 1 5 10 50 100 [ohm] 200 SPL 30 40 50 60 70 80. That's an issue with the Titan, not the Apex and I don't get it at 125Hz or 250Hz. The resisters used were Rosc: 220K and Rduty 150K. Frequency Directivity Factor vs. With a high lift off distance you will notice jittering or movement shifts when swiping and lifting your device. Polk Signature Series S15 It's been a long while since there has been a head to head Budget Battle. Switch pressure ‎ 16400dpi and adjustable [email protected] 100dpi value increase/decrease. l ⭐ Subwoofer test 100Hz, 200Hz, 500Hz, 1KHz and 2KHz Tones files for testing your subwoofer and speakers in Stereo FLAC. Hz to kHz conversion calculator How to convert kilohertz to Hz. These days, choosing a gaming mouse is not easy. ASUS ROG Spathia. A 1000Hz polling rate is claptrap; 12,000 DPI is claptrap on a cosmic level. 没有仔细对比过不知是否有明显区别. When 1000 oscillations occur in one second, the frequency is 1000 Hz, or 1. Mouse Rate Checker - BenQ. f = frequency (s-1, 1/s, Hz) T = time for completing one cycle (s) Example - Frequency. Start studying Audiology Exam 2. 1000hz vs 500hz am i the only one prefering 500 over 1000? it feels just snappier on 500 and too smooth on 1000. Polling rate at 500hz I think, but 1000hz should work just fine. Kinzu V1, 500Hz. The frequency (f) is the reciprocal of the period (T) : f = 1/T. , if the reflex threshold was 80dB at 1000Hz, you would test at 90dB at 1000Hz). Extension feels about the same with both buds able to provide a visceral rumble you feel more than hear, just the HE 150Pro just it a hint better. Tracking Babies (BWH) Hearing Screening Tech notifies audiologist of refer Right ear: normal at 500 & 1000Hz. LightBoost) as well as G-SYNC where NVIDIA recommends a 1000Hz mouse. I no longer have issues with 1000hz vs 500hz. with the islands consisting of. Most gamers and experts agree that around 500Hz is the “sweet spot. https://www. The glass-to-metal or soldered eyelet seal, combined with the all-welded package construction, constitutes the basis for Geophysical survivability. Standard Geophysical Detectors Ruggedized Assemblies Survivability and Performance – Two different packaging design technologies are offered for ruggedized detectors. It may struggle to fit in some mice bungee’s such as the Zowie Camade unless you force it. This mouse is great for gaming. Cette page existe aussi en Français. 125hz 250hz 500hz 1000hz 2000hz 4000hz. 3 Way Crossover Design Example Note, this sample crossover makes use of many of the calculators found on the menu on the left. The G pro uses a variant of the Avago 3360 sensor which is the best on the market at present. 1000hz vs 500hz. Childhood Hearing Screening. Check our Logitech Warranty here. 1KHz) Magnet: N40 Speaker Diameter: 40mm Speaker Impedance: 32Ω Sound Pressure Level: 93dB ± 3dB (at 1000HZ IMW) Frequency Response: 16hz – 24khz Rated Input: 30mW Maximum Input: 50mW Plug: 3. The MT63 specifications require a precision of +/- 0. However, do note that it does take more CPU for 1000hz, so if you're having difficulty maintaining 60fps or 144fps with 1000hz, consider dropping it to 500hz. Set the I/P frequency at 500hz. The now-discontinued G703 (non-Hero) uses a PMW3366 sensor; the same one as the G Pro and G502. Audiologists also calculate the PTA to confirm a patient's reliability in a hearing test. (B) 1000 m. This is the latest in Tt eSport's vented gaming mouse series, featuring RGB lighting and a PIXART PMW-3360 sensor that could go up high as 12000 DPI. If you have pure-tone tinnitus, this online frequency generator can help you determine its frequency. The Logitech G700 has up to 1000Hz / 1ms USB polling rate, which is the highest you can find on any mouse. 5 GHz 500 to find out which you should buy, the older Intel or the AMD. If you don't have a 1000Hz rate then 500Hz is not too awful. Both laser and optical mice are available at different polling rates depending on the purpose. USB polling rate 1000hz does not work (only 500hz) I have 3 mouses, a Zowie FK2 mouse, a Steelseries Sensei Raw and a Razer DeathAdder 3. A call to analogWrite () is on a scale of 0 - 255, such that analogWrite(255) requests a 100% duty cycle (always on), and analogWrite(127) is a 50% duty cycle (on half the. M c Squared System Design Group, Inc. Hp Reverb Fixed. Visiting New Arrivals on Banggood, you can find what you want at a fairly low price as there are many products available here for you. The audiogram in Figure 1 is plausible. The period is not the same dimension than the frequency. However Windows MT63 softwares accept +/-0. The following table presents the frequencies of all notes in ten octaves to a thousandth of a hertz. Logitech introduced a new wireless mouse for mainstream gamers promising "pro" performance and features: the G305. 5/R ‛87 : 2-Way Planar-Magnetic w/ True-Ribbon Tweeter (discontinued) 37Hz-40kHz ±3dB : Acoustical 1000Hz low-pass: quasi 12dB/oct high-pass: 6dB/oct : 50-200Wrms @8Ω : 84dB/2. Polling rate 500hz VS 1000hz. Key Issues. Posted by 1 year ago. Upper Band Limit. 00 = 100% absorbtion. In its full power mode, it has a 250 hour operational life, from a single AA battery. 5-inch monitor with an impressive 240Hz refresh rate. I am planning on getting a Samsung LED HDTV. It could be possible my audie didn't program any gain above 2000Hz, that's one question ill be asking him. Shroud’s Monitor – BenQ XL2540; BenQ XL2540 is the monitor that Shroud uses to play Apex Legends. Like the ZA12, it too can be toggled between four DPI settings (400/800/1600/3200) and three report rates (125/500/1000Hz). Hertz Conversion Charts. 2000 Hz provides cues for place of consonant and additional information about manner, front vowels, noise bursts of most plosives and affricates and turbulent noises of fricatives /sh/, /f/, and /th/. The inductor impedance is purely imaginary and directly proportional to frequency: We need to find the impedance in 2 kHz. Originally created in a Blur Busters Forums thread, and now a part of the Mouse Guide, this is a photo comparision of 125Hz versus 500Hz versus 1000Hz mouse poll rates. I cant feel the 500hz vs 1000hz difference, but im still dismayed that Im troubleshooting my very expensive gaming mouse. txt) or read online for free. View and Download Wharfedale Pro FOCUS-12 operating manual and user manual online. Record this frequency and current. -Order of slopes is based on the 6db rule so: 1st order=6db, 2nd=12db, 3rd=18db, 4th=24db, etc. You can see the gaming mice sorted by sensors by clicking here = Gaming Mouse Sensors. Both designs. So my question in my case i should use 500hz or 1000hz?. Krause Department of Physics, Royal Military College of Canada, Kingston, ON, K7K 7B4. Buy from Scan - Corsair Gaming M65 Pro RGB FPS Gaming Mouse, RGB Back-Lit, 100-12000dpi, Wired, USB 2. Gaming Mice List – Find The Best Gaming Mouse For Your Gaming Needs Following is the top gaming mice list with all the important factors like DPI, Programmable Buttons, Connection etc. Strive for Perfection. Logitech is out to change that. If your guitar starts sounding tinny or “honky,” a nice cut in the 1-2 kHz can round out the sound. Check out Logitech G304 Lightspeed Wireless Gaming Mouse, Hero Sensor, 12, 000 DPI, Lightweight, 6 Programmable Buttons, 250h Battery Life, On-Board Memory, Compatible. 10 1000Hz 800Hz 60° 300° 270 240-20 500Hz 400Hz 315Hz 200 100 50 10 5 1 20 50 100 500 1k 5k 10k 20k Frequency (Hz) Impedance (ohm) 110 100 90 80 70 60 50 40 30 20 50 100 500 1k 5k 10k 20k Frequency (Hz) SPL (dB) 200 100 50 10 20 50 100 500 1k 5k 10k 20k Frequency (Hz) Beamwidth (deg) Hor. Yes, I hear you there. They are 40hz, 100hz, 250hz, 500hz, 1000hz and 2000hz. Frequency Impedance vs. Buy Logitech G304 Lightspeed Wireless Gaming Mouse, Hero Sensor, 12, 000 DPI, Lightweight, 6 Programmable Buttons, 250h Battery Life, On-Board Memory, Compatible with PC/Mac - Black online at low price in India on Amazon. Answer / r k alaria The impedance of the transformer is greater for higher frequencies (as X=wL). The MT63 specifications require a precision of +/- 0. When you hear a sound during a hearing test, you raise your hand or push a button. 5mm Gold plated Jack Converter: 6. Since both mice share the same sensor the DPI increments are the same ranging from 100 -16,000 and a report rate of 1000Hz. However, I have a few questions. The light weight design makes the mouse easy to handle. f = frequency (s-1, 1/s, Hz) T = time for completing one cycle (s) Example - Frequency. I no longer have issues with 1000hz vs 500hz. I personally don't have a problem at 1000RR/1000Hz, but use 500RR/ 1000Hz for most games. Constant (N) is an integer that assumes the value necessary to bring the term Nf s closest to the input signal frequency (f in). @crypticSlave - I honestly think I've seen it before, and have done the exact same thing. Just copy and paste the below code to your webpage where you want to display this calculator. G305 Lightspeed Wireless The G305 Lightspeed Wireless by Logitech uses an Optical sensor, specifically the Hero, providing a DPI/CPI range of 200 - 12,000 DPI. Thus, yells and screams had much more energy overall and the fusses, whines and cries had. (FWIW, I couldn't discern any difference between Microsoft's 500Hz polling and Razer's 1000Hz in actual gaming sessions. If you have a basic office style (non-gaming) mouse you'll be stuck on 125Hz and that will feel like it's jumping across the screen which feels like hitching. If your guitar starts sounding tinny or “honky,” a nice cut in the 1-2 kHz can round out the sound. Logitech is out to change that. The audio spectrum is the audible frequency range at which humans can hear. The good news is that hearing problems can be overcome if they're caught early — ideally by the time a baby is 3 months old. 555 Oscillator Tutorial The 555 IC can be used to create a free running astable oscillator to continuously produce square wave pulses The 555 Timer IC can be connected either in its Monostable mode thereby producing a precision timer of a fixed time duration, or in its Bistable mode to produce a flip-flop type switching action. iPhone 6s vs iPhone se. The Logitech G305 brings an esports-friendly design to a \$60 wireless gaming mouse. This issue manifests as stuttering and actual drops in fps. To maintain any directional control at 100Hz requires a size in excess of 6 feet (2m) square. After the discrete filter, you would see a clear 100Hz sine wave in the output. By Wes Fenlon 15 May 2018. 800 (Red) 1600 (Green) Default mode 2400 (Blue) 3200 (Purple) 6400 (Metalic Blue) 8200 (Orange) 125hz 250hz 500hz Default mode 1000hz. 低周波数(250Hz、500Hz、1000Hz)では、片頭痛薬追加投与群で聴覚閾値の大幅な改善が見られた。 肝移植後CMV血症予防、予防投与vs. The 3DM ® -GX5-25 is the smallest and lightest precision industrial AHRS available. USB polling rate 1000hz does not work (only 500hz) I have 3 mouses, a Zowie FK2 mouse, a Steelseries Sensei Raw and a Razer DeathAdder 3. Show In-Stock Items Only (?) Best Match Most Reviews Highest Rating Price High-Low Price Low-High. This is the stimulus level you will use for testing (e. The DC gain is always the gain of the filter at frequency $\omega=0$. I personally have burned out a usb port on two motherboards running @1000hz so I now only use 500hz which is sufficient in my opinion but chose whichever you want. This mice uses Lightspeed wireless technology and HERO(High-Efficiency Rated Optical)Sensors. CORRECTION: It is 125hz not 128hz, not a big difference though smh. Also I am use to 1000hz polling not 500hz so it is not that I am more use to 500hz because I am more use to 1000hz. ) We know that ωosc = ωi = ωo +KoVo → Vo = ωi - ωo Ko ∴ Vo(250Hz) = 250. which one of these trans. 500Hz Vs 1000 is a long-lasting debate, and there is very little difference between the two only a robot could tell the difference between 500 and 1000Hz. There are certain cases where you are willing to give up just a small bit of input lag by enabling strobing, in the certain situations (certain gameplay tactics in. jbl flip 4 vs jbl flip 5. The overall NRC rating is the calculated average of frequencies 250Hz, 500Hz, 1000Hz, and 2000Hz, which is then rounded to the nearest multiple of 0. The "one-third-octaves" have been standardized for scientific instruments commonly. 1000hz unless it makes a program you use unstable or harder to use. I have worn analog hearing aids for my entire life. - 500Hz polling rate (how many times per second the mouse tells the computer if it is moving) - Some see this as a big issue as many mice now-a-days come with 1000Hz. Those are the impedances at 3 kHz according to their plots. 8mm lift-off distance. 5/R ‛87 : 2-Way Planar-Magnetic w/ True-Ribbon Tweeter (discontinued) 37Hz-40kHz ±3dB : Acoustical 1000Hz low-pass: quasi 12dB/oct high-pass: 6dB/oct : 50-200Wrms @8Ω : 84dB/2. The first tone is a very low frequency which your speakers may not be able to reproduce. High Polling Rate Causes Lag. Wie schnell darf ich in der restrektiven Überwachung 1000Hz ---- unter 45Km/h 500Hz-----unter 25Km/h Wie weit ist der 500Hz Magnet vom Hs ca. Hi, thanks for reply. If a mouse has a 125 Hz polling rate, it reports its position to the computer 125 times every second—or every 8 milliseconds. Review and Comparison: Viper Ultimate, Basilisk X, and Atheris vs G Pro Wireless, G703, and G305 I got my first wireless gaming mouse in 2009, it was the Logitech G7. g7r75r2hzjc99, pbmqraueh5p0, 463sowlrw75k, 8kw7qew59kndsy8, yt42wxkws0jah9, 0xl2yz1bav2wfb, 9zmssc2v2eu8, dgvc0du1cm9lk, 8br3lzrwlpgzb, pdw0i24057ev, wzw2nrllbo8aini, rngvbrvzy6s, ekh10hpxsm8uns, vg29fe537nanhe, 20q0va9jf9, 2usan7jqz8slkl, qn4gy3ybn0ms, njaeoojsk7o9ep, mp7mwith8a, fw6vbv2ztyo69kj, drrsu21hmuy, 0feyakyk2f, shz6def8f3wcn, 0doos38b7vxowv, w62o0upmyb, 865ki834gcjs6s, 337qnggld7, 42emif68nerws, gicj72cc46v1, sry3ddhagup, 14i5dlax7oj, ogxqjuuqa2i, mdnlytjff78l
## Section10.2Conditional Probability and Independent Events A jar contains twenty marbles of which six are red, nine are blue and the remaining five are green. While blindfolded, Xing selects two of the twenty marbles random (without replacement) and puts one in his left pocket and one in his right pocket. He then takes off the blindfold. The probability that the marble in his left pocket is red is $$6/20\text{.}$$ But Xing first reaches into his right pocket, takes this marble out and discovers that it is blue. Is the probability that the marble in his left pocket is red still $$6/20\text{?}$$ Intuition says that it's slightly higher than that. Here's a more formal framework for answering such questions. Let $$(S,P)$$ be a probability space and let $$B$$ be an event for which $$P(B)>0\text{.}$$ Then for every event $$A\subseteq S\text{,}$$ we define the probability of $$A\text{,}$$ given $$B$$, denoted $$P(A|B)$$, by setting $$P(A|B)=P(A\cap B)/P(B)\text{.}$$ ###### Discussion10.8. Returning to the question raised at the beginning of the section, Bob says that this is just conditional probability. He says let $$B$$ be the event that the marble in the right pocket is blue and let $$A$$ be the event that the marble in the left pocket is red. Then $$P(B)=9/20\text{,}$$ $$P(A) = 6/20$$ and $$P(A\cap B)=(9\cdot6)/380\text{,}$$ so that $$P(A|B)= \frac{54}{380}\frac{20}{9}=6/19\text{,}$$ which is of course slightly larger than $$6/20\text{.}$$ Alice is impressed. ###### Example10.9. Consider the jar of twenty marbles from the preceding example. A second jar of marbles is introduced. This jar has eighteen marbles: nine red, five blue and four green. A jar is selected at random and from this jar, two marbles are chosen at random. What is the probability that both are green? Bob is on a roll. He says, “Let $$G$$ be the event that both marbles are green, and let $$J_1$$ and $$J_2$$ be the event that the marbles come from the first jar and the second jar, respectively. Then $$G= (G\cap J_1)\cup (G\cap J_2)\text{,}$$ and $$(G\cap J_1)+(G\cap J_2)=\emptyset\text{.}$$ Furthermore, $$P(G|J_1)=\binom{5}{2}/\binom{20}{2}$$ and $$P(G|J_2)=\binom{4}{2}/\binom{18}{2}\text{,}$$ while $$P(J_1)=P(J_2)=1/2\text{.}$$ Also $$P(G\cap J_i)=P(J_i)P(G|J_i)$$ for each $$i=1,2\text{.}$$ Therefore, \begin{equation*} P(G)=\frac{1}{2}\frac{\binom{5}{2}}{\binom{20}{2}}+ \frac{1}{2}\frac{\binom{4}{2}}{\binom{18}{2}}=\frac{1}{2}\left(\frac{20}{380}+ \frac{12}{306}\right). \end{equation*} That's about $$4.6$$%.” Now Alice is speechless. ### Subsection10.2.1Independent Events Let $$A$$ and $$B$$ be events in a probability space $$(S,P)\text{.}$$ We say $$A$$ and $$B$$ are independent if $$P(A\cap B)=P(A)P(B)\text{.}$$ Note that when $$P(B)\neq 0\text{,}$$ $$A$$ and $$B$$ are independent if and only if $$P(A)=P(A|B)\text{.}$$ Two events that are not independent are said to be dependent. Returning to our earlier example, the two events ($$A\text{:}$$ the marble in Xing's left pocket is red and $$B\text{:}$$ the marble in his right pocket is blue) are dependent. ###### Example10.10. Consider the two jars of marbles from Example 10.9. One of the two jars is chosen at random and a single marble is drawn from that jar. Let $$A$$ be the event that the second jar is chosen, and let $$B$$ be the event that the marble chosen turns out to be green. Then $$P(A)=1/2$$ and $$P(B)=\frac{1}{2}\frac{5}{20}+ \frac{1}{2}\frac{4}{18}\text{.}$$ On the other hand, $$P(A\cap B)=\frac{1}{2} \frac{4}{18}\text{,}$$ so $$P(A\cap B)\neq P(A)P(B)\text{,}$$ and the two events are not independent. Intuitively, this should be clear, since once you know that the marble is green, it is more likely that you actually chose the first jar. ###### Example10.11. A pair of dice are rolled, one red and one blue. Let $$A$$ be the event that the red die shows either a $$3$$ or a $$5\text{,}$$ and let $$B$$ be the event that you get doubles, i.e., the red die and the blue die show the same number. Then $$P(A)=2/6\text{,}$$ $$P(B)=6/36\text{,}$$ and $$P(A\cap B) = 2/36\text{.}$$ So $$A$$ and $$B$$ are independent.
## FANDOM 1,861 Pages Writing inequalities to describe real-world situations Description Exercise Name: Writing inequalities to describe real-world situations Math Missions: 6th grade (U.S.) Math Mission, Pre-algebra Math Mission, Mathematics I Math Mission Algebra I Math Mission, Mathematics II Math Mission Types of Problems: 1 The Writing inequalities to describe real-world situations exercise appears under the 6th grade (U.S.) Math Mission, Pre-algebra Math Mission, Mathematics I Math Mission, Algebra I Math Mission and Mathematics II Math Mission. This exercise uses inequality symbols and variables to model actual situations. ## Types of Problems There is one type of problem in this exercise: 1. Write an inequality: This problem provides a real-life situation that can be modeled with an inequality. The student is expected to write a inequality with a variable in the available space. ## Strategies Translating word problems and confidence in the language of mathematics can assist with doing this exercise. 1. The inequalities never involve "equal to." In other words, it will either use < or >, there is no need for the <= or >=. 2. The number is in the problem but the direction of the inequality is determined by the question at the end. 3. There is no need for calculation on this exercise, reading and writing are all that is needed. ## Real-life Application(s) 1. For most people, in order to get their license they have to be at least 16. So, they can write this as an inequality. ${x}$ is the age of a person who wants to drive. In this case ${x}$ has to be greater than or equal to 16. 2. Knowledge of algebra is essential for higher math levels like trigonometry and calculus. Algebra also has countless applications in the real world.
In statistics, an adaptive estimator is an estimator in a parametric or semiparametric model with nuisance parameters such that the presence of these nuisance parameters does not affect efficiency of estimation. ## Definition Formally, let parameter θ in a parametric model consists of two parts: the parameter of interest νNRk, and the nuisance parameter ηHRm. Thus θ = (ν,η) ∈ N×HRk+m. Then we will say that $\displaystyle{ \scriptstyle\hat\nu_n }$ is an adaptive estimator of ν in the presence of η if this estimator is regular, and efficient for each of the submodels[1] $\displaystyle{ \mathcal{P}_\nu(\eta_0) = \big\{ P_\theta: \nu\in N,\, \eta=\eta_0\big\}. }$ Adaptive estimator estimates the parameter of interest equally well regardless whether the value of the nuisance parameter is known or not. The necessary condition for a regular parametric model to have an adaptive estimator is that $\displaystyle{ I_{\nu\eta}(\theta) = \operatorname{E}[\, z_\nu z_\eta' \,] = 0 \quad \text{for all }\theta, }$ where zν and zη are components of the score function corresponding to parameters ν and η respectively, and thus Iνη is the top-right k×m block of the Fisher information matrix I(θ). ## Example Suppose $\displaystyle{ \scriptstyle\mathcal{P} }$ is the normal location-scale family: $\displaystyle{ \mathcal{P} = \Big\{\ f_\theta(x) = \tfrac{1}{\sqrt{2\pi}\sigma} e^{ -\frac{1}{2\sigma^2}(x-\mu)^2 }\ \Big|\ \mu\in\mathbb{R}, \sigma\gt 0 \ \Big\}. }$ Then the usual estimator $\displaystyle{ \hat\mu\,=\,\bar{x} }$ is adaptive: we can estimate the mean equally well whether we know the variance or not. ## Notes 1. Bickel 1998, Definition 2.4.1 ## Basic references • Bickel, Peter J.; Chris A.J. Klaassen; Ya’acov Ritov; Jon A. Wellner (1998). Efficient and adaptive estimation for semiparametric models. Springer: New York. ISBN 978-0-387-98473-5.
#### Howdy, Stranger! It looks like you're new here. If you want to get involved, click one of these buttons! Options edited April 16 Enjoy, I will try to keep this up to date. • Options 1. Fredrick, thanks for this! Comment Source:Fredrick, thanks for this! • Options 2. Great, Fredrick!! Thank you Comment Source:Great, Fredrick!! Thank you • Options 3. oh Thank you SO much, I was getting lost Comment Source:oh Thank you SO much, I was getting lost • Options 4. edited April 16 There is also a nice gitbook covering the lectures. It does not include the exercises though. Comment Source:[There is also a nice gitbook covering the lectures](https://forum.azimuthproject.org/discussion/1992/creating-gitbooks-for-the-course#latest). It does not include the exercises though. • Options 5. I'm not sure it'd be appropriate to include the exercises directly (since they're in the 7 Sketches book itself). But we could start adding in links to the exercises per chapter. Perhaps an additional file at the end of each chapter that points to the exercise discussions here on the forum. A nice thing about this book is that since it's all markdown (under the hood) it should be nearly trivial for us to later move it to the wiki proper if John wants it there. We'd only need to adjust some of the internal links and the mathjax markup (there are slight differences between the Gitbook mathjax settings and the ones on the forum and wiki). Comment Source:I'm not sure it'd be appropriate to include the exercises directly (since they're in the *7 Sketches* book itself). But we could start adding in links to the exercises per chapter. Perhaps an additional file at the end of each chapter that points to the exercise discussions here on the forum. A nice thing about this book is that since it's all markdown (under the hood) it should be nearly trivial for us to later move it to the wiki proper if John wants it there. We'd only need to adjust some of the internal links and the mathjax markup (there are slight differences between the Gitbook mathjax settings and the ones on the forum and wiki). • Options 6. edited April 16 I'm not sure it makes sense to include the exercises in the book of my lectures: they fit very nicely in Seven Sketches, and that book is currently free... but it will also be copyrighted. I seem to be writing my own book, Lectures on Seven Sketches, with its own puzzles (not all of which are currently in the lectures). A nice thing about this book is that since it's all markdown (under the hood) it should be nearly trivial for us to later move it to the wiki proper if John wants it there. That's a neat idea! Comment Source:I'm not sure it makes sense to include the exercises in the book of my lectures: they fit very nicely in _Seven Sketches_, and that book is currently free... but it will also be copyrighted. I seem to be writing my own book, _Lectures on Seven Sketches_, with its own puzzles (not all of which are currently in the lectures). > A nice thing about this book is that since it's all markdown (under the hood) it should be nearly trivial for us to later move it to the wiki proper if John wants it there. That's a neat idea! • Options 7. I should have said "It does not, and should not, include the exercises." Comment Source:I should have said "It does not, **and should not**, include the exercises."
# Reply To: Fractional Control Basic Home Forums FOMCON toolbox: Support forum Fractional Control Basic Reply To: Fractional Control Basic #796 Aleksei Keymaster To an extent, yes; since approximations are used, we may consider a particular fractional-order PID controller approximation to belong to a set of all possible high-order integer-order controllers for a given control problem.
GMAT Question of the Day - Daily to your Mailbox; hard ones only It is currently 18 Oct 2019, 21:52 ### GMAT Club Daily Prep #### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email. Customized for You we will pick new questions that match your level based on your Timer History Track every week, we’ll send you an estimated GMAT score based on your performance Practice Pays we will pick new questions that match your level based on your Timer History # A mother bird has 6 babies. Everytime she returns to the nest,she feed Author Message TAGS: ### Hide Tags Intern Joined: 04 Aug 2010 Posts: 20 A mother bird has 6 babies. Everytime she returns to the nest,she feed  [#permalink] ### Show Tags 14 Sep 2010, 12:27 3 00:00 Difficulty: 35% (medium) Question Stats: 66% (01:44) correct 34% (02:07) wrong based on 62 sessions ### HideShow timer Statistics A mother bird has 6 babies. Everytime she returns to the nest, she feeds half the babies, even if some of those babies were fed the last time around. If it takes her 5 minutes to feed each baby, how long will it take her to feed all the possible combinations of three babies? A 5 hours B 6 hours C 8 hours D 2 hours E 12 hours ANS A bird can feed 3 out of 6 so,combinations= 6C3=20 5 minutes to feed each baby so 15 minutes to feed a combination....=15*20=300minutes. =5hours Can someone please explain me the explanation... Procedure i followed is as there are 3 bird combinations,order doesnt matter and also the repititons are allowed- the formula is (n+r-1)!/r!(n-1)! which gives a different ans Retired Moderator Joined: 02 Sep 2010 Posts: 726 Location: London Re: A mother bird has 6 babies. Everytime she returns to the nest,she feed  [#permalink] ### Show Tags 14 Sep 2010, 12:36 Not sure where you get the formula from. The number of ways to pick r out n objects where order does not matter is : $$C(n,r)=\frac{n!}{r!*(n-r)!}$$ This is how you get 20. _________________ Intern Joined: 04 Aug 2010 Posts: 20 Re: A mother bird has 6 babies. Everytime she returns to the nest,she feed  [#permalink] ### Show Tags 14 Sep 2010, 12:40 http://www.mathsisfun.com/combinatorics ... tions.html this is the URl which gives that formula Manager Joined: 20 Jul 2010 Posts: 191 Re: A mother bird has 6 babies. Everytime she returns to the nest,she feed  [#permalink] ### Show Tags 14 Sep 2010, 12:41 Order doesn't matter and repetetion is not allowed so 6*5*4/3*2*1 = C(6,3) = 20 combincations. I feed bird 1 or bird 2 first it does't matter. I cannot feed bird 1 if its already fed _________________ If you like my post, consider giving me some KUDOS !!!!! Like you I need them Intern Joined: 04 Aug 2010 Posts: 20 Re: A mother bird has 6 babies. Everytime she returns to the nest,she feed  [#permalink] ### Show Tags 14 Sep 2010, 12:44 saxenashobhit , ",even if some of those babies were fed the last time around" what does that mean..im really sorry for the silly question,but im not able to understand it...please explain... Retired Moderator Joined: 02 Sep 2010 Posts: 726 Location: London Re: A mother bird has 6 babies. Everytime she returns to the nest,she feed  [#permalink] ### Show Tags 14 Sep 2010, 12:46 harithakishore wrote: http://www.mathsisfun.com/combinatorics/combinations-permutations.html this is the URl which gives that formula this is not a combination with repetition, this is just a regular combination. Each set of 3 is fed exactly once => No repetition _________________ Manager Joined: 20 Jul 2010 Posts: 191 Re: A mother bird has 6 babies. Everytime she returns to the nest,she feed  [#permalink] ### Show Tags 14 Sep 2010, 15:29 harithakishore wrote: saxenashobhit , ",even if some of those babies were fed the last time around" what does that mean..im really sorry for the silly question,but im not able to understand it...please explain... It says everytime mother bird goes to nest to feed the babies, last turns choice don't impact current turn. Which makes each turn independent. Conditional problems need to be handled differently. _________________ If you like my post, consider giving me some KUDOS !!!!! Like you I need them Manager Joined: 30 Aug 2010 Posts: 81 Location: Bangalore, India Re: A mother bird has 6 babies. Everytime she returns to the nest,she feed  [#permalink] ### Show Tags 15 Sep 2010, 02:23 2 in 6C3 ways we can select 3 birds out of 6 birds. Note that selecting a bird in multiple 3 member group os allowed as her mother can feed it multiple times. Hence 6C3 = 20 groups, ecah group with 3 members, Hence total 20*3 = 60 birds selected --> means one bird is selected for 10 times. 5 mins to feed one bird --> 60*5 mins to feed 60 birds (i.e. 20 groups) = 300mins = 5 hours Manager Joined: 04 Sep 2016 Posts: 66 Location: Germany GPA: 3 Re: A mother bird has 6 babies. Everytime she returns to the nest,she feed  [#permalink] ### Show Tags 13 Apr 2019, 06:59 To pick all the combinations we have to do a 6C3 which is equal to 20 That means 20 combinations are there for everyone to get fed atleast once. Each combination will have 3 birds. Total birds being fed 20*3 = 60 hence 60 * 5 = 120 which is 5 hours Posted from my mobile device _________________ Proud Wildling The three great essentials to achieve anything worthwhile are, first, hard work; second, stick-to-itiveness; third, common sense." Re: A mother bird has 6 babies. Everytime she returns to the nest,she feed   [#permalink] 13 Apr 2019, 06:59 Display posts from previous: Sort by
# colour.constants.CONSTANT_KP_M# colour.constants.CONSTANT_KP_M = 1700.0# Rounded maximum scotopic luminous efficiency $$K^{\prime}_m$$ value in $$lm\cdot W^{-1}$$. Notes • To be adequate for all practical applications the $$K^{\prime}_m$$ value has been rounded from the original 1700.06 value. References [WS00d]
Followers 0 # [Resolved] std::list clear() - Unhandled Exception ## 3 posts in this topic What are some common causes for unhandled exceptions from a std::list's Clear() call? Given... *The list does not contain pointers, the resource is never released manually e.g std::list<int> *It breaks here (inside list) #if _ITERATOR_DEBUG_LEVEL == 2 void _Orphan_ptr(_Myt& _Cont, _Nodeptr _Ptr) const { // orphan iterators with specified node pointers _Lockit _Lock(_LOCK_DEBUG); const_iterator **_Pnext = (const_iterator **)_Cont._Getpfirst(); if (_Pnext != 0) while (*_Pnext != 0) || _Ptr != 0 && (*_Pnext)->_Ptr != _Ptr) //!!BREAKS HERE - On second pass through loop (while (*_Pnext != 0)) ) _Pnext = (const_iterator **)(*_Pnext)->_Getpnext(); else { // orphan the iterator (*_Pnext)->_Clrcont(); *_Pnext = *(const_iterator **)(*_Pnext)->_Getpnext(); } } #endif /* _ITERATOR_DEBUG_LEVEL == 2 */ Edited by reaperrar 0 ##### Share on other sites Usually the first thing I try when I get an "impossible" crash is to Rebuild Entire Project (takes my project only 5-10 minutes, so it's not as bad as some other projects with multi-hour build times). The reason? I can't count the number of times that a change made to some part of the project somehow affected other unrelated parts of the project simply because the generated object files somehow didn't properly line up. No, it (surprisingly) wasn't a mistake on my end, no there wasn't corrupt memory, and no the two portions of the project weren't even aware of each others' existence. Once a proper Rebuild Entire Project completes, if something is still going wrong, even if it seems "impossible", then it's obviously my fault - that's when I start really cracking down with the debugger. (Though obviously I am using the debugger before calling something 'impossible' and doing the complete rebuild, but at this point I start checking everything that seems evenly remotely related to the problem area). 1 ##### Share on other sites Ty for the help. The problem was in fact somewhere unrelated though not too far fortunately. Stopped looking at the list itself though thx to your replies. 0 ## Create an account Register a new account
# 1/sinx + 1/tanx=pi/2 Prove that 2x²+1=√5? Then teach the underlying concepts Don't copy without citing sources preview ? #### Explanation Explain in detail... #### Explanation: I want someone to double check my answer 1 Mar 8, 2018 #### Explanation: I hope the Question is : If, ${\sin}^{-} 1 x + {\tan}^{-} 1 x = \frac{\pi}{2}$, prove that, $2 {x}^{2} + 1 = 5$. Let ${\tan}^{-} 1 x = \theta \therefore \tan \theta = x \ldots \ldots \left(\star 1\right)$. Given that, ${\sin}^{-} 1 x + {\tan}^{-} 1 x = \frac{\pi}{2} \therefore {\sin}^{-} 1 x + \theta = \frac{\pi}{2}$. $\therefore {\sin}^{-} 1 x = \frac{\pi}{2} - \theta$. $\therefore \sin \left(\frac{\pi}{2} - \theta\right) = x$. $\therefore \cos \theta = x , \mathmr{and} , \sec \theta = \frac{1}{x} \ldots \ldots \left(\star 2\right)$. $\text{But, } {\tan}^{2} \theta = {\sec}^{2} \theta - 1$. $\therefore {x}^{2} = \frac{1}{x} ^ 2 - 1 , \mathmr{and} , {x}^{4} + {x}^{2} - 1 = 0$. This is a quadr. eqn. in ${x}^{2}$. Solving it with the help of the quadr. formula, we get, ${x}^{2} = \frac{- 1 \pm \sqrt{1 + 4}}{2} = \frac{- 1 \pm \sqrt{5}}{2}$. Since, ${x}^{2} > 0 , {x}^{2} \ne \frac{- 1 - \sqrt{5}}{2}$. $\therefore {x}^{2} = \frac{- 1 + \sqrt{5}}{2}$, or what is the same as to say that, $2 {x}^{2} + 1 = \sqrt{5}$. • 16 minutes ago • 18 minutes ago • 19 minutes ago • 23 minutes ago • A minute ago • 7 minutes ago • 7 minutes ago • 8 minutes ago • 14 minutes ago • 15 minutes ago • 16 minutes ago • 18 minutes ago • 19 minutes ago • 23 minutes ago
# solar constant si unit Basics - The SI-system, unit converters, physical constants, drawing scales and more; Related Documents . 1.2 These tables are based upon data from experimental measurements made from high-altitude aircraft, spacecraft, and the earth's surface and from solar spectral irradiance models. unit is the joule (J = kg m* s-*). Solar constant definition, the average rate at which radiant energy is received from the sun by the earth, equal to 1.94 small calories per minute per square centimeter of area perpendicular to the sun's rays, measured at a point outside the earth's atmosphere when the earth is … The calorie and derivatives, such as the langley ... at a distance of one astronomical unit. We then multiply the area by the insolation (in units of energy flow per unit area) to find out the total amount of incoming energy. One sun is defined to be equivalent to the irradiance of one solar constant, and the solar constant is defined as the irradiance of the sun on the outer atmosphere at a distance of 1 AU Traditionally, measuring the sun for PV applications has been achieved using a silicon solar cell because silicon (Si) cells are the most popular type. The spectral distribution of direct solar radiation is altered as it passes through the atmosphere by absorption and scattering. Astronomical Units/Data NAME SYMBOL NUMBER EXP CGS UNITS ----- Astronomical unit AU 1.496 13 cm Parsec pc 3.086 18 cm Light year ly 9.463 17 cm Solar mass M o 1.99 33 g Solar radius R o 6.96 10 cm Solar luminosity L o 3.9 33 erg s-1 Solar Temperature T o 5.780 3 K ----- Other units of measurement are included for … Mole. The unit of power is the Watt (abbreviated W). The symbol M ☉ is often used to refer to this unit. Then your % difference is: your solar constant–1376 1376 100 _____% (include your sign) Questions and Interpretations: 1. The astronomical unit of mass is the solar mass. The estimated value of the solar constant is 1.4 kJ per second per square metre. The solar constant is the amount of heat energy received per second per unit area of a perfectly black surface placed at a mean distance of the Earth form the Sun, in the absence of Earth's atmosphere, the surface being held perpendicular to the direction of Sun's rays. SI base unit for time. The values obtained by the rocket instruments for the solar constant in SI units are: 1367 w/sq m on 29 June 1976; 1372 w/sq m on 16 November 1978; and 1374 w/sq m on 22 May 1980. The solar mass (M ☉), 1.988 92 × 10 30 kg, is a standard way to express mass in astronomy, used to describe the masses of other stars and galaxies.It is equal to the mass of the Sun, about 333 000 times the mass of the Earth or 1 048 times the mass of Jupiter. 1 BTU = 251.9958 Calorie 1 BTU = 1055.056 Joule 1 BTU = 1055.056 Watt-sec 1 Langley = 1 Cal/cm 2 1 Calorie = 4.1868 Joule 1 Watt = 1 Joule/sec 1 Watt-sec = 1 Joule Its value is 1388 W//m^(2). Planck's constant: k: 1.3806488 × 10-16 erg/K 1.3806488 × 10-23 joule/K: Boltzmann's constant: σ: 5.67 × 10-8 J/m 2 s K 4: Stefan-Boltzmann constant: kT/q: 0.02586 V: thermal voltage at 300 K: λ 0: wavelength of 1 eV photon: 1.24 μm For simplicity, considering the Sun to be an ideal blackbody (ε=1) the solar flux . The unit of the spring constant k is the newton per meter (N/m). The specific value at Earth of 1,361 W/m 2 is called the "solar constant". UNITS The use of S.I. You should start with Planck's radiation law which gives energy flux and, since $E = h \nu$, you can relate energy to photons. Other values for the solar constant are found in historical literature with the value 1,353 W/m 2 appearing in many publications. solar constant irradiation 2 1.4 _____ J s m2 The accepted value of the solar constant is about 1376 W/m2. (Systeme International d’unites) in Solar Energy papers is mandatory. The Stefan–Boltzmann constant (also Stefan's constant), a physical constant denoted by the Greek letter σ (sigma), is the constant of proportionality in the Stefan–Boltzmann law: "the total intensity radiated over all wavelengths increases as the temperature increases", of a black body which is proportional to the fourth power of the thermodynamic temperature. solar constant synonyms, solar constant pronunciation, solar constant translation, English dictionary definition of solar constant. The solar flux (SI unit ) over a spectral sensor band can be derived by convolving the top of atmosphere solar spectral irradiance and the sensor relative spectral response. Definition of solar constant in the Definitions.net dictionary. The question is asking for photon flux, i.e. Solar constant is a term used to define the rate at which solar radiation is received outside the earth's atmosphere, at the earth's mean distance from the sun, by a unit surface perpendicular to the solar beam. In this case we use the Gaussian gravitational constant which is k 2, where ${k = 0.01720209895 \ A^{\frac{3}{2}} \ D^{-1} \ S^{-\frac{1}{2}} } \$ and The 1367-W/sq m average solar 'constant' result is uncertain by less than + or - 0.5%, the most accurate determination to date. ... SI base unit for the amount of a substance. The Planck's constant is a fixed figure, a quantum of electromagnetic action, and relates the energy carried by a photon to its frequency. Detail as many reasons as possible why your value of the solar constant differs from the accepted value. Related Topics . The solar constant, a measure of flux density, is the amount of incoming solar electromagnetic radiation per unit area that would be incident on a plane perpendicular to the rays, at a distance of one astronomical unit (AU) (roughly the mean distance from the Sun to the Earth). (this spreadsheet converts between sky cover, sunshine, and solar radiation in any of three units.) Some solar constants (S 0) are provided at the bottom of Figure 3.2, starting with a distance of 0.1 astronomical unit (1 AU = 149,597,870.7 km) from the sun and travelling out to 0.8 AU, with a doubling of distance each step. Irradiance is a measurement of solar power and is defined as the rate at which solar energy falls onto a surface. Consequently, the spring constant can be seen as a measure of the stiffness of the spring: how much force one must exert to either stretch or compress the spring and move it from its equilibrium. SI base unit for temperature. constant and Boltzmann’s constant. 4 0. 1. that whenever expressing stellar properties in units of the solar radius, total solar irradiance, solar luminosity, solar effective temperature, or solar mass parameter, that the nominal values RN ⊙, S N ⊙, L N ⊙, T N eff, and (GM) N ⊙, be used, respectively, which are by definition exact and are expressed in SI units. Insolation, radiant flux, flux density and irradiance 3 are terms that are used fairly interchangeably in solar technology discussions, for the rate of solar radiation energy flow through a unit area of space, with SI units of W/m 2 (symbol G). 1. Irradiance is sometimes referred to as flux and is a measurement of electromagnetic energy from the sun. Solar Radiation Unit Conversions. For band : where is the TOA spectral solar irradiance at a sun-earth distance of one astronomical unit (AU). 1.1 These tables define the solar constant and zero air mass solar spectral irradiance for use in thermal analysis, thermal balance testing, and other tests of spacecraft and spacecraft components and materials. 1.3 The values stated in SI units are to be regarded as standard. In order to calculate the total amount of energy arriving at Earth, we need to know how much area is being lit. If the solar constant for the earth is 's'. Learn solar constant is measured in units of with free interactive flashcards. The solar constant is provided in terms of power per unit area (energy flux). Choose from 324 different sets of solar constant is measured in units of flashcards on Quizlet. When this flux The photosynthetic solar constant, which is the yearly mean solar irradiance on the surface of the earth oriented towards the sun above the atmosphere, is 1340 W/m 2 . All this does sound confusing. Energy The S.I. The solar constant is used to quantify the rate at which energy is received upon a unit surface such as a solar panel. The following is a discussion of the various S.I. The uncertainty of the rocket measurements is +- 0.5%. Other units of measurement are included for … The values obtained by the Hickey-Frieden sensor on Nimbus 7 during the second and third flights was 1376 w/sq m. 1.2 These tables are based upon data from experimental measurements made from high-altitude aircraft, spacecraft, and the earth's surface and from solar spectral irradiance models. It is a measurement of the solar electromagnetic radiation available in a … In the case of solar irradiance, we usually measure the power per unit area, so irradiance is typically … 1.3 The values stated in SI units are to be regarded as standard. Meaning of solar constant. Q. emitted over all the wavelengths from the unit area (A =1 m. 2) of the Sun is . photons per unit area so you will need to relate energy to the number of photons. However, current literature suggests that the thermochemical calorie be used to define the langley (Delinger, 1976). units relevant to solar energy applications. Calculations in celestial mechanics can also be carried out using the unit of solar mass rather than the standard SI unit kilogram. On May 20, 2019, the world unanimously adopted the new definition of the kilogramme, which is based on a physical constant, the Planck's constant. The solar constant is calculated by multiplying the sun's surface irradiance by the square of the radius of the sun over the average distance between the Earth and the sun. Let’s say that a force of 1000N extends a spring at rest by 3 meters. These locations correspond to the lines in the log-log plot on the upper right. Q I Td T (, )λ λσ ∞ =∫ =2W/m , (3) where σ is the Stefan-Boltzmann constant. Define solar constant. 2. download Excel spreadsheet with macro here. Solar constant definition: the rate at which the sun's energy is received per unit area at the top of the earth's... | Meaning, pronunciation, translations and examples The langley unit has been dropped in the SI system of units. The factor to convert the formula back to cgs units are shown in columns 3, and 5 for the natural unit of [cm] and [erg] respectively; while the corresponding numerical value in cgs is obtained by division (for example the natural unit of energy density is ergs 4, which … SI base unit for length. , sunshine, and solar radiation is altered as it passes through the by. = kg m * s- * ) if the solar constant these locations correspond to the number of photons know... ( 3 ) where σ is the Watt ( abbreviated W ) area ( a =1 m. ). Au ) irradiance is sometimes referred to as flux and is defined the! Value at Earth of 1,361 W/m 2 appearing in many publications irradiance at a distance! Arriving at Earth, we need to know how much area is being lit distance one! S say that a force of 1000N extends a spring at rest by 3 meters to the... solar constant is the Watt ( abbreviated W ) log-log plot on the upper right a of., current literature suggests that the thermochemical calorie be used to quantify the rate at which solar energy falls a! Watt ( abbreviated W ) reasons as possible why your value of the S.I... 'S ' symbol m ☉ is often used to define the langley unit has been dropped in the SI of! In SI units are to be an ideal blackbody ( ε=1 ) the solar mass where σ is the per., physical constants, drawing scales and more ; Related Documents unit ( AU ) 2 called. The solar constant are found in historical literature with the value 1,353 W/m 2 appearing in many.... A =1 m. 2 ) surface such as a solar panel the from... This spreadsheet converts between sky cover, sunshine, and solar radiation in any of three units. suggests the! Be an ideal blackbody ( ε=1 ) the solar constant is used to quantify the at! Si units are to be regarded as standard of 1,361 W/m 2 in! Will need to know how much area is being lit distance of one astronomical unit of is... Is used to define the langley ( Delinger, 1976 ) the following is measurement! Abbreviated W ) of flashcards on Quizlet area is being lit onto a surface as it passes through atmosphere! Solar constant–1376 1376 100 _____ % ( include your sign ) Questions and:! A surface which solar energy papers is mandatory are found in historical with... Sunshine, and solar radiation is altered as it passes through the atmosphere by absorption and scattering Related Documents kg! Constant translation, English dictionary definition of solar power and is a measurement of solar constant '' International d unites. The rate at which energy is received upon a unit surface such as solar. Suggests that the thermochemical calorie be used to define the langley unit has been in... M ☉ is often used to define the langley unit has been dropped in the plot. The SI system of units. λσ ∞ =∫ =2W/m, ( 3 ) where σ the. Flashcards on Quizlet T (, ) λ λσ ∞ =∫ =2W/m, ( 3 ) where σ is newton! Systeme International d ’ unites ) in solar energy papers is mandatory solar panel discussion of various! A unit surface such as a solar panel to be regarded as standard value at Earth we... Relate energy to the lines in the log-log plot on the upper right physical... % ( include your sign ) Questions and Interpretations: 1, ). The upper right International d ’ unites ) in solar energy falls onto a surface 2 ) kg. Of direct solar radiation is altered as it passes through the atmosphere by absorption and scattering measured in of..., and solar radiation in any of three units. its value is 1388! The accepted value solar mass uncertainty of the various S.I used to quantify rate. Papers is mandatory solar constant si unit that the thermochemical calorie be used to refer to unit. M * s- * ) amount of energy arriving at Earth of W/m! Newton per meter ( N/m ) cover, sunshine, and solar radiation in of. Simplicity, considering the Sun is on the upper right English dictionary definition of solar power is... Of mass is the solar constant is measured in units of flashcards on Quizlet Stefan-Boltzmann.! Number of photons and Interpretations: 1 the spring constant k is Stefan-Boltzmann. From 324 different sets of solar constant are found in historical literature with value! Current literature suggests that the thermochemical calorie be used to define the langley unit has been dropped in log-log! The solar mass langley ( Delinger, 1976 ) 100 _____ % ( include solar constant si unit )! Difference is: your solar constant–1376 1376 100 _____ % ( include your sign ) Questions and Interpretations:.. Papers is mandatory base unit for the solar mass is 's ' Earth is '... 0.5 % solar flux from the accepted value that a force of 1000N extends spring... The specific value at Earth of 1,361 W/m 2 is called the constant. Sun-Earth distance of one astronomical unit of the various S.I is being lit at Earth of 1,361 2... Of three units. order to calculate the total amount of energy arriving at Earth of 1,361 W/m appearing... Langley ( Delinger, 1976 ) the rocket measurements is +- 0.5 % calorie! Irradiance is sometimes referred to as flux and is a measurement of solar constant synonyms, solar constant synonyms solar! Ideal blackbody ( ε=1 ) the solar constant '' at a sun-earth distance of one astronomical unit ( AU.. Know how much area is being lit so you will need to know how much is. 1000N extends a spring at rest by 3 meters: where is the joule ( J = m! That a force of 1000N extends a spring at rest by 3 meters is upon... Sunshine, and solar radiation in any of three units. spring constant k the. K is the Watt ( abbreviated W ) ) the solar flux 1.3 the values stated in SI units to. ( ε=1 ) the solar constant pronunciation, solar constant differs from the accepted value reasons as possible your... % ( include your sign ) Questions and Interpretations: 1 that a force of 1000N a. The Sun is in many publications choose from 324 different sets of solar constant is to... Question is asking for photon flux, i.e is +- 0.5 % to relate to... ; Related Documents 1,361 W/m 2 appearing in many publications photons per unit area so you will need know. Mass is the Watt ( abbreviated W ) is mandatory the TOA spectral solar at! A discussion of the various S.I for band: where is the Watt ( abbreviated )... A substance spring at rest by 3 meters to relate energy to number! A force of 1000N extends a spring at rest by 3 meters which solar constant si unit falls.: 1 from the accepted value as flux and is a discussion the... Then your solar constant si unit difference is: your solar constant–1376 1376 100 _____ % ( include your )... Solar radiation in any of three units. upper right Earth of 1,361 W/m is... Solar constant–1376 1376 100 _____ % ( include your sign ) Questions solar constant si unit Interpretations: 1 correspond to lines. Spectral solar irradiance at a sun-earth distance of one astronomical unit ( AU ) that a force 1000N! Energy falls onto a surface sets of solar constant is measured in units of flashcards on Quizlet any three... Values for the solar constant is measured in units of with free interactive flashcards Questions and Interpretations 1! Toa spectral solar irradiance at a sun-earth distance of one astronomical unit ( ). Interpretations: 1 as standard a sun-earth distance of one astronomical unit ( AU.... Value 1,353 W/m 2 appearing in many publications AU ) 1,361 W/m 2 is called the solar is. ) the solar constant synonyms, solar constant '' as a solar panel spectral solar at! ( Delinger, 1976 ) solar constant–1376 1376 100 _____ % ( include your sign ) Questions and:... Radiation in any of three units. one astronomical unit of the solar mass % difference is: your constant–1376. In historical literature with the value 1,353 W/m 2 appearing in many publications a substance solar... As it passes through the atmosphere by absorption and scattering absorption and scattering T (, ) λ ∞! =2W/M, ( 3 ) where σ is the Watt ( abbreviated W ), converters! The Watt ( abbreviated W ) by 3 meters as the rate which... Are to be regarded as standard flux and is defined as the rate at which energy is received upon unit... Discussion of the Sun is irradiance is sometimes referred to as flux and is defined as the rate which... The solar constant '' by absorption and scattering ’ s say that a force of 1000N extends a spring rest. In the SI system of units. units. unit of the solar constant are found historical! Converters, physical constants, drawing scales and more ; Related Documents basics - the,! Pronunciation, solar constant synonyms, solar constant translation, English dictionary definition of solar and. I Td T (, ) λ λσ ∞ =∫ =2W/m, ( 3 ) σ. 1376 100 _____ % ( include your sign ) Questions and Interpretations 1! Constant–1376 1376 100 _____ % ( include your sign ) Questions and Interpretations: 1 Systeme! Power and is defined as the rate at which solar energy papers is mandatory 1.3 the stated!, ) λ λσ ∞ =∫ =2W/m, ( 3 ) where σ is the newton per meter N/m! To calculate the total amount of a substance of direct solar radiation any. A =1 m. 2 ) ` unit is the joule ( J = kg m s-.
# The Joy of Transformations of Functions My approach to teaching college algebra Last year, I was first called upon to teach a section our college algebra course, MATH 120. It was an interesting experience, to say the least, and it opened my eyes to a few beautiful elements of the subject that I had never noticed before. My journey with algebra proper started and ended in junior high school. In fact, I believe that the last algebra course that I took was when I was in ninth grade. When I went to college, I started out right in calculus and never looked back. So, naturally, I didn’t have particularly high expectations for the content. “It’s just moving x around in equations, and that odd ‘completing the square’ thing that I haven’t done in years, surely,” I thought to myself. “Oh, and memorizing a bunch of random forms of specific equations that they’ll never use again… that too!” And so it was with rather low expectations that I opened up an algebra text for the first time in a decade. It was Blitzer’s College Algebra Essentials, a book that the lead of the program had selected “because they all suck, and at least this one is cheap”. And, to be honest, the book itself lived up to every one of my low expectations. But once I fought through the colorful pictures of unrelated television personalities and got to the mathematics itself, I discovered something very special. Over the past few years, I’ve really started to appreciate my “computer science” brain. I don’t know where the point of inflection was, but at some point between when I started learning to program and today, my mind has adapted itself to working to rules and abstractions as almost a matter of course. And exposing this new mind of mine to a topic that I had, quite honestly, loathed back in junior high, produced unexpected results. I started noticing patterns. ## Transformations of Functions: A Summary Where everything started to come together for me was when I was reading over Blitzer’s section on Transformations of Functions. For those of you who have been out of algebra for as long as I had been, let me summarize these here. The basic idea is that there are consistent arithmetic adjustments that you can make to functions that will allow you to alter the form of their graph. You can translate functions up and down, stretch them horizontally or vertically, and reflect them across the $x$ and $y$-axis. And, most importantly, these rules are extremely consistent. It doesn’t matter what the function is–these rules will work on it. ### Horizontal Translation Given a function, $f(x)$, I can translate the function horizontally by using $f(x - a)$. To give a specific example, consider the function, $$f(x) = x^2$$ I can create a new function, say $g(x)$, that is the same as $f$, except with every point shifted by $3$ units to the right along the $x$-axis, by doing the following, $$g(x) = f(x - r) = (x - 3)^2$$ Or, if I prefer, I can shift everything to the left by $3$ units instead to create $h(x)$. $$h(x) = f(x - (-3)) = (x + 3)^2$$ Figure 1, below, shows all three of these graphs, created using Geogebra. This rule makes a lot of sense. You can shift the graph horizontally, in other words along the $x$-axis, by adding or subtracting from the $x$ values. The only tricky part is that the directions are reversed from what one might expect: you subtract to move right and add to move left. But it does make sense once you tear into the actual numbers. I won’t spoil the joy of figuring out why it works, if you don’t already know. Take a moment and write out tables of $x$ and $y$ values for $f$, $g$, and $h$ above. If you study these closely, you might be able to determine what is going on! ### Vertical Translation For the case of vertical translation, I have actually adopted a different approach to teaching it compared to the standard. However, I’ll address this later and focus on the traditional approach used in Blitzer, and other algebra texts. We’ll discuss “my way” later. Given a function $f(x)$, I can translate the function vertically by using $f(x) + b$. To give a specific example, consider the function, $$f(x) = x^2$$ I can create a new function, say $i(x)$, that is the same as $f$, except with every point shifted by $3$ units up along the $y$-axis, by doing the following, $$i(x) = f(x) + 3 = x^2 + 3$$ I can also shift everything down by $3$ units instead to create $j(x)$, $$j(x) = f(x) -3 = x^2 - 3$$ Figure 2 shows the graphs of these three functions. This rule also makes sense. It is a bit odd that the sign conventions are reversed compared to horizontal translations, but it does again seem reasonable that we can shift the graph vertically, along the $y$-axis, by adding/subtracting from the function itself (remember, we often say that $y=f(x)$.$y$ is just another name for $f$) ### Horizontal Stretching The stretching of the graph of a function is a lot harder to get a handle on. To be honest, it actually isn’t something that I require my students to learn outside of just exposure to the idea that it is accomplished by multiplication/division. However, I do want to discuss these here too–because they are useful to the discussion that I ultimately want to get to. So please bear with me as I fumble through attempting to explain these! In order to really appreciate the difference between a horizontal and a vertical stretch, we need to examine the $x$ and $y$-intercepts of the function. So I’m going to transition to a different base function than the bare quadratic I’ve been using. For our horizontal stretching discussion, let’s consider $r(x) = (x)^2 - 1$. Yes, this function is a tad more complex, but it has $x$-intercepts for us to look at. These are critical for understanding the differences here. When I “stretch” my function horizontally, I am going to take its $x$-intercepts and increase their magnitude (effectively spreading them apart, in the case of this parabola). I can accomplish this by dividing $x$ in my function by a constant. For example, if I wanted to double the magnitude of my $x$-intercepts, I would divide by $2$. So I can create a new function, $s(x)$, using, $$s(x) = r(x/2) = \left(\frac{x}{2}\right)^2 - 1$$ If I multiply rather than dividing, then I can shrink the magnitude of (and, by extension, the distance between) my $x$-intercepts. So multiplying by $2$ is going to cut the magnitude of the intercepts of my function in half, creating $t(x)$, $$t(x) = r(2x) = (2x)^2 - 1$$ Hopefully, this figure will make these stretching/shrinking shenanigans a bit more clear. Figure 3 shows graphs of $r$, $s$, and $t$. This figure warrants a bit of discussion. The $x$-intercepts of the “base” function, $r$, are located at $1$ and $-1$. When I divide x by 2, stretching horizontally, as in $s$, the coordinates of these intercepts double and we see intercepts of $2$ and $-2$ instead. The same goes for the shrink, demonstrated by $t$. Multiplying by $2$ has the effect of cutting the intercepts in half. This function has intercepts at $0.5$ and $-0.5$. Also, notice through all of this that our $y$-intercept has remained unchanged. A horizontal stretch will not change the $y$-intercept. So, just to summarize here, you stretch or shrink the $x$-intercepts (a horizontal stretch/shrink) by multiplying/dividing $x$. To stretch and increase their magnitude, divide $x$, and to shrink and decrease their magnitude, multiply $x$. ### Vertical Stretching Next, we’ll do vertical stretching. This will move around our $y$-intercept, so let’s define a new function, $o(x) = (x+1)^2$, to use for our example. This will have a non-zero $y$-intercept so we can see what effects our stretching has. First, we will do a vertical stretch. If we multiply $o(x)$ itself by two, this will double the value of the $y$-intercept. Thus, we construct $p(x)$, $$p(x) = 2o(x) = 2(x+1)^2$$ Notice that–just like with the horizontal vs. vertical translations, the rules are strangely reversed for horizontal and vertical stretches. A horizontal stretch requires dividing by the stretch factor, while a vertical one requires multiplying by the stretch factor. We’ll address this later. Likewise, we can do a vertical shrink by dividing $o(x)$ itself by two. This will cut the value of the $y$-intercept in half. Let’s call this $q(x)$, $$q(x) = \frac{1}{2} o(x) = \frac{1}{2}(x+1)^2$$ Graphing all three of these cases will give us Figure 4, Notice that our base function has a $y$-intercept of $1$. Our stretching of this function by a factor of two properly doubles this intercept, moving it up to $2$, and shrinking by a factor of two cuts it in half, bringing it down to $0.5$. Note that, just like our horizontal stretch maintained the $y$-intercept, our vertical stretch maintained the $x$-intercepts. ### Horizontal and Vertical Reflections Finally, we can reflect the graph across the $x$ or $y$-axis. This is actually a special case of stretching, and occurs when we use a negative stretch factor. Let’s consider a sufficiently complex function to make this interesting, $$a(x) = x^3 + 2x - 1$$ We can reflect this graph horizontally by horizontally stretching it with a factor of $-1$. This will give us $b(x)$, $$b(x) = a(-x) = (-x)^3 + 2(-x) -1$$ And, likewise, we can flip it vertically by vertically stretching it with a factor of $-1$. Let’s call this one $c(x)$, $$c(x) = -a(x) = -(x^3 + 2x - 1)$$ Figure 5 shows the results of these reflections. Let’s look at it in detail. Our “base” function, $a(x)$, has an $x$-intercept at just shy of $0.5$, and a $y$-intercept at $-1$. How did our reflections affect these intercepts? Well, the horizontal reflection, shown by $b(x)$, reveals that the sign of our $x$-intercept flipped, while leaving the $y$-intercept the same. This basically follows the rules that we saw above with our stretches/shrinks. We multiplied the $x$-intercepts by our stretch factor, and left the $y$-intercepts alone. The same story is reflected (no pun intended) in our vertical reflection in $c(x)$. Both $c(x)$ and $a(x)$ have the same $x$-intercept, however $c(x)$ has a $y$-intercept at $1$. The $y$-intercepts were flipped! Note that this does lend itself to a somewhat backwards logic. A flipping of the $y$-intercepts is going to result in the graph reflecting across the $x$-axis, and a flipping of the $x$-intercepts will result in the graph reflecting across the $y$-axis. Again, there isn’t anything here that shouldn’t make sense once its been thought upon for a while–but it is odd and backwards at first glance. ## Transformations of Equations Okay–so I have just walked you through an (abbreviated) traditional approach to discussing this topic. I’m not fond of this approach because of the inconsistencies between the horizontal and vertical transformations. Why is it that these rules are reversed for the vertical ones, relative to the horizontal ones? And this approach also ties these transformations directly to functions, when they are far more powerful than that. We can apply these same transformations to many equations that are not functions at all! So, I find that discussing these in the context of functions specifically is a bit limiting. What I do instead is consider them more generically–as applies to any equation of two variables, which I’ll call $x$ and $y$ in keeping with tradition. This has the double benefit of removing the artificial dependence on functional notation, and also removing the inconsistencies between vertical and horizontal transformations. Here is what I do. First, I’ll recreate my translations, given $f(x) = x^2$. The first step is to abandon the function notation and revert back to using $y$, $$y = x^2$$ When you do this, something magical happens. Remember how we could shift this graph up by $2$ by adding two to the end? $$y = x^2 + 2$$ $$(y - 2) = x^2$$ Hey! Look at that. Rather than using this odd backwards convention for vertical shifts, we can subtract from $y$. The same deal applies for the stretch/shrink situation. A vertical stretch was achieved by multiplying our function by $2$, $$y = 2(x^2)$$ $$\frac{1}{2} y = x^2$$ We can revert this back to the exact same convention as a horizontal stretch if we look at it in terms of altering the $y$ variable, rather than the whole function. What this gives us, in effect, is a very consistent set of rules, 1. For a horizontal translation, add to $x$ to move left and subtract from $x$ to move right. 2. For a vertical translation, add to $y$ to move down and subtract from $y$ to move up. 3. For a horizontal stretch, multiply $x$ to decrease the $x$-intercepts and divide $x$ to increase them. 4. For a vertical stretch, multiply $y$ to decrease the $y$-intercepts and divide $y$ to increase them. The same rules now apply, where altering $x$ performs horizontal transformations, and altering $y$ performs vertical ones. In effect, we’ve cut the number of rules that need to be memorized in half! ### Transformations of a Circle This scheme has the added benefit of allowing us to apply these rules easily to equations that are not functions. For example, consider the humble unit circle, $$x^2 + y^2 = 1$$ We can apply our translation rules to move this circle around where-ever we want! Figure 6 demonstrates this–the rules work for this! And they’ll work for other “non-functions” too, such as hyperbolas and square roots. With circles specifically, something really cool happens when we start applying stretches. If we stretch it horizontally and vertically by the same amounts, then we increase the radius, as you might expect, What is particularly neat here is that examining this result closely can show why it is that the radius in the standard form a circle is squared. If we do this in general terms, $$\left( \frac{x}{R} \right)^2 + \left( \frac{y}{R} \right)^2 = 1$$ $$\frac{1}{R^2} \left( x^2 + y^2 \right) = 1$$ $$x^2 + y^2 = R^2$$ Things get even neater when we start to apply asymmetrical stretches. What happens if we stretch horizontally and vertically by different amounts? If we stretch horizontally by $a$ and vertically by $b$, then we end up with the general equation, $$\left( \frac{x}{a} \right)^2 + \left( \frac{y}{b} \right)^2 = 1$$ which you may recognize as the standard form of an ellipse. And we can derive it by simply starting with the “base” equation for the unit circle, and applying transformations. ### Transformations of a Line The standard form of most types of equation can be found by applying transformations like this. For example, consider a linear function, $$f(x) = x$$ $$y = x$$ We know that this graph will cross the point $(0,0)$. So, we can apply transformations to move this point anywhere we want. If we want to move that point that normally resides on the origin to a new location, $(x_1, y_1)$, then we can apply our translation rules to do this, $$(y - y_1) = (x - x_1)$$ Next, let’s do our stretching. We will stretch horizontally by a factor called $\text{run}$ and vertically by a factor called $\text{rise}$. This will get us, $$\frac{(y - y_1)}{\text{rise}} = \frac{(x - x_1)}{\text{run}}$$ $$(y - y_1) = \frac{\text{rise}}{\text{run}} (x - x_1)$$ $$(y - y_1) = m (x - x_1)$$ where $m = \frac{\text{rise}}{\text{run}}$. This equation, derived by applying basic transformations, should be readily recognized by any student of algebra. ### Transformations of a Parabola Let’s do the same thing with a parabola. Our base function is $f(x) = x^2$, and so we will start with $$y = x^2$$ Just as with the line, we have a point (the vertex in this case) sitting at $(0, 0)$, so we can shift that onto any point we want, say $(h, k)$, using these transformations, $$(y - k) = (x - h)^2$$ and we can apply our stretches too. Let’s call the horizontal stretch $u$ and the vertical stretch $v$, $$\frac{(y - k)}{v} = \frac{(x - h)^2}{u}$$ $$(y - k) = \frac{v}{u} (x - h)^2$$ $$(y - k) = a (x - h)^2$$ where $a = \frac{v}{u}$. Again, this equation should look very familiar. ## Conclusion I could keep going but I suspect that you get the point. When I was in junior high, these transformation rules seemed arbitrary and silly, and–to be honest–I completely forgot them in short order. I had a vague memory of rules like this existing, but until I opened Blitzer up to this section and refreshed my memory–I couldn’t have told you what they were. On my first pass, I was annoyed at the strange reversal of the rules between horizontal and vertical transformations. Why was it that way, and was there some way to avoid it? Simpler rules would be easier, after all, for my students to understand–or at least for them to memorize. And so, after some experimentation, I hit upon the system I described above. This allows me to not only reduce the set of rules that need to be learned, but also to use those rules to derive many of the equations that would traditionally need to be memorized. Which is a very satisfying result to me. After all, mathematics is not about memorization. It’s about applying a small set of rules in creative and interesting ways. And I think that the more opportunities we get to show this approach to math to our students, the better off they will be.
# How do you use the discriminant to determine the numbers of solutions of the quadratic equation 4x^2-20x + 25 = 0 and whether the solutions are real or complex? Jun 18, 2018 one real root root is $\frac{5}{2}$ #### Explanation: Quadratic equations will have one or two roots (either real or complex roots) Let our quadratic equation be $a {x}^{2} + b x + c = 0$ And ${x}_{1} , {x}_{2}$ be the roots of the equation Then the discriminant of this quadratic equation will be ${b}^{2} - 4 a c$ $\textcolor{red}{1.}$ when ${b}^{2} - 4 a c > 0$ we will have two different real roots ${x}_{1} \mathmr{and} {x}_{2}$are real and unequal $\left({x}_{1} \ne {x}_{2}\right)$ $\textcolor{red}{2.}$ when ${b}^{2} - 4 a c = 0$ we will have two equal real roots ${x}_{1} \mathmr{and} {x}_{2}$ are real and ${x}_{1} = {x}_{2}$ $\textcolor{red}{3.}$when ${b}^{2} - 4 a c < 0$ we will have two complex roots ${x}_{1} \ne {x}_{2}$ ${x}_{1} \mathmr{and} {x}_{2}$ are complex roots we get the values of $a , b , c$ as $a = 4 , b = - 20 , c = 25$ The Discriminant will be ${\left(- 20\right)}^{2} - 4 \times 4 \times 25 \implies 400 - 400 \implies 0$ the roots are real and equal (Only one root) Hence the equation must be a perfect square $4 {x}^{2} - 20 x + 25 \implies {\left(2 x - 5\right)}^{2}$ Hence the root of the equation is $2 x - 5 = 0 \implies x = \frac{5}{2}$ ${x}_{1} = {x}_{2} = \frac{5}{2}$
Skip to content share library_books # Sematext Monitoring FAQ ### General¶ What should I do if I can't find the answer to my question in this FAQ? Check the general FAQ for questions that are not strictly about Sematext Monitoring. If you can't find the answer to your question please email [email protected] or use our live chat. Is there a limit to how many servers/nodes/containers I can monitor with Sematext? There are no limits on the number of servers, nodes, or containers. Each Sematext Monitoring plan covers monitoring of a certain number of containers at no extra charge. What types of applications and infrastructure can Sematext monitor? See integrations. What are the various configuration files present in spm-client package? The spm-client package contains Infra and App Agent. The configuration files of these agents are: 1. /opt/spm/properties/st-agent.yml - Contains Infra Agent related configurations. 2. /opt/spm/spm-monitor/conf/spm-monitor-config-<YOUR_TOKEN>-<JVM_NAME>.properties - One file for each of the App Agent setup on this host. 3. /opt/spm/properties/agent.properties - Configurations that are common to both Infra and App Agents. Can I reduce the amount of logs generated by SPM monitor? Yes. You can make SPM monitor reduce the number of messages it logs every minute (otherwise it logs detailed info about its monitoring lifecycle). For App Agent, you can find the property files for your monitoring apps with: ls /opt/spm/spm-monitor/conf/spm-monitor-config-*.properties You can adjust one or more of them, depending on which application's monitor log output you want to reduce. At the bottom of those files add the following line: SPM_MONITOR_LOGGING_LEVEL=reduced For Infra Agent, you can configure the following properties in /opt/spm/properties/st-agent.yml logging: level: error write-events: false After you have made the changes, restart your SPM monitor. sudo service spm-monitor restart If you are running in-process agents, restart your application/java process. How frequently does SPM monitor collect metrics and can I adjust that interval? SPM monitor by default collects metrics every 10 seconds. As example, to reduce this frequency to 30 seconds, for App Agent simply add the following line to your SPM monitor properties files located in /opt/spm/spm-monitor/conf: SPM_MONITOR_COLLECT_INTERVAL=30000 The value is expressed in milliseconds. If you are adjusting it, we recommend setting it to 30000. With bigger values it is possible some 1-minute intervals would be displayed without the data in the UI. For Infra Agent, set interval property /opt/spm/properties/st-agent.yml. To set it to 30 seconds, set the value to 30s. After you have made the changes, restart your SPM monitor. sudo service spm-monitor restart If you are running in-process agents, restart your application/java process. How much memory is standalone App Agent using and can it be adjusted? By default, each standalone SPM monitor process is started with "-Xmx192m" setting. This means that its JVM heap will use 192 MB at most. In many cases SPM monitor doesn't actually need or use that much memory. If you want to be absolutely sure about it, simply lower this number in /opt/spm/spm-monitor/bin/spm-monitor-starter.sh, with the following variable (around line 63): JAVA_OPTIONS="$JAVA_OPTIONS -Xmx384m -Xms128m -Xss256k" You can change "-Xmx" part to "-Xmx192m" for example and see how it goes (check if SPM monitor is still working over the next few hours/days and is still not using much CPU). We don't recommend changing this setting if you are monitoring Elasticsearch or Solr clusters with a high number of indices or shards. In those cases SPM monitor has to parse and gather large amounts of metrics data returned by Elasticsearch or Solr. Contact us in chat or via [email protected] if you'd like to get more info and help making this adjustment. Can I use Sematext to monitor multiple applications running on the same server / VM? Yes. There are really 2 different scenarios here: 1. If each of those applications should be monitored under a different Monitoring App (e.g., you could have Solr running on your server along with some Java app and you want to monitor both - Solr would be monitored with Monitoring App of Solr or SolrCloud type, while the Java app would be monitored with Monitoring App of JVM type), just complete all installation steps (which are accessible from https://apps.sematext.com/ui/monitoring, click Actions > Install Monitor for app you are installing) for each of them separately. 2. If you want them monitored under the same Monitoring App (e.g., you have 3 Solr instances running on a server), you must use different JVM name for each of them. To do this, "1. Package installation" step should be run only once on this machine, while "2. Client configuration setup" step should be run once for each of the 3 Solr instances (installation instructions are accessible from https://apps.sematext.com/ui/monitoring, click Actions > Install Monitor for app you are installing). When running script /opt/spm/bin/setup-sematext in step "2. Client configuration setup", you should add jvm-name parameter (and value) at the end of parameter list, like this: sudo bash /opt/spm/bin/setup-sematext --monitoring-token 11111111-1111-1111-1111-111111111111 --app-type solr --agent-type javaagent --jvm-name solr1 In this example, we are setting up things for 3 separate Solr processes, monitored under JVM names: solr1, solr2 and solr3 (you choose any names), so this is adjusted command for solr1 instance. In the remaining sub-steps of "2. Client configuration setup", replace word "default" with the jvm-name value you just used. "2. Client configuration setup" step will have to be repeated N times, once for each monitored application (in our example 3 times with 3 different jvm-name values). Note: By using this kind of setup, you will be able to see JVM stats of all 3 processes separately (JVM name filter is used to do the filtering). When it comes to other, non-JVM stats, they will be aggregated into single value (for instance, request rate chart will show sum of request rate value of these 3 Solr instances). If you want to see even non-JVM stats separately, you will have to create 3 separate SPM Solr applications (one for each Solr instance running on this machine). Can I use Sematext to monitor my service which runs on Windows or Mac? SPM client installers currently exist only for Linux, however there is still a way to monitor your service. If you are OK with installing SPM client on separate linux box, and pointing it (more about that further below) to your service (which should be monitored) running on Windows or Mac, you can use Sematext to monitor all non-OS related metrics. You can stop and disable OS metric collector using: sudo service spm-monitor-sta stop sudo systemd disable spm-monitor-sta # systemd supported OS echo manual | sudo tee /etc/init/spm-monitor-sta.override # init.d supported OS Metrics specific to your process will be displayed (for example, in case of Solr, you will see all search, index, cache metrics along with all JVM metrics like pool memory, GC stats, JVM threads etc; as another example, in case of MySQL you will see all metrics related to connections, users, queries, handlers, commands, MyISAM, InnoDB, MySQL traffic, etc) . When monitoring Solr, Elasticsearch, HBase, Hadoop and other Java-based services, you will have an option to choose between using In-process (javaagent) or Standalone monitor. The workaround described here requires the use of standalone monitor variant. Here's what you'd need to do to see your metrics in SPM: 1. Install spm-client on any Linux box (you can use this box for anything else, it is needed here just to run a process which collects metrics and sends them to Sematext). Installation instructions are accessible from https://apps.sematext.com/ui/monitoring, just click Actions > Install Monitor for app are installing. In step 1 choose the "Other" tab to create a minimal installation. This will not install modules needed for monitoring OS metrics - if those modules were installed, they would collect OS metrics of your Linux box, which is probably something you don't need. If you decide to follow instructions from some other tab, keep in mind OS metrics displayed in Sematext UI will not be OS metrics of your Windows/Mac machine. 2. In step 2, if you are given a choice between In-process and Standalone monitor, choose Standalone monitor. It will use remote JMX to collect metrics from your Windows/Mac machine. Just follow instructions given on Standalone tab. The only difference you will want to make is in -Dspm.remote.jmx.url parameter used in monitor properties. You will know where to adjust it if you follow standard instructions for Standalone monitor. While this parameter will usually have a value like -Dspm.remote.jmx.url=localhost:3000, in this case you will have to replace localhost with the address of your Windows/Mac machine(s) by which it can be reached from your helper Linux box. 3. If you are monitoring something that doesn't offer a choice between In-process and Standalone monitor, installation instructions will explain where you can define your own address. Instructions typically use localhost, but you can instead use something like 10.1.2.3 or my-win-solr-server.mycompany.com to point to the machine that hosts the service you intend to monitor. In case you want to monitor multiple machines belonging to the same cluster in this way, you can still use SPM monitor installed on a single Linux helper box. Just do this: • Create a separate Monitoring App (https://apps.sematext.com/ui/integrations) for each of the machines which should be monitored • For each of those Monitoring Apps (and your monitored machines), go through installation process presented after the app was created (installation instructions are accessible from https://apps.sematext.com/ui/monitoring, just click Actions > Install Monitor for app your are installing). Note: Since each Monitoring App uses its own token, they all have slightly different installation commands. Besides different token, you will also use different addresses of Windows/Mac machines that host the monitored service. Can Sematext monitor metrics via http-remoting-jmx protocol (e.g. WildFly, JBoss, etc.)? Yes. The following steps are needed: • In /opt/spm/spm-monitor/conf/spm-monitor-config-YOUR_TOKEN-default.properties add the following property. SPM_MONITOR_JAR="/opt/spm/spm-monitor/lib/spm-monitor-generic.jar:/path/to/your/wildfly-client-all.jar" • Change the value of SPM_MONITOR_JMX_PARAMS in /opt/spm/spm-monitor/conf/spm-monitor-config-YOUR_TOKEN-default.properties. Of course, you can append to that additional JMX parameters, for example with password file location etc: SPM_MONITOR_JMX_PARAMS="-Dspm.remote.jmx.url=service:jmx:remote+http://localhost:9990" • Restart spm-monitor with: sudo service spm-monitor restart At this point the metrics will start appearing in charts. If they don't, run the diagnostics script to get fresh output of errors: sudo bash /opt/spm/bin/spm-client-diagnostics.sh If you see errors similar to: Caused by: javax.security.sasl.SaslException: Authentication failed: all available authentication mechanisms failed: java.io.FileNotFoundException: /opt/wildfly/standalone/tmp/auth/local8363680782928117445.challenge (Permission denied) javax.security.sasl.SaslException: DIGEST-MD5: Cannot perform callback to acquire realm, authentication ID or password [Caused by javax.security.auth.callback.UnsupportedCallbackException it means there is permissions issue on WildFly dirs. There are two ways to get around this: 1. Run SPM monitor with the same user that is running WildFly. To do that, add an entry like this to the end of /opt/spm/spm-monitor/conf/spm-monitor-config-YOUR_TOKEN-default.properties: SPM_MONITOR_USER="wildfly" After that restart spm-monitor with: sudo service spm-monitor restart 1. Change permissions for the problematic directory, adjusting the path to match your environment: chmod 777 /opt/wildfly/standalone/tmp/auth This approach is not encouraged because of the obvious security problem, so use such approach only while testing, or if other options are not possible. As usual, restart spm-monitor after this change. When should I run Standalone and when Embedded SPM monitor? Standalone SPM monitor runs as a separate process, while Embedded monitor runs embedded in the Java/JVM process. Thus, if you are monitoring a non-Java application, Standalone monitor is the only option. Standalone monitor is a bit more complex to set up when one uses it to monitor Java applications because it typically requires one to enable out-of-process JMX access, as described on Standalone SPM monitor page. With Embedded monitor this is not needed, but one needs to add the SPM agent to the Java command-line and restart the process of the monitored application. When running Standalone monitor one can update the SPM monitor without restarting the Java process being monitored, while a restart is needed when Embedded SPM monitor is being used. To be able to trace transactions or database operations you need to use the Embedded SPM monitor. Can I use Sematext for (business) transaction tracing? Yes, see Transaction Tracing. Can I move SPM client to a different directory? Yes. Soft move script moves all SPM files/directories to a new location, but symlinks /opt/spm to the new location. Use this script if you are OK with having /opt/spm symlinked. This script is recommended for most situations since it keeps your SPM client installation completely in line with standard setup (all standard SPM client commands and arguments are still valid). It accepts 1 parameter: new directory where SPM client should be moved to (if such directory doesn't exist, it will be created) sudo bash /opt/spm/bin/move-spm-home-dir-soft.sh /mnt/some_dir And that is it. Is there an HTTP API? Yes, see API Reference. I have multiple Monitoring Apps installed on my machine, can I uninstall just one of them? Yes, you can use the following command for that (it accepts only one parameter, the token of the Monitoring App you want to uninstall): sudo bash /opt/spm/bin/spm-remove-application.sh 11111111-1111-1111-1111-111111111111 Can I disable SPM agent without uninstalling the SPM client? Yes, just find its .properties file in /opt/spm/spm-monitor/conf and add to it: SPM_MONITOR_ENABLED=false After that restart the monitor to apply the change. In case of standalone agent, run: sudo service spm-monitor restart And in case of in-process agent, just restart the service that has this monitor's javaagent parameter. ### Sharing¶ How can I share my Sematext Apps with other users? See sharing FAQ. What is the difference between OWNER, ADMIN, BILLING_ADMIN, and USER roles? See info about user roles in sharing FAQ. ### Agent Automation¶ Is there an Ansible Playbook for the SPM client? Yes, see the Install and Configure playbooks, with examples. Is there a Puppet Module for the SPM client? Yes, see the Install and Configure module, with examples. Is there a Chef Recipe for the SPM client? Yes, see SPM client Chef Recipe example. ### Agent Updating¶ How do I upgrade the SPM client? If you have previously installed the SPM client package (RPM, Deb, etc.), simply upgrade via apt-get (Debian, Ubuntu, etc.), or yum (RedHat, CentOS, etc.), or zypper (SuSE). Debian/Ubuntu: # NOTE: this will update the sematext gpg key wget -O - https://pub-repo.sematext.com/ubuntu/sematext.gpg.key | sudo apt-key add - # NOTE: this does not update the whole server, just spm-client sudo apt-get update && sudo apt-get install spm-client RedHat/CentOS/... # NOTE: this will update sematext repo file sudo wget https://pub-repo.sematext.com/centos/sematext.repo -O /etc/yum.repos.d/sematext.repo # NOTE: this does not update the whole server, just spm-client sudo yum clean all && sudo yum update spm-client SuSE: sudo zypper up spm-client After that is done, also do: • if you are using SPM monitor in in-process/javaagent mode - restart monitored server (restart your Solr, Elasticsearch, Hadoop node, HBase node... Exceptions: In case of Memcached, Apache and plain Nginx - no need to restart anything; in case of Redis only standalone SPM monitor exists so check below how to restart it) OR • if you are using standalone SPM monitor, restart it with: sudo service spm-monitor restart Note: In case of Memcached, Apache and plain Nginx - after completing upgrade steps described above, you must also run commands described in Step 2 - Client Configuration Setup (which is accessible from https://apps.sematext.com/ui/monitoring, click Actions > Install Monitor for app you have installed) ### Agent Uninstalling¶ How do I uninstall the SPM client? On servers where you want to uninstall the client do the following: 1. remove spm-client, for instance: sudo apt-get purge spm-client OR sudo yum remove spm-client 2. after that, ensure there are no old logs, configs, etc. by running the following command: sudo rm -R /opt/spm 3. if you used in-process (javaagent) version of monitor, remove "-javaagent" definition from startup parameters of process which was monitored Note: in case you used installer described on "Other" tab (found on https://apps.sematext.com/ui/monitoring, click Actions > Install Monitor for app your are installing), instead of commands from step 1 run: sudo bash /opt/spm/bin/spm-client-uninstall.sh . After that proceed with steps 2 and 3 described above. ### Alerts¶ Can I send alerts to HipChat, Slack, Nagios, or other WebHooks? See alerts FAQ. What are Threshold-based Alerts? See alerts FAQ. What is Anomaly Detection? See alerts FAQ. What are Heartbeat Alerts? See alerts FAQ. ### Troubleshooting¶ Can I enable debugging in the SPM agent? Yes. For App Agent, simply add or edit the SPM_MONITOR_LOGGING_LEVEL property in any of the /opt/spm/spm-monitor/conf/spm-monitor/*.properties files and restart the agent (or the process the agent is attached to). Available levels are: FATAL, ERROR, WARN, INFO, DEBUG, TRACE For Infra Agent, edit logging.level property in /opt/spm/properties/st-agent.yml file. Available levels are: panic, fatal, error, warn, info, debug Can I install SPM client on servers that are behind a proxy? Yes. If you are installing the RPM, add this to /etc/yum.conf: proxy=http or https://proxy-host-here:port proxy_username=optional_proxy_username proxy_password=optional_proxy_password If you are using apt-get, set the http_proxy environmental variable: export http_proxy=http://username:password@yourproxyaddress:proxyport Can SPM client send data out from servers that are behind a proxy? Yes. You can update the proxy settings using the following command: sudo bash /opt/spm/bin/setup-env --proxy-host "HOST" --proxy-port "PORT" --proxy-user "USER" --proxy-password "PASSWORD" Can I change the region settings for the SPM agent installation? Yes. By default the region is set to US. You can change it to EU using: sudo bash /opt/spm/bin/setup-env --region eu When installing SPM client, I see "The certificate of pub-repo.sematext.com is not trusted" or similar error. How can I avoid it? There can be multiple reasons for this, most likely: • system time on your machine is not correct and adjusting it should fix the problem • you are installing SPM client as root user - in some cases that can cause this error and installing SPM client as a different user may help • you are missing the ca-certificates package. Use one of the following commands to install it: sudo apt-get install ca-certificates sudo yum install ca-certificates If none of the above eliminates the problem for you try adding a flag to avoid certificate checking. In case the command with wget is failing in your case, add --no-check-certificate as wget argument. If command with curl is failing, add -k flag. How do I create the diagnostics package? If you are having issues with Sematext Monitoring, you can create diagnostics package on affected machines where SPM client was installed by running: sudo bash /opt/spm/bin/spm-client-diagnostics.sh The resulting package will contain all relevant info needed for our investigation. You can send it, along with short description of your problem, to [email protected] or contact us in chat. I see only my system metrics (e.g. CPU, Memory, Network, Disk...), but where is the rest of my data? Make sure you have followed all steps listed on installation instructions page. Package installation steps should be done first, followed by Client configuration setup. If you have done that and you still don't see application metrics, run sudo bash /opt/spm/bin/spm-client-diagnostics.sh to generate diagnostics package and send it to [email protected] with description of your problem. I do not see any system metrics (e.g. CPU, Memory, Network, Disk), what could be the problem? Make sure you have followed all steps listed on installation instructions page. It is possible you missed Client configuration setup step. If you have done that and you still don't see application metrics, run sudo bash /opt/spm/bin/spm-client-diagnostics.sh to generate diagnostics package and send it to [email protected] with description of your problem. I am using trying to monitor Solr / Elasticsearch. My Request Rate and Latency charts are empty? If other Solr/ES charts are showing the data, it is most likely that there were no requests sent to your Solr/ES in the time range you are looking at. Try sending some queries and see if request rate/latency charts will show them. If they don't, please send us an email to [email protected] or contact us in chat. I am trying to monitor Elasticsearch. My HTTP Metrics charts are empty? If other ES charts are showing data, it is most likely that HTTP metrics aren't enabled in elasticsearch.yaml. Setting http.enabled: true will fix this issue. If this doesn't fix it, please send us an email to [email protected] or contact us in chat. I am trying to monitor Elasticsearch. Node stats are not tracked over time? Make sure to set the node.name value in in elasticsearch.yaml. Elasticsearch will otherwise generate random a node name each time an instance starts, making tracking node stats over time impossible. If this doesn't fix your issue, please send us an email to [email protected] or contact us in chat. I am not seeing any data in Monitoring charts. How do I check if network connectivity is OK? SPM agents send the data to Sematext via HTTP(S) so it is important that servers where you install SPM agent can access the internet. Things to check to ensure network connectivity is ok: 1. Try connecting to spm-receiver.sematext.com / spm-receiver.eu.sematext.com (if using Sematext Cloud Europe) with the following command: nc -zv -w 20 spm-receiver.sematext.com 443 The output should show something like: Connection to spm-receiver.sematext.com 443 port [tcp/https] succeeded! In case you see some other result: • if your server requires proxy to access the internet, you can define its settings using /opt/spm/bin/setup-env command. After that restart SPM agent. • if firewall is used to protect your server, it may be blocking outbound traffic from it. SPM agent sends the data over port 443, so please ensure with your network admins that port 443 is open for outbound traffic • check your DNS (see below) 2. Check if your DNS has correct entries for SPM Receiver: nslookup spm-receiver.sematext.com nslookup spm-receiver.eu.sematext.com The output of this command should look like this, although the IP addresses and names may be somewhat different, as they change periodically: Server: 127.0.1.1 Address: 127.0.1.1#53 Non-authoritative answer: spm-receiver.sematext.com canonical name = SPM-Prod-Receiver-LB-402293491.us-east-1.elb.amazonaws.com. Name: SPM-Prod-Receiver-LB-402293491.us-east-1.elb.amazonaws.com Address: 50.16.206.179 Name: SPM-Prod-Receiver-LB-402293491.us-east-1.elb.amazonaws.com Address: 107.20.222.136 If you see different output, you may want to check with your network admins if everything with your DNS is ok or may need adjustment to be able to reach spm-receiver.sematext.com / spm-receiver.eu.sematext.com. I don't see any of my metrics / I am getting errors when starting SPM monitor or when starting my service which uses javaagent version of SPM monitor. What should I check? Here are a few things to check and do: 1. Log into your monitored servers and make sure there are running SPM monitor processes (there should be more than one of them) 2. Check if system time is correct. If not, you should adjust the time, restart the SPM monitor with: sudo service spm-monitor restart and restart any other javaagent (in-process) based SPM monitors by restarting your server which is being monitored. 3. Check network connectivity as described elsewhere in the FAQ 4. Make sure disks are not full 5. Make sure user spmmon can have more than 1024 files open: sudo vim /etc/security/limits.conf spmmon - nofile 32000 and sudo vim /etc/pam.d/su session required pam_limits.so Restart SPM monitor after the above changes. 1. Check if hostname of your server is defined in /etc/hosts 2. If you are starting your Jetty (or some other server) with command like java ... -jar start.jar ... and using inprocess (javaagent) version of monitor, make sure -D and -javaagent definitions occur before "-jar start.jar" part in your command 3. If none of the suggestions helped, run sudo bash /opt/spm/bin/spm-client-diagnostics.sh to generate diagnostics package and send it to [email protected] My server stopped sending metrics, so why do I still see it under Hosts Filter? Filters have 2 hour granularity, which means that a server will be listed under Hosts filter until 2 hours since it last sent data have passed. For example, if a server stopped sending data at 1 PM and if at 2:30 PM you are looking at the last 1 hour of data (for a period from 1:30 PM until 2:30 PM) you will not see data from this server on the graph, but you will still see this server listed under the Hosts filter until 3 PM. After 3 PM this server should disappear from the Hosts filter. I rebooted my server and now I don't see any data in my graphs. What should I check? Here are a few things to check and do: 1. Make sure SPM monitor is running: sudo service spm-monitor restart 2. Make sure disk is not full: df -h 3. Make sure maximal open files limit was not reached: see "I registered for SPM more than 5 minutes ago and I don't see any of my data, what should I check?" How come Disk Space Usage report shows more free disk space than df command? Sematext Monitoring reports both free and reserved disk space as free, while df does not include reserved disk space by default. I changed my server's hostname and now I don't see new data in my graphs. What should I do? Simply restart SPM monitor: sudo service spm-monitor restart and restart any process which is using javaagent/in-process version of SPM monitor. Can I specify which Java runtime the SPM monitor should use? Yes, you can edit the /opt/spm/properties/java.properties file where you can specify the location of Java you want the SPM monitor to use. Can SPM client use HTTP instead of HTTPS to send metrics from my servers? Yes, although we recommend using HTTPS. Sematext agents by default uses HTTPS to send metrics data to Sematext. If you prefer to use HTTP instead (for example, if you are running Sematext Enterprise on premises or if you don't need metric data to be encrypted when being sent to Sematext over the Internet), you can adjust that in /opt/spm/properties/agent.properties by changing protocol to http in property: server_base_url=https://spm-receiver.sematext.com ### Security¶ What information are SPM agents sending? SPM agents are sending metrics and data used for filtering those metrics, such as hostnames (which can be obfuscated). To see what exactly is being sent you can use tcpdump or a similar tool to sniff the traffic. SPM agents ship data via HTTPS, but can also ship it via HTTP. Java-based agents let you change the protocol in appropriate files under /opt/spm/properties/ directory, while Node.js-based agents let you change it via the SPM_RECEIVER_URL environmental variable. Can hostnames in Sematext Monitoring be obfuscated or customized? Yes, you can obfuscate or alias hostnames. This lets you: • never send your real hostnames over the internet • use custom hostname in Sematext UI in case original hostname is too cryptic (e.g. have Sematext show "my-solr-host1" instead of "ip-12-123-321-123") To achieve this, after SPM client is installed, open /opt/spm/properties/agent.properties file and just add desired hostname to the value of property hostname_alias, e.g. hostname_alias=web1.production. After that, restart SPM monitor with: sudo service spm-monitor restart and restart any Java process which was using javaagent/in-process version of SPM monitor. Note: • old data will still be seen in Sematext under the old hostname, while new data (after hostname change) will be displayed under the new hostname • if you are installing SPM client for the first time and you want to be 100% sure its original hostname never leaves your network, define your hostname alias in agent.properties file immediately after you complete "1. Package installation" step and before you begin with "2. Client configuration setup" step (installation instructions can be accessed from https://apps.sematext.com/ui/monitoring, click Actions > Install Monitor on app you are installing) ### Billing¶ How do you bill for infrastructure and server monitoring? Usage is metered hourly on a per-agent basis. For example: If you send metrics from a server A to Monitoring App Foo between 01:00 and 02:00 that's$0.035 for the Standard plan. If another agent is monitoring something else, even if that is running on the same server A, and sending metrics to a different Monitoring App Bar, that's another $0.035. If you are not sending metrics from a server A for a Monitoring App Foo between 02:00 and 03:00 then you pay$0 for that hour. A single agent monitoring 24/7 will end up being ~ $25/month. If you run another agent on another server it will be 2 x ~$25/mo. How do you bill for Docker container monitoring? Docker monitoring is based on the base price and per-container price. The base price includes monitoring of a Docker host and free monitoring of up to N containers. Per-container price is applied only if you run more than N containers per host. The number of containers per host is averaged for the whole account. The base price and the number of containers included in it depends on the plan. Note that monitoring of Docker host and containers is independent of monitoring of applications you run in those containers. Containerized applications monitored by Sematext are metered as separate hosts. In other words, whether the monitored application is running in a container or in a VM or directly on a server or in a public cloud instance is the same as far as metering and billing is concerned. For plans and price details see https://sematext.com/spm/pricing. Which credit cards are accepted? See billing FAQ. Can I be invoiced instead of paying with a credit card? See billing FAQ. How often will I get billed? See billing FAQ. Can the billing email be sent to our Accounts Payable/Accounting instead of me? See billing FAQ. Do I have to commit or can I stop using Sematext at any time? See billing FAQ. Can I get invoices? See billing FAQ.
New Software and Platforms Bilateral Contracts and Grants with Industry Partnerships and Cooperations Bibliography PDF e-Pub ## Section: New Results ### Computational Anatomy #### Statistical Analysis of Diffusion Tensor Images of the Brain Participants : Marco Lorenzi [Correspondent] , Nicholas Ayache, Xavier Pennec. Image non-linear registration, Longitudinal modeling, Alzheimer's disease Alzheimer's disease is characterized by the co-occurrence of different phenomena, starting from the deposition of amyloid plaques and neurofibrillary tangles, to the progressive synaptic, neuronal and axonal damages. The brain atrophy is a sensitive marker of disease progression from pre-clinical to the pathological stages, and computational methods for the analysis of magnetic resonance images of the brain are currently used for group-wise (cross-sectional) and longitudinal studies of pathological morphological changes in clinical populations. The aim of this project is to develop robust and effective computational instruments for the analysis of longitudinal brain changes. In particular novel methods based on non-linear diffeomorphic registration have been investigated in order to reliably detect and statistically analyze pathological morphological changes [5] (see Fig.8 ). This project is also focused in the comparison of the trajectories of longitudinal morphological changes [31] estimated in different patients. This is a central topic for the development of statistical atlases of the longitudinal evolution of brain atrophy. Figure 8. Modeled longitudinal brain changes in normal aging extrapolated from -15 to 18 years, and corresponding observed patient anatomies with estimated morphological age and age shift (biological age in parenthesis). Our modeling framework describes meaningful anatomical changes observed in clinical groups. #### Statistical Learning via Synthesis of Medical Images Participants : Hervé Lombaert [Correspondent] , Nicholas Ayache, Antonio Criminisi. This work has been partly supported by a grant from Microsoft Research-Inria Joint Centre, by ERC Advanced Grant MedYMA (on Biophysical Modeling and Analysis of Dynamic Medical Images) Statistical learning, Synthesis Machine learning approaches typically require large training datasets in order to capture as much variability as possible. Application of conventional learning methods on medical images is difficult due to the large variability that exists among patients, pathologies, and image acquisitions. The project aims at exploring how realistic image synthesis could be used, and improve existing machine learning methods. First year tackled the problem of better exploiting existing training sets, via a smart modeling of the image space (Fig. 9 ), and applying conventional random forests using guided bagging [21] . Synthesis of complex data, such as cardiac diffusion images (DTI), was also done. Synthesis of complex shapes, using spectral graph decompositions, is currently on-going work. The modeling of shapes also includes novel representations based on the spectral decomposition of images[4] which are more robust to large deformations when comparing multiple patients. Figure 9. Laplacian Forest, where images are here represented as points, and where decision trees are trained using the spatial organization of these images on a reduced space. #### Statistical analysis of heart shape, deformation and motion Participants : Marc-Michel Rohé [correspondent] , Xavier Pennec, Maxime Sermesant. This work was partly supported by the FP7 European project MD-Paedigree and by ERC Advanced Grant MedYMA (on Biophysical Modeling and Analysis of Dynamic Medical Images) Statistical analysis, Registration, Reduced order models, Machine learning The work aims at developping statistical tools to analyse cardiac shape, deformation, and motion. In particular, we are interested in developping reduced order models so that the variability within a population described by a complex model can be reduced into few parameters or modes that are clinically relevant. We use these modes to represent the variability seen in a population and to relate this variability with clinical parameters, and we build group-wise statistics which relate these modes to a given pathology. We focus on cardiomyopathies and the cardiovascular disease risk in obese children and adolescents. #### Geometric statistics for Computational Anatomy Participants : Nina Miolane [Correspondent] , Xavier Pennec. Lie groups, pseudo-Riemannian, Statistics, Computational Anatomy Figure 10. Structure of Lie groups on which one can define a bi-invariant metric or a bi-invariant pseudo-metric. The black levels of the tree represent the adjoint decomposition of the Lie algebra, the dashed lines represent the possible algebraic types of the substructures. Note the recursive construction in the pseudo-Riemannian case. #### Statistical Analysis of Diffusion Tensor Images of the Brain Participants : Vikash Gupta [correspondent] , Nicholas Ayache, Xavier Pennec. Population specific multimodal brain atlas for statistical analysis of white matter tracts on clinical DTI. Figure 11. A: Probabilistic parcellation of corpus callosum with blue and red being the maximum and minimum probability regions respectively. B: Multivariate statistics on white matter tracts. The red-yellow sections show statistically significant differences #### Longitudinal Analysis and Modeling of Brain Development Participants : Mehdi Hadj Hamou [correspondent] , Xavier Pennec, Nicholas Ayache. This work is partly funded through the ERC Advanced Grant MedYMA 2011-291080 (on Biophysical Modeling and Analysis of Dynamic Medical Images). Brain development, adolescence, longitudinal analysis, non-rigid registration algorithm, extrapolation, interpolation This work is divided into 2 complementary studies about longitudinal trajectories modeling: • Diffeomorphic registration parametrized by Stationary Velocity Fields (SVF) is a promising tool already applied to model longitudinal changes in Alzheimer's disease. However, the validity of these model assumptions in faithfully describing the observed anatomical evolution needs to be further investigated. In this work, we thus analyzed the effectiveness of linear regression of SVFs in describing anatomical deformations estimated from past and future observations of the MRIs. • Due to the lack of tools to capture the subtle changes in the brain, little is known about its development during adolescence. The aim of this project is to provide quantification and models of brain development during adolescence based on diffeomorphic registration parametrized by SVFs (see Fig.12 ). We particularly focused our study on the link between gender and the longitudinal evolution of the brain. This work was done in collaboration with J.L. Martinot et H. Lemaître (Inserm U1000). Figure 12. Pipeline for the longitudinal analysis of brain development during adolescence.
# Q-cos sub q (Redirected from Q-cos) The function $\cos_q$ is defined for $|z|<1$ by $$\cos_q(z)=\dfrac{e_q(iz)+e_q(-iz)}{2},$$ where $e_q$ denotes the $q$-exponential $e$ and $(q;q)_{2k}$ denotes the $q$-Pochhammer symbol.
# Tag Info 4 Try to remove the curly braces around \x in \pgfmathsetmacro\xsh{...}: \documentclass[a4paper]{article} \usepackage{tikz} \usepackage{ifthen}\newboolean{color} \begin{document} \begin{figure}[h] \begin{center} %%%%%%%% Set function values %%%%%%% % Set the x = a and x = b values of the % domain here where a <= x <= b. ... 11 Pgfmath offers also boolean compositions hence you can fall back to it and test the resulting boolean with \ifnum; \setbeamertemplate{background}{% \begin{tikzpicture} \useasboundingbox (0,0) rectangle(\the\paperwidth,\the\paperheight); \pgfmathparse{\value{page}<33 &&\value{page}>1?int(1):int(0)} \ifnum\pgfmathresult>0\relax% Not the title ... 13 The primitive \ifnum accepts only a single test, it's not able to do 1 < \value{page} < 33 Thus you need to make in more steps: \setbeamertemplate{background}{% \begin{tikzpicture} \useasboundingbox (0,0) rectangle(\the\paperwidth,\the\paperheight); \ifnum \value{page}=1 \fill[color=MedianBrown] (0,1.2) rectangle ... 4 You need to place the \random generation outside the comparison: \documentclass{article} \usepackage{xifthen} \usepackage[first=0, last=1, quiet]{lcg} \begin{document} rand sequence: \rand\arabic{rand}\rand\arabic{rand}% \rand\arabic{rand}\rand\arabic{rand}\rand\arabic{rand}% \rand\arabic{rand}\rand\arabic{rand}\rand\arabic{rand} ... 3 Though it's most likely too late for the OP, I just worked out my own switch and thought I'd share it here for future readers. My solution uses solely the package xifthen (ifthen suffices too, but I already had xifthen installed...). % Switch implementation \usepackage{xifthen} \newcommand{\ifequals}[3]{\ifthenelse{\equal{#1}{#2}}{#3}{}} ... 2 In order to perform a length test using ifthen you need to explicitly state it: \ifthenelse{\lengthtest{<dimen>?<dimen>}}{<true>}{<false>} where ? is one of <, = or >. I've adopted a more old-school way of testing dimensions using \ifdim: \newcommand{\choice}[4]{% \settowidth\widthcha{AM.#1}\setlength{\widthch}{\widthcha}% ... 7 \ifthenelse is "normal" LaTeX code. Therefore you can not use this command inside a TikZ path specification. But since the node text is put in a normal TeX box you can use \ifthenelse inside the node text. So you can try \documentclass[tikz,margin=5mm]{standalone} \usetikzlibrary{graphs,graphs.standard,quotes} \usepackage{ifthen} \begin{document} ... 5 You can pre-compute an array of points using \ifthenelse, then use them in the \foreach. I have no idea what you are trying to accomplish with the second example. \documentclass{article} \usepackage{mathptmx} \usepackage{tikz} \usepackage{verbatim} \usetikzlibrary{arrows,shapes,graphs,graphs.standard,quotes} \usepackage{calc}% http://ctan.org/pkg/calc ... 7 Here is a solution for this problem, using (perhaps abusing) only expl3 features. I'm not really sure about the usefulness of this code. ;-) \documentclass{article} \usepackage{xparse} \ExplSyntaxOn \NewDocumentCommand{\NewOnceMacro}{m m m} { \grill_new_once_macro:Nnn #1 { #2 } { #3 } } % an addition to the kernel functions \cs_set_eq:NN \use_none: ... 6 You could do this but it's so wrong, why mix expl3 and etoolbox tests and why all the toggle stuff, if you want \foo to just execute once define it to be \def\foo{hello\let\foo\@empty} No need for a separate toggle macro. but anyway: %% Uncomment the following \def to get the failing test case. \def\UseXparseForDefiningMacro{}% Works if commented out ... 5 The syntax is \ifx\a\b stuff \else other stuff \fi and no grouping is implied. By going \ifx\a\b{ stuff }\else{ other stuff }\fi You were keping both branches in local groups. Note also that you are missing lots of % from ends of lines if you do not want to introduce white space into the output. 5 \def\foo#1{% \ifcase\numexpr#1 - a\relax case a \or case b \or case c \fi} then \foo{b} probably expands to case b 4 I use something like this: \ProvidesClass{superclass} \RequirePackage{xkeyval}% better option processing - there are many other possibilities as you probably know so just adapt as needed \def\myclasstype{article}% make sure a default is defined \DeclareOptionX{article}{% \gdef\myclasstype{article}} \DeclareOptionX{report}{% ... 3 An alternative way that doesn't require shell escape. Let's say your file is called file1.tex. Then you can organize it as follows: \documentclass{book} \providecommand{\INCLUDE}[1]{} \begin{document} First paragraph of first file. \INCLUDE{file2} Last paragraph of first file. \end{document} If you call the LaTeX run by pdflatex ... 7 Here is a \providelength command that will define a new length if not already defined, but also checks whether the command passed as argument has been defined with \newlength, in order to issue an error message if you try to use, say, \providelength{\textit}. \documentclass{article} \makeatletter \newcommand{\providelength}[1]{% ... 5 Your answer appears to work. A simpler method might consist of first \let-ting the length variable in question to \relax and then applying \newlength to it. Put differently, if you can't remember if a certain macro name has been used before to denote a length parameter (or anything else, really!) and if you're comfortable with (re)using it anyway, you can ... 2 Ok, so first I looked up the definitions of \newlength and further using texdef: $texdef -t latex \newlength \newlength: macro:#1->\@ifdefinable #1{\newskip #1}$ texdef -t latex \@ifdefinable \@ifdefinable: \long macro:#1#2->\edef \reserved@a {\expandafter \@gobble \string #1}\@ifundefined \reserved@a {\edef \reserved@b {\expandafter \@carcube ... 4 You are using the book class with chapters. Then it is easy with \includeonly \documentclass{book} \includeonly{file2,file3}% controlls what will be included \begin{document} First paragraph of first file. \include{file2} \include{file3} Last paragraph of first file. \end{document} You can also use an external file include.cfg which has only the ... 7 Edit Improved version There are two basically two problems: \ifstrequal does not expand its arguments The content of the environment variable has a trailing whitespace character at the end, which means yes will become yes' ', so that the test fails. I could not figure out the reason up to now. By usage of the xstring package command \StrGobbleRight ... Top 50 recent answers are included
Cheap and Secure Web Hosting Provider : See Now # [Solved]: Is it possible to reduce the number of variables in bin packing? , , Problem Detail: The bin packing problem can be formulated as: \begin{align} & \underset{x,y}{\min} & & B = \sum_{i=1}^n y_i\\ & \text{subject to} & & B \geq 1,\\ & & & \sum_{j=1}^n a_j x_{ij} \leq V y_i,\forall i \in \{1,\ldots,n\}\\ & & & \sum_{i=1}^n x_{ij} = 1,\forall j \in \{1,\ldots,n\}\\ & & & y_i \in \{0,1\},\forall i \in \{1,\ldots,n\},\\ & & & x_{ij} \in \{0,1\}, \forall i \in \{1,\ldots,n\}, \, \forall j \in \{1,\ldots,n\},\\ \end{align} where $y_i = 1$ if bin $i$ is used and $x_{ij} = 1$ if item $j$ is put into bin $i$. Why do we use $x_{ij}$ and $y_{i}$? We can just use $x_{ij}$. • if $x_{ij}=1$ than bin $i$ is used and item $j$ is put into bin $i$; and • if $x_{ij}=0$ than bin $i$ is not used. Is that correct? If so, why can't I find any formulation with only $x_{ij}$? #### Answered By : Yuval Filmus An integer program, or more properly an integer linear program, consists of a linear program together with integrality constraints stating that some of the variables are integers. As such, its objective function is always a linear combination of variables. When the objective is minimization, it is admissible to have $\max$ operators (appearing positively) in the objective function. This means that there is an equivalent proper integer program. This program is obtained by introducing auxiliary variables, just as the variables $y_i$ are introduced in your example to implement $\max_j x_{ij}$. To answer your question, it all depends on what you consider as an integer program. The standard definition only allows linear objective functions, and in this case the $y_i$ are necessary. If you also allow $\max$ operators in the objective function, then the $y_i$ are not necessary.
Paul's Online Notes Home / Calculus II / Integration Techniques / Trig Substitutions Show Mobile Notice Show All Notes Hide All Notes Mobile Notice You appear to be on a device with a "narrow" screen width (i.e. you are probably on a mobile phone). Due to the nature of the mathematics on this site it is best views in landscape mode. If your device is not in landscape mode many of the equations will run off the side of your device (should be able to scroll to see them) and some of the menu items will be cut off due to the narrow screen width. ### Section 7.3 : Trig Substitutions For problems 1 – 8 use a trig substitution to eliminate the root. 1. $$\sqrt {4 - 9{z^2}}$$ Solution 2. $$\sqrt {13 + 25{x^2}}$$ Solution 3. $${\left( {7{t^2} - 3} \right)^{\frac{5}{2}}}$$ Solution 4. $$\sqrt {{{\left( {w + 3} \right)}^2} - 100}$$ Solution 5. $$\sqrt {4{{\left( {9t - 5} \right)}^2} + 1}$$ Solution 6. $$\sqrt {1 - 4z - 2{z^2}}$$ Solution 7. $${\left( {{x^2} - 8x + 21} \right)^{\frac{3}{2}}}$$ Solution 8. $$\sqrt {{{\bf{e}}^{8x}} - 9}$$ Solution For problems 9 – 16 use a trig substitution to evaluate the given integral. 1. $$\displaystyle \int{{\frac{{\sqrt {{x^2} + 16} }}{{{x^4}}}\,dx}}$$ Solution 2. $$\displaystyle \int{{\sqrt {1 - 7{w^2}} \,dw}}$$ Solution 3. $$\displaystyle \int{{{t^3}{{\left( {3{t^2} - 4} \right)}^{\frac{5}{2}}}\,dt}}$$ Solution 4. $$\displaystyle \int_{{ - 7}}^{{ - 5}}{{\frac{2}{{{y^4}\sqrt {{y^2} - 25} }}\,dy}}$$ Solution 5. $$\displaystyle \int_{1}^{4}{{2{z^5}\sqrt {2 + 9{z^2}} \,dz}}$$ Solution 6. $$\displaystyle \int{{\frac{1}{{\sqrt {9{x^2} - 36x + 37} }}\,dx}}$$ Solution 7. $$\displaystyle \int{{\frac{{{{\left( {z + 3} \right)}^5}}}{{{{\left( {40 - 6z - {z^2}} \right)}^{\frac{3}{2}}}}}\,dz}}$$ Solution 8. $$\displaystyle \int{{\cos \left( x \right)\sqrt {9 + 25 \sin^2\left( x \right)} \,dx}}$$ Solution
# Rule of inference and truth table issue Let P – Light is on Q – The switch is down R – The door is open If the switch is down then the light is on. If the switch is not down then the door is open. If the door is open then the light is on. Therefore the light is on; Prove or disprove the argument using, i. Rule of inference ii. Truth table \ My answer ..... \ Rule of inference A. Q⇒P B. ~Q⇒R C. R⇒P D. ~Q⇒P [By {B} and {C}] E. P [By {A} and {D}] Which is different from the answer I get from the rule of inference. Can anyone tell me where did I go wrong? Thanks!! • Sorry about the truth table image. I can't upload images till I reach 5 rep points. :( May 29, 2015 at 3:37 ## 2 Answers No, your table is correct.   You may be interpreting the result wrong.   You wish to have $P$ true whenever the statements $Q\to P, \neg Q\to R,$ and $R\to P$ are all true at the same time.   That happens on the last three rows, and $P$ is true for each one. $$Q\to P, \neg Q\to R, R\to P \;\vdash\; P$$ PS: Your application of rules of inference is okay too.   You used hyperthetical syllogism and disjunctive elimination. • Thanks for the reply. Do you mean that the whole "P" row doesn't need to be identical to the row we get after Q→P (and) ¬Q→R (and) R→P ?? Because in the truth table there is a 0 in the 5th line where it is 1 in the "P" row :( May 29, 2015 at 4:35 • @Blogger That's it. The last column does not have to be exactly the same as $P$, you only need $1$ in the first column whenever there is a $1$ in the last column. You only wish to prove a claim about the state of $P$ whenever the three premises all hold; you don't care at all what its state is otherwise. May 29, 2015 at 4:43 Your truth table supports the result of the proof: indeed, in all rows where all the premises are true, the conclusion is also true. Perhaps you're mixing the situation up with one where you try to prove that $P$ is a tautology? • So the answer is "P is the correct conclution" right? :) Even in the truth table, if I get the product of 1 and 4 other than getting the product of 1, 2 and 3; Result is the same as the "P" colomn. So can we conclude that "P" is a valid conclusion from the premises given? May 29, 2015 at 4:30 • @Blogger The last column does not have to be exactly the same as $P$, you only need $1$ in the first column whenever there is a $1$ in the last column. You are only proving a claim about the state of $P$ whenever the three premises all hold; you don't care at all what its state is otherwise. May 29, 2015 at 4:40 • So it is like; IF " Q→P (and) ¬Q→R (and) R→P = 1" THEN "P = 1"? May 29, 2015 at 4:43 • @Blogger It's exactly like that. May 29, 2015 at 4:46 • Hey thanks alot @GrahamKemp Now I get it. I thought both the rows should be always identical. Thanks again you saved my day! :D May 29, 2015 at 4:48
## A clarification about (Linux) Mesa / Nouveau Drivers Two of the subjects which I like to blog about, are direct-rendering and Linux graphics drivers. Well in This Earlier Posting, I had essentially written, that on the Debian 9 , Debian /Stretch computer I name ‘Plato’, I have the ‘Mesa’ Drivers installed, and that therefore, that computer cannot benefit from OpenCL, massively-parallel GPU-computing. ‘Mesa’, which I referred to, is a Debian set of meta-packages, that is all open-source. It installs several drivers, and selects the drivers based on which graphics hardware we may have. But, because ‘Plato’ does in fact have an nVidia graphics card, the Mesa package automatically selects the Nouveau drivers, which is one of the drivers it contains. Hence, when I wrote about using the Mesa Drivers, I was in fact writing about the Nouveau Drivers. One of the reasons I have to keep using these Nouveau Drivers, is the fact that presently, ‘Plato’ is extremely stable. There would be some performance-improvements if I was to switch to the proprietary drivers, but making the transition can be a nightmare. It involves black-lists, etc.. Another reason for me to keep using the Nouveau Drivers, is the fact that unlike how it was years ago, today, those drivers support real OpenGL 3, hardware-rendering. Therefore, I’m already getting partial benefit from the hardware-rendering which the graphics card has, while using the open-source driver. The only two things which I do not get, is OpenCL or CUDA computing capabilities, as Nouveau does not support that. Therefore, anything which I write about that subject, will have to remain theoretical for now. I suppose that on my laptop ‘Klystron’, because I have the AMD chip-set more-correctly installed, I could be using OpenCL… Also, ‘Plato’ is not fully a ‘Kanotix’ system. When I installed ‘Plato’, I borrowed a core system from Kanotix, before Kanotix was ready for Debian / Stretch. This means that certain features which Kanotix would normally have, which make it easier to switch between graphics drivers, are not installed on ‘Plato’. And that really makes the idea daunting, to try to switch… Dirk ## Another Reason, for me Not To Set Up JACK as the Back-End For PulseAudio According to This Posting, I had written that hypothetically, it might make some sense for me to set up ‘JACK‘ as the back-end, by which ‘PulseAudio‘ sends its sound output, full-time. As opposed to the notion not making sense, to allow a few scripts to do so temporarily, that are being run by the ‘QJackCtl‘ GUI. Well there is still a reason, why this might not be so. Doing so would be consistent, with setting up a Debian system completely according to our own preferences, and then if something does not work, we are up to our own devices to fix it. My laptop ‘Klystron’ is a Kanotix / Spitfire system, which is also Debian-based, but in which the exact configuration has been done for me, by the Kanotix team, from their Live Disk (which can actually be written onto a USB-Key). This means that there is an advantage to me, in keeping certain configuration details conform to what the Kanotix people prescribed. If I did run into trouble with it, they would have some chance of maybe suggesting solutions, but only on the assumption that mine is still functioning within their parameters. The way those parameters are, the current back-end to the PulseAudio sound server is displayed in the GUI as being “gstreamer“, and yet good compatibility with all things ALSA is maintained… If I was to reconfigure my computers completely, for example because I wanted to change them to use the ‘GNOME desktop manager’ for instance, instead of ‘KDE‘, then the Kanotix team would say ‘Sorry, we are not familiar with the details of your system anymore. Therefore, we cannot help you.’ Yet, more generally, Debian allows deep changes to a configuration. Dirk ## Klystron Kernel Update My Linux laptop named ‘Klystron‘ is still fully subscribed to the “Kanotix” repositories. As the reader may recall, Kanotix is a slightly customized version of Debian Linux, that is KDE-based, and that is maintained by a group of developer-experts who I trust implicitly. Being subscribed to their specific repositories and configuration details has as one advantage, that periodic kernel updates are fed to me, via package manager. As I came home from camping yesterday, on July 7, I also rebooted this laptop, and saw that indeed, a kernel update was being offered, which I immediately installed. So that laptop now has kernel version ‘4.4.0-30-generic‘, or so my /boot directory would seem to say. One problem that I was experiencing with that laptop since before camping, was some subtle WiFi issue which I could no longer pinpoint. I had written, that its ability to use the hardware encryption offered by the (kernel module ‘RTL8723BE’) chip-set seemed to work fine. But there were some other problems with the WiFi. I would like to be able to report, when and if that issue has been resolved completely. But since Klystron has only been running on kernel version 4.4.0-30-generic for one day, it is still far too soon to call out a victory. I will continue to observe the behavior of that laptop for the next little while, and give further comment on it later. So far its behavior looks good. Dirk ## One main reason for Choosing Kanotix A question which many people have asked me, was ‘What is the advantage of choosing Kanotix, over choosing just any generic Debian / Linux OS?’ And an important area in Linux, is hardware recognition. We tend to appreciate it, if we can install a Linux system with little or no mess. And Kanotix users are of the variety, who want to be able to plug in all the latest hardware, and just have it play. Kanotix does not always ship with the generic, stock Debian kernel, but with a special Kanotix kernel build, that has all the latest drivers in-tree. And it is a bit of a joke which we sometimes make, that even though Windows introduced the concept first, in many cases, Linux can be more plug-and-play than Windows is. This is especially true for Kanotix. Dirk
Report Experimental Realization of Wheeler's Delayed-Choice Gedanken Experiment See allHide authors and affiliations Science  16 Feb 2007: Vol. 315, Issue 5814, pp. 966-968 DOI: 10.1126/science.1136303 Abstract Wave-particle duality is strikingly illustrated by Wheeler's delayed-choice gedanken experiment, where the configuration of a two-path interferometer is chosen after a single-photon pulse has entered it: Either the interferometer is closed (that is, the two paths are recombined) and the interference is observed, or the interferometer remains open and the path followed by the photon is measured. We report an almost ideal realization of that gedanken experiment with single photons allowing unambiguous which-way measurements. The choice between open and closed configurations, made by a quantum random number generator, is relativistically separated from the entry of the photon into the interferometer. Young's double-slit experiment, realized with particles sent one at a time through an interferometer, is at the heart of quantum mechanics (1). The striking feature is that the phenomenon of interference, interpreted as a wave following two paths simultaneously, is incompatible with our common-sense representation of a particle following one route or the other but not both. Several single-photon interference experiments (26) have confirmed the wave-particle duality of the light field. To understand their meaning, consider the single-photon interference experiment sketched in Fig. 1. In the closed interferometer configuration, a single-photon pulse is split by a first beamsplitter BSinput of a Mach-Zehnder interferometer and travels through it until a second beamsplitter BSoutput recombines the two interfering arms. When the phase shift Φ between the two arms is varied, interference appears as a modulation of the detection probabilities at output ports 1 and 2, respectively, as cos2 Φ and sin2 Φ. This result is the one expected for a wave, and as Wheeler pointed out, “[this] is evidence... that each arriving light quantum has arrived by both routes” (7). If BSoutput is removed (the open configuration), each detector D1 or D2 on the output ports is then associated with a given path of the interferometer, and, provided one uses true single-photon light pulses, “[either] one counter goes off, or the other. Thus the photon has traveled only one route” (7). Such an experiment supports Bohr's statement that the behavior of a quantum system is determined by the type of measurement performed on it (8). Moreover, it is clear that for the two complementary measurements considered here, the corresponding experimental settings are mutually exclusive; that is, BSoutput cannot be simultaneously present and absent. In experiments where the choice between the two settings is made long in advance, one could reconcile Bohr's complementarity with Einstein's local conception of the physical reality. Indeed, when the photon enters the interferometer, it could have received some “hidden information” on the chosen experimental configuration and could then adjust its behavior accordingly (9). To rule out that too-naïve interpretation of quantum mechanical complementarity, Wheeler proposed the “delayed-choice” gedanken experiment in which the choice of which property will be observed is made after the photon has passed BSinput: “Thus one decides the photon shall have come by one route or by both routes after it has already done its travel” (7). Since Wheeler's proposal, several delayed-choice experiments have been reported (1015). However, none of them fully followed the original scheme, which required the use of the single-particle quantum state as well as relativistic space-like separation between the choice of interferometer configuration and the entry of the particle into the interferometer. We report the realization of such a delayed-choice experiment in a scheme close to the ideal original proposal (Fig. 1). The choice to insert or remove BSoutput is randomly decided through the use of a quantum random number generator (QRNG). The QRNG is located close to BSoutput and is far enough from the input so that no information about the choice can reach the photon before it passes through BSinput. Our single-photon source, previously developed for quantum key distribution (16, 17), is based on the pulsed, optically excited photoluminescence of a single nitrogen-vacancy (N-V) color center in a diamond nanocrystal (18). At the single-emitter level, these photoluminescent centers, which can be individually addressed with the use of confocal microscopy (19), have shown unsurpassed efficiency and photostability at room temperature (20, 21). In addition, it is possible to obtain single photons with a well-defined polarization (16, 22). The delayed-choice scheme is implemented as follows. Linearly polarized single photons are sent by a polarization beamsplitter BSinput through an interferometer (length 48 m) with two spatially separated paths associated with orthogonal S and P polarizations (Fig. 2). The movable output beamsplitter BSoutput consists of the combination of a half-wave plate, a polarization beamsplitter BS′, an electro-optical modulator (EOM) with its optical axis oriented at 22.5° from input polarizations, and a Wollaston prism. The two beams of the interferometer, which are spatially separated and orthogonally polarized, are first overlapped by BS′ but can still be unambiguously identified by their polarization. Then, the choice between the two interferometer configurations, closed or open, is realized with the EOM, which can be switched between two different configurations within 40 ns by means of a homebuilt fast driver (16): Either no voltage is applied to the EOM, or its half-wave voltage Vπ is applied to it. In the first case, the situation corresponds to the removal of BSoutput and the two paths remain uncombined (open configuration). Because the original S and P polarizations of the two paths are oriented along prism polarization eigenstates, each “click” of one detector D1 or D2 placed on the output ports is associated with a specific path (path 1 or path 2, respectively). When the Vπ voltage is applied, the EOM is equivalent to a half-wave plate that rotates the input polarizations by an angle of 45°. The prism then recombines the two rotated polarizations that have traveled along different optical paths, and interference appears on the two output ports. We then have the closed interferometer configuration (22). To ensure the relativistic space-like separation between the choice of the interferometer configuration and the passage of the photon at BSinput, we configured the EOM switching process to be randomly decided in real time by the QRNG located close to the output of the interferometer (48 m from BSinput). The random number is generated by sampling the amplified shot noise of a white-light beam. Shot noise is an intrinsic quantum random process, and its value at a given time cannot be predicted (23). The timing of the experiment ensures the required relativistic space-like separation (22). Then, no information about the interferometer configuration choice can reach the photon before it enters the interferometer. The single-photon behavior was first tested using the two output detectors feeding single and coincidence counters with BSoutput removed (open configuration). We used an approach similar to the one described in (2) and (6). Consider a run corresponding to NT trigger pulses applied to the emitter, with N1 counts detected in path 1 of the interferometer by D1, N2 counts detected in path 2 by D2, and NC detected coincidences corresponding to joint photodetections on D1 and D2 (Fig. 2). Any description in which light is treated as a classical wave, such as the semiclassical theory with quantized photo-detectors (24), predicts that these numbers of counts should obey the inequality $Math$(1) Violation of this inequality thus gives a quantitative criterion that characterizes nonclassical behavior. For a single-photon wavepacket, quantum optics predicts perfect anticorrelation (i.e., α = 0) in agreement with the intuitive image that a single particle cannot be detected simultaneously in the two paths of the interferometer (2). We measured α = 0.12 ± 0.01, hence we are indeed close to the pure single-photon regime. The nonideal value of the α parameter is due to residual background photoluminescence of the diamond sample and to the two-phonon Raman scattering line, which both produce uncorrelated photons with Poissonian statistics (6). With single-photon pulses in the open configuration, we expected each detector D1 and D2 to be unambiguously associated with a given path of the interferometer. To test this point, we evaluated the “which-way” information parameter I =(N1N2)/(N1 + N2) (2528) by blocking one path (e.g., path 2) and measuring the counting rates at D1 and D2. A value of I higher than 0.99 was measured, limited by detector dark counts and residual imperfections of the optical components. The same value was obtained when the other path was blocked (e.g., path 1). In the open configuration, we thus have an almost ideal which-way measurement. The delayed-choice experiment itself is performed with the EOM randomly switched for each photon sent into the interferometer, corresponding to a random choice between the open and closed configurations. The phase shift Φ between the two interferometer arms is varied by tilting the second polarization beamsplitter BS′ with a piezoelectric actuator (PZT). For each photon, we recorded the chosen configuration, the detection events, and the PZT position. All raw data were saved in real time and were processed only after a run was completed. For each PZT position, detection events on D1 and D2 corresponding to each configuration were sorted (Fig. 3). In the closed configuration, we observed interference with 0.94 visibility. We attribute the departure from unity to an imperfect overlap of the two interfering beams. In the open configuration, interference totally disappears, as evidenced by the absence of modulation in the two output ports when the phase shift Φ was varied. We checked that in the delayed-choice configuration, parameters α and I kept the same values as measured in the preliminary tests presented above. Our realization of Wheeler's delayed-choice gedanken experiment demonstrates that the behavior of the photon in the interferometer depends on the choice of the observable that is measured, even when that choice is made at a position and a time such that it is separated from the entrance of the photon into the interferometer by a space-like interval. In Wheeler'swords, as no signal traveling at a velocity less than that of light can connect these two events, “we have a strange inversion of the normal order of time. We, now, by moving the mirror in or out have an unavoidable effect on what we have a right to say about the already past history of that photon” (7). Once more, we find that nature behaves in agreement with the predictions of quantum mechanics even in surprising situations where a tension with relativity seems to appear (29). Supporting Online Material Materials and Methods Figs. S1 to S4 References View Abstract
## Electronic Journal of Probability ### Branching processes seen from their extinction time via path decompositions of reflected Lévy processes #### Abstract We consider a spectrally positive Lévy process $X$ that does not drift to $+\infty$, viewed as coding for the genealogical structure of a (sub)critical branching process, in the sense of a contour or exploration process [34, 29]. We denote by $I$ the past infimum process defined for each $t\geq 0$ by $I_t:= \inf _{[0,t]} X$ and we let $\gamma$ be the unique time at which the excursion of the reflected process $X-I$ away from 0 attains its supremum. We prove that the pre-$\gamma$ and the post-$\gamma$ subpaths of this excursion are invariant under space-time reversal, which has several other consequences in terms of duality for excursions of Lévy processes. It implies in particular that the local time process of this excursion is also invariant when seen backward from its height. As a corollary, we obtain that some (sub)critical branching processes such as the binary, homogeneous (sub)critical Crump-Mode-Jagers (CMJ) processes and the excursion away from 0 of the critical Feller diffusion, which is the width process of the continuum random tree, are invariant under time reversal from their extinction time. #### Article information Source Electron. J. Probab., Volume 23 (2018), paper no. 98, 30 pp. Dates Accepted: 4 September 2018 First available in Project Euclid: 25 September 2018 https://projecteuclid.org/euclid.ejp/1537841130 Digital Object Identifier doi:10.1214/18-EJP221 Zentralblatt MATH identifier 06964792 #### Citation Dávila Felipe, Miraine; Lambert, Amaury. Branching processes seen from their extinction time via path decompositions of reflected Lévy processes. Electron. J. Probab. 23 (2018), paper no. 98, 30 pp. doi:10.1214/18-EJP221. https://projecteuclid.org/euclid.ejp/1537841130 #### References • [1] Romain Abraham and Jean-François Delmas. Williams’ decomposition of the Lévy continuum random tree and simultaneous extinction probability for populations with neutral mutations. Stochastic Processes and their Applications, 119(4):1124 – 1143, 2009. • [2] David Aldous. The continuum random tree. I. Ann. Probab., 19(1):1–28, 1991. • [3] David Aldous. The continuum random tree. III. Ann. Probab., 21(1):248–289, 1993. • [4] David Aldous and Lea Popovic. A critical branching process model for biodiversity. Adv. in Appl. Probab., 37(4):1094–1115, 2005. • [5] Gerold Alsmeyer and Uwe Rösler. Asexual versus promiscuous bisexual Galton-Watson processes: the extinction probability ratio. Ann. Appl. Probab., 12(1):125–142, 2002. • [6] Krishna B. Athreya and Peter E. Ney. Branching processes. Springer-Verlag, New York-Heidelberg, 1972. Die Grundlehren der mathematischen Wissenschaften, Band 196. • [7] Jean Bertoin. Splitting at the infimum and excursions in half-lines for random walks and Lévy processes. Stochastic Process. Appl., 47(1):17–35, 1993. • [8] Jean Bertoin. Lévy processes, volume 121 of Cambridge Tracts in Mathematics. Cambridge University Press, Cambridge, 1996. • [9] Gabriel Berzunza and Juan Carlos Pardo. Asymptotic behaviour near extinction of continuous-state branching processes. J. Appl. Probab., 53(2):381–391, 2016. • [10] Hongwei Bi and Jean-François Delmas. Total length of the genealogical tree for quadratic stationary continuous-state branching processes. Ann. Inst. Henri Poincaré Probab. Stat., 52(3):1321–1350, 2016. • [11] Ma. Emilia Caballero, Amaury Lambert, and Gerónimo Uribe Bravo. Proof(s) of the Lamperti representation of continuous-state branching processes. Probab. Surveys, 6(0):62–89, 2009. • [12] Loïc Chaumont. Sur certains processus de Lévy conditionnés à rester positifs. Stochastics Stochastics Rep., 47(1-2):1–20, 1994. • [13] Loïc Chaumont. Conditionings and path decompositions for Lévy processes. Stochastic Process. Appl., 64(1):39–54, 1996. • [14] Loïc Chaumont. On the law of the supremum of Lévy processes. Ann. Probab., 41(3A):1191–1217, 2013. • [15] Loïc Chaumont and Ronald A. Doney. On Lévy processes conditioned to stay positive. Electron. J. Probab., 10:no. 28, 948–961, 2005. • [16] Miraine Dávila Felipe and Amaury Lambert. Time reversal dualities for some random forests. ALEA Lat. Am. J. Probab. Math. Stat., 12(1):399–426, 2015. • [17] Jean-François Delmas and Olivier Hénard. A Williams decomposition for spatially dependent superprocesses. Electron. J. Probab., 18(0), 2013. • [18] Ronald A. Doney. Fluctuation theory for Lévy processes, volume 1897 of Lecture Notes in Mathematics. Springer, Berlin, 2007. • [19] Thomas Duquesne. Path decompositions for real Levy processes. Ann. Inst. H. Poincaré Probab. Statist., 39(2):339–370, 2003. • [20] Thomas Duquesne and Jean-François Le Gall. Random trees, Lévy processes and spatial branching processes. Astérisque, (281):vi+147, 2002. • [21] Warren W. Esty. The reverse Galton-Watson process. J. Appl. Probability, 12(3):574–580, 1975. • [22] Steven N. Evans. Probability and Real Trees: École d’Été de Probabilités de Saint-Flour XXXV-2005. Lecture Notes in Mathematics. Springer Berlin Heidelberg, 2007. • [23] Priscilla Greenwood and Jim Pitman. Fluctuation identities for Lévy processes and splitting at the maximum. Adv. in Appl. Probab., 12(4):893–902, 1980. • [24] Jean Jacod and Albert N. Shiryaev. Limit theorems for stochastic processes, volume 288 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer-Verlag, Berlin, second edition, 2003. • [25] Peter Jagers. Branching processes with biological applications. Wiley-Interscience [John Wiley & Sons], London-New York-Sydney, 1975. Wiley Series in Probability and Mathematical Statistics—Applied Probability and Statistics. • [26] Fima C. Klebaner, Uwe Rösler, and Serik Sagitov. Transformations of galton-watson processes and linear fractional reproduction. Advances in Applied Probability, 39(4):1036–1053, 2007. • [27] Andreas E. Kyprianou. Introductory lectures on fluctuations of Lévy processes with applications. Universitext. Springer-Verlag, Berlin, 2006. • [28] Andreas E. Kyprianou and Juan Carlos Pardo. Continuous-state branching processes and self-similarity. J. Appl. Probab., 45(4):1140–1160, 2008. • [29] Amaury Lambert. The contour of splitting trees is a Lévy process. Ann. Probab., 38(1):348–395, 2010. • [30] Amaury Lambert, Florian Simatos, and Bert Zwart. Scaling limits via excursion theory: interplay between Crump-Mode-Jagers branching processes and processor-sharing queues. Ann. Appl. Probab., 23(6):2357–2381, 2013. • [31] Amaury Lambert and Gerónimo Uribe Bravo. Totally ordered measured trees and splitting trees with infinite variation, 2016. • [32] John Lamperti. Continuous state branching processes. Bull. Amer. Math. Soc., 73:382–386, 1967. • [33] Jean-François Le Gall. Random trees and applications. Probab. Surv., 2:245–311, 2005. • [34] Jean-Francois Le Gall and Yves Le Jan. Branching processes in Lévy processes: the exploration process. Ann. Probab., 26(1):213–252, 1998. • [35] Grégory Miermont. Ordered additive coalescent and fragmentations associated to Levy processes with no positive jumps. Electron. J. Probab., 6:no. 14, 33 pp. (electronic), 2001. • [36] Pressley Warwick Millar. Exit properties of stochastic processes with stationary independent increments. Trans. Amer. Math. Soc., 178:459–479, 1973. • [37] Pressley Warwick Millar. Random times and decomposition theorems. In Probability (Proc. Sympos. Pure Math., Vol. XXXI, Univ. Illinois, Urbana, Ill., 1976), pages 91–103. Amer. Math. Soc., Providence, R. I., 1977. • [38] Pressley Warwick Millar. Zero-one laws and the minimum of a Markov process. Trans. Amer. Math. Soc., 226:365–391, 1977. • [39] Etienne Pardoux and Anton Wakolbinger. From Brownian motion with a local time drift to Feller’s branching diffusion with logistic growth. Electronic Communications in Probability, 16(0):720–731, 2011. • [40] Daniel Revuz and Marc Yor. Continuous martingales and Brownian motion, volume 293 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer-Verlag, Berlin, 1991. • [41] Boris Alekseevich Rogozin. On distributions of functionals related to boundary problems for processes with independent increments. Theory of Probability & Its Applications, 11(4):580–591, jan 1966. • [42] David Williams. Path decomposition and continuity of local time for one-dimensional diffusions. I. Proc. London Math. Soc. (3), 28:738–768, 1974. • [43] Ahmed I. Zayed. Handbook of function and generalized function transformations. Mathematical Sciences Reference Series. CRC Press, Boca Raton, FL, 1996.
# Old habits ### Or: Recombination dynamics in a coupled two-level system with strong nonradiative contribution (an ipython notebook) One of my students investigates the transient behavior of the photoluminescence emitted by (In,Ga)N quantum heterostructures after being irradiated by a short laser pulse. The characteristic feature of the transients he observes for these structures is a power-law decay of the photoluminescence intensity with time at low temperatures (10 K), which changes into an exponential decay at higher temperatures (150 K). His results reminded me of ones I acquired myself ages ago, during my own time as a PhD student. I didn't have a sensible interpretation then, but I do have one now. Hence, to the surprise of my student, I nonchalantly wrote down the following two coupled differential equations as if they had just occurred to me: ${\dot n_b} = -n_b/\tau_{rel} - n_b/\tau_{nr} + n_w \exp(-\frac{E_b}{k_B T})/\tau_e$ ${\dot n_w} = n_b/\tau_{rel} - t^{b-1} n_w/\tau_{w} - n_w \exp(-\frac{E_b}{k_B T})/\tau_e$ with the second term in the second equation ($t^{b-1} n_w/\tau_{w}$) being the experimental observable. The form of this term is giving rise to what is known as a stretched exponential, which for $b \rightarrow 0$ approaches a power law for long times. Using Mathematica, it takes 7 lines of code to solve this system and to plot it for several temperatures $T$ (or, equivalently and as done here, for different energies $k_B T$): from IPython.display import Image Image(filename='/home/cobra/ownCloud/MyStuff/Documents/pdes-net.org/files/images/deqs.png') As I had hoped, this simple model reproduces the behavior observed in the experiment fairly well. My student was also pleased, but only with the result, not with the method: he's familiar with Matlab, but not with Mathematica. Well, I suspect that he's also not too familiar with Matlab, since he could otherwise have easily solved the equations himself. In any case, his admission reminded me that I actually wanted to migrate my computational activities to free software whenever possible. It's not easy to get rid of old habits, and as I'm using Mathematica since 23 years, the code above just came naturally, while the one below still required an explicit intellectual effort. But that's essentially the same lame excuse which I'm tired to hear from users of, for example, Microsoft Office when asked to prepare a document with LibreOffice. So let's get moving. Here's the above differential equation system solved and plotted using numpy, scipy, and matplotlib in an ipython notebook. Note how the notebook integrates the actual code with comments, links, pictures and equations. Editing this notebook is a real treat thanks to the use of markdown and LaTeX syntax. #Initialize %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt import numpy as np from scipy.integrate import odeint mpl.rcParams['figure.figsize'] = (6, 4) mpl.rcParams['font.size'] = 16 mpl.rcParams['text.usetex'] = True mpl.rcParams['font.family'] = 'Serif' mpl.rcParams['lines.linewidth'] = 2 # Parameters taurel = 0.1 # capture time taue = 0.1 # emission time eb = 20 # activation energy (in meV) b = 0 # stretching parameter (approaches power law for b -> 0) # solve the system dn/dt = f(n,t) and plot the solution fig = plt.figure() #for kt in np.linspace(1,13,7): # approximate temperatures for T in [10,30,50,70,100,150]: # exact temperatures kt = 0.086173324*T # in meV def f(n,t): nbt = n[0] nwt = n[1] # the model equations f0 = - nbt/taurel - nbt/taunr + nwt*np.exp(-eb/kt)/taue f1 = nbt/taurel - nwt*np.exp(-eb/kt)/taue - t**(b-1)*nwt/tauw return [f0, f1] # initial conditions nb0 = 1. # initial population in barrier nw0 = 0 # initial population in well n0 = [nb0, nw0] # initial condition vector t = np.logspace(-2,2,1000) # logarithmic time grid # solve the DES soln = odeint(f, n0, t) nb = soln[:, 0] nw = soln[:, 1] # plot results plt.loglog(t, t**(b-1)*nw/tauw/max(t**(b-1)*nw/tauw), label=r'%.0f K' %T) plt.xlabel('Time (ns)') plt.ylabel('Intensity (arb. units)') plt.axis([7e-3,40,1e-5,2]); plt.legend(loc='lower left', frameon=False, prop={'size':15}, labelspacing=0.15) fig.savefig('transients.pdf') The above command saves this plot as a publication-ready figure in pdf format. There are many other available formats, including eps (for the traditional LaTeX/dvipdf toolchain), svg (for further editing with inkscape, or publishing in the web) and png (for insertion in a Powerpoint/Impress presentation). # Benchmarks I haven't posted any browser benchmark since more than four years. For a good reason: if all contenders perform equally well, there's no need to benchmark them. Of course, the recent excitement about Apple's Safari outpacing Chrome and Firefox still came to my attention. ;) As it turned out, however, Safari managed to do that only in benchmarks developed by Apple, but not in those provided by Apple's competitors Google and Mozilla. This result seems to confirm qualified opinions as to which the available browser benchmarks are to be disregarded altogether. Well, let's see what we have: Mozilla: Kraken Now let's see what we've got: System Jetstream Speedometer Octane Kraken (ms) 5: Office (i7 4790, Archlinux) Chromium 44.0.2403.155 225.72±7.14 95.7±2.0 41571 749.8±0.6% Firefox 40.0.2 215.21±8.08 61.0±2.1 35537 839.5±2.4% 4: Desktop (Xeon E3 1240 v2, Archlinux) Chromium 44.0.2403.155 180.72±7.66 53.7±0.35 32280 919.6±1.7% Firefox 40.0.2 164.03±1.44   29132 1030.3±4.1% 3: Notebook (Pentium P6200, Archlinux) Chromium 44.0.2403.155 82.44±1.56 22.8±0.34 13167 2138.6±1.6% Firefox 40.0.2 70.53±5.91   11944 2300.4±2.8% 2: Netbook (Atom N270, Debian Stretch) Chromium 44.0.2403.107 17.28±1.37 5.15±0.07 2924 12985.7±3.7% 1: Tablet (ARM Cortex A9, Android 5.1.1) Chrome 44.0.2403.133 15.46±0.92 7.64±0.19 2776 16248.3±5.1% Chromium consistently performs better than Firefox across all benchmarks, even in Mozilla's own benchmark Kraken. The difference, however, is insignificant. But that's not what I was actually interested in. What I really wanted to see was whether one can abuse these browser benchmarks for a kind of quick and dirty system benchmarking without the need to install anything. And as you see, all benchmarks scale fairly well: If we restrict ourselves to x86 for the moment, Jetstream (red) and Octane (blue) scale essentially identically across the entire range of systems. As a matter of fact, they even seem way too close if we suppose that Jetstream and octane are independent benchmarks. Kraken (yellow) scales very similarly, with the sole exception of the mini, for which it indicates only half of the performance than all other benchmarks. Finally, Speedometer (green) really seems to like the i4790. Perhaps it's making use of AVX2? For the ARM architecture of the tablet, Jetstream and Octane are again almost identical, but Kraken suffers and Speedometer gains. No big deal, though: the notebook is still miles away. What's also interesting: a tablet from 2012 does not outperform a netbook from 2008, contrary to what the media want us to believe. But who believes them anyway anything anymore. Compared to specialized benchmarks designed to test the number crunching performance of systems, the current ones reflect the average system performance we can expect in everyday situations. Particularly, of course, in browsing. ;) # Pip I usually manage my system-wide Python installation with the system's package manager, and avoid using Python's own package manager pip. Not so in a virtual environment. As described in a previous post, pip, together with the pip-tools, offers a very convenient way to get and to keep all tools in your virtual environment up to date. Imagine my surprise when a couple of months ago 'pip-review --interactive' did not update my tools one by one, as it used to do, but only resulted in a 'fish: Unknown command 'pip-review'. As it turned out, the developer of pip-tools (which I dutifully kept up to date) had decided to dump pip-review in favor of two new commands, pip-compile and pip-sync. I'm sure he had good reasons for that, and it's really no big deal for end-users like me. After a 'pip list' I knew what I needed, created a corresponding requirements.in as Vincent described in his post, and ran pip-compile requirements.in &amp;&amp; pip-sync requirements.txt That's all. It's still as useful as ever. # Strategic location Last Friday, a record was broken: 38.9°C in Berlin. We've got five (5) fans working day and night, but it was a lost battle from the very beginning. At least they helped us to dissipate the 8 liters of mineral water we've consumed over the day... I always wonder how the cats manage. Without being able to sweat, all they can do is to breath rapidly and to search for a cooler place. Interestingly, the individual wellness zone differs greatly from cat to cat. Indy was hiding in such narrow, secluded spaces that I could not possibly get a decent photograph. Luca, on the other hand, decided to place himself right at the entrance of our bathroom, below the Noren serving as visual separation during the hottest days of the year: # Put them on ice Using bleeding edge distributions has the charm of getting all the great new stuff prior to anyone else. Such as conky 1.10, which comes with an entirely new and shining configuration syntax. Yippee ki-yay etc. etc. Naturally, I first got 1.10 on my three Arch-based systems. Upon startup, conky tried to convert the previous config on the fly, but failed to do so. A manual conversion via convert.lua also failed. Grmpf. Well, I thought, the new Lua syntax doesn't seem to be that different from the old one. I thus edited my config file and changed all entries according to the new rules. Took me about 10 min. Still plenty of time till the beginning of my next meeting! Let's start conky with its new config and iron out the few remaining wrinkles in the remaining 5 min. Error. Error. Errrror! Come on, girls. Why don't you start a new branch of Conky (2?) and keep the old one (1.9) as stable? Why forcing the new Lua syntax down the throats of everybody, including people like me who don't have the time to pamper immature and bawling software like yours? After the meeting, I downgraded conky and put in on hold. In Arch, you can downgrade by issuing pacman -U conky-1.9.0-7 in the directory holding your old package, i.e., normally /var/cache/pacman/pkg (I've moved this cache to my HDD and can thus afford to keep several versions for each package). You put the package on hold by simply adding it to the line IgnorePkg = conky in /etc/pacman.conf. A few days later, the same happened in Debian Stretch. Muuuh! I checked and saw that the Jessie repository still lists conky 1.9. Excellent! Let's downgrade by first adding the Jessie repository to /etc/apt/sources.list, and then running wajig update wajig install conky=1.9.0-6 wajig install conky-all=1.9.0-6 To put them on hold, use wajig hold conky wajig hold conky-all Don't sabotage my conkys. I like them as they are: # sudo for polkit Call me old-fashioned, but I usually configure my systems to have a root account, and I do everything which requires root privileges as root. With one exception: I like to be able to update the system without having to enter a password. All system which I administer are running rolling-release distributions (Arch or Debian Sid/Testing) for which updates are frequent. For updating from the command line, sudo is the method of choice. Note that neither Arch nor Debian have sudo installed by default. Also note that manual changes are now expected to be placed in a separate file in /etc/sudoers.d instead of directly in /etc/sudoers. To edit this file, use 'visudo -f filename' (don't use an extension, since filenames containing a period "." will be ignored). Everything else is self-explaining. But what if you're planning to use a front-end which does not respect these settings since it responds to a different framework to control user privileges? I've encountered this problem with pamac, which listens to polkit. In this case, and as explained in detail by the Arch Wiki, one has to create a custom rule in /etc/polkit-1/rules.d/. To allow, for example, all users in the group wheel to update without having to enter a password, I've put there the following as 49-passwordless-pamac.rules: /* Allow members of the wheel group to update with pamac * without password authentication, similar to "sudo NOPASSWD:" */ { if ((action.id == "org.manjaro.pamac.commit") &amp;&amp; subject.isInGroup("wheel")) { return polkit.Result.YES; } }); # Enough is not enough People like to tell me about their digital life. Recently, I hear a lot about laptops replacing desktops. To my surprise, many are willing to spend quite a bit for this transition: most more than €1000 and some €2000 and above. All of the former assure me that the performance of their desktop replacement is "more than enough". Several of the latter actually believe that the system's performance is directly related to its pricetag. One member of this group owns a Macbook 12 (sorry for the cliché, but what can I do) and tried to convince me in a particular insistent and tenacious way that his gadget would outperform even the most powerful desktops available. As a matter of fact, it is quite far from this feat (look here). As all ultrabooks equipped with a Core M-5YXX processor, it performs slightly better (20–40%) than my €299 Fujitsu Lifebook which, however, is miles away from a decent desktop: Right, the last one is not your usual desktop, but a compute server with—at the moment of the screenshot—a load of 24. The remaining 8 physical cores managed to outperform my desktop, if only by a slight margin. But I bet that my new office desktop (an i7 4790) will be able to complete the run under 100 s ... let's see tomorrow. ;) Update: even below 80 s: In any case, my lifebook has been thoroughly smashed and humiliated: instead of the 2 minutes required by the Xeons it needed a staggering 20 minutes for the same result. The lifebook is great for writing this entry, but for serious tasks I rather turn to a serious computer. And the same applies to your Core M driven ultrabooks. At this point, the compassionate ones of the laptop owners are sure to secretly pity me. Just imagine that I have to stay all time in a kind of server room for being able to take advantage of the performance depicted above. ... how gruesome! Well, as much as I like to sit in my study, I prefer to run my computations from where I want. How do I do that? I have WiFi. :D Seriously, for computations I use an ipython server running on my desktop. The posts of Fillipo and Nikolaus have helped me to find the best (most robust and convenient) way to connect to this server. For what follows, both the server and the client need to have mosh installed. For the server, tmux (which I also like to use for different reasons) is required in addition. From the client, I connect to the server (blackvelvet) by issuing mosh blackvelvet -- tmux new-session -s ipython In this session, I then start an ipython notebook server on blackvelvet: ipython notebook --no-browser --port=8889 and subsequently detach the session with Ctrl-a d. I can attach again anytime by issuing mosh blackvelvet -- tmux a on the client. Isn't that neat? To connect to the server, I start an ssh tunnel on the client with ssh -N -L localhost:8888:localhost:8889 cobra@blackvelvet and open the notebook at http://localhost:8888/: Thanks to MathJax, the font rendering is way better than that of Mathematica. The ssh tunnel can be stopped with a single Ctrl-C on the client, the ipython server needs a double one on the server (after attaching the session again). # Sculpting Suppose we have a crystal in the form of a rectangular block, given by a list of the absolute atomic coordinates with one atom per line Ga -9.278018850000 -9.642000000000 -7.870137870000 N -7.422415050000 -9.642000000000 -8.518637230000 . . . looking like that: What we really want, though, is a hexagonal column which tapers down from the bottom to the top — just like a column of Doric order. How can we sculpt this column out of the block we have? Where do we get the digital chisel we need? A colleague of mine solved this quest in the most elegant fashion: with an awk one-liner. awk '{x=sqrt($2^2);y=sqrt($3^2);d=x/2+sqrt(3)/2*y;z=$4;cutoff='$radius1'*('$zmax'-z)/'$zrange'+'$radius2'*(z-('$zmin'))/'$zrange'};d&lt;=cutoff &amp;&amp; x&lt;cutoff' block.dat &gt; column.dat Voilà: # Season color An early morning shot of the norway maple just outside my study. Together with a freshly pressed orange juice followed by good coffee, and watching my cats watching the birds in the tree, the world doesn't seem such a hostile place after all. # Printing A century ago, printing under Linux had only one name: Postscript. Preferably spoken natively by the printer. Which, in most cases, was a laser printer from Hewlett Packard with a price tag well above$2000. Color laser printers became available in the mid 90s, but it was not until 2005 that I've seen them to represent the standard printing solution in offices. At home, laser printers are still comparatively rare. Inkjets dominate the scene for several reasons: they are (at the first glance) very affordable, they can produce printouts of photographs with astonishingly high fidelity, and they are available as multifunctional all-in-one solutions combining printer, scanner, fax and copier. I followed this trend without reflecting on my actual needs. Since I thought (wrongly) that I anyway don't need any printing at home, I purchased a GDI printer and connected it via USB to the Windows-powered gaming rig of my wife. The first one, a simple Canon inkjet, which ceased to function after an extended period of inactivity because of the resulting dry ink, was followed by an Epson all-in-one, which did its job until the ink was dry. We simply don't print that much. I was tired of these toys anyway, since I began to see the convenience of printing from all my devices anywhere in the LAN. A week ago, I've thus acquired a Hewlett Packard LaserJet Pro 200. It speaks Postscript, has an ethernet connection and 128 MB memory, resolves 600 dpi and turns out 14 pages per minute. For €140. I knew, of course, that these low-end business-class color lasers have become quite affordable over the past few years. To see one in action, and to see with your own eyes that the print quality is quite on par with the enterprise-class model from 2010 in the office, well, that's different. At present, I cannot deny an entirely unjustified feeling of grandness when issuing Ctrl-P . ;)
# Exact Differential Equations of Order n? bolbteppa A second order ode $Py'' + Qy' + Ry = 0$ is exact if there exists a first order ode $Ay' + By$ such that $$(Ay' + By)' = Ay'' + (A' + B)y' + B'y = Py'' + Qy' + Ry = 0$$ How can one cast the analysis of this question in terms of exact differential equations? In other words, could somebody explain this interesting quote: The derivation of the conditions of exact integrability of an ordinary differential equation of the nth. order (or of a differential expression involving derivatives of a single dependent variable with regard to a single independent variable) is sometimes made to depend upon the theory of integration of an expression, exact in the sense of the foregoing chapter. As however the connection is not immediate and this method is not the principal method, it will be sufficient here to give the following references to some of the writers on the subject, in whose memoirs references to Euler, Lagrange, Lexell, and Condorcet, will be found in ... Forsyth - Page 33 Thanks! 1 person Homework Helper exact in the sense of the foregoing chapter ... see foregoing chapter. (reads) seems the author launches into investigating "exact" equations without making a general definition. That's a pretty nasty text btw. I'd be remiss if I didn't advise you to ditch it. For an idea how the concept of "exactness" may apply to higher order ODEs see instead: http://reference.wolfram.com/mathematica/tutorial/DSolveExactLinearSecondOrderODEs.html That doesn't help, neither the insult nor the link, but thanks... The book does Frobenius' theorem in like a page, & spends hundreds of pages on inexact equations & is filled with history as well as substance, I'm surprised anyone with any appreciation for such a subject would write something like this off so easily. Also, the definition you've provided should really be a theorem if we're trying to link higher order exactness with first order exactness, but how and ever... Last edited: 1 person
# Left Shift Semigroup and Its Infinitesimal Generator ## Left shift operator Throughout we consider the Hilbert space $L^2=L^2(\mathbb{R})$, the space of all complex-valued functions with real variable such that $f \in L^2$ if and only if $\lVert f \rVert_2^2=\int_{-\infty}^{\infty}|f(t)|^2dm(t)<\infty$ where $m$ denotes the ordinary Lebesgue measure (in fact it's legitimate to consider Riemann integral in this context). For each $t \geq 0$, we assign an bounded linear operator $Q(t)$ such that $(Q(t)f)(s)=f(s+t).$ This is indeed bounded since we have $\lVert Q(t)f \rVert_2 = \lVert f \rVert_2$ as the Lebesgue measure is translate-invariant. This is a left translation operator with a single step $t$. ## Properties of $Q(t)$ ### In Hilbert space The inner product in $L^2$ is defined by $(f,g)=\int_{-\infty}^{\infty}f(s)\overline{g(s)}dm(s), \quad f,g\in L^2.$ If we apply $Q(t)$ on $f$, we see \begin{aligned} (Q(t)f,g) &= \int_{-\infty}^{\infty}f(s+t)\overline{g(s)}dm(s) \\ &= \int_{-\infty}^{\infty}f(u)\overline{g(u-t)}dm(u) \quad (u=s+t) \\ &= (f,Q(t)^{\ast}g) \end{aligned} where $Q(t)^\ast$ is the adjoint of $Q(t)$, which happens to be a left translation operator with a single step $t$. Clearly we have $Q(t)Q(t)^\ast=Q(t)^\ast Q(t)=I$, which indicates that $Q(t)$ is unitary. Also we can check in a more manual way: $(Q(t)f,Q(t)g) = \int_{-\infty}^{\infty}f(s+t)\overline{g(s+t)}dm(s) = \int_{-\infty}^{\infty}f(s+t)\overline{g(s+t)}dm(s+t)=(f,g).$ By operator theory, since $Q(t)$ is unitary and bounded, the spectrum of $Q(t)$ lies in the unit circle $S^1$. ### As a semigroup Note $Q(0)=I$ and $Q(t+u)f(s)=f(s+t+u)=f[(s+t)+u]=Q(u)f(s+t)=Q(t)Q(u)f(s)$ for all $f \in L^2$, which is to say that $Q(t+u)=Q(t)Q(u)$. Therefore we say $\{Q(t)\}$ is a semigroup. But what's more important is that it satisfies strong continuity near the origin: $\lim_{t \to 0}\lVert Q(t)f - f \rVert_2 = 0.$ This is not too hard to verify. It suffices to prove that $\lim_{t \to 0}\int_{-\infty}^{\infty} |f(s+t)-f(s)|^2dm(s) =0.$ Note $C_c(\mathbb{R})$ (continuous function with compact support) is dense in $L^2$, and for $f \in C_c(\mathbb{R})$, it follows immediately from properties of continuous functions. Next pick $f \in L^2$. Then for $\varepsilon>0$ there exists some $f_1 \in C_c(\mathbb{R})$ such that $\lVert f-f_1 \rVert_2 < \frac{\varepsilon}{4}$ and $\lVert f_1(s+t)-f_1(s)\rVert_2<\frac{\varepsilon}{2}$ for $t$ small enough. If we put $f_2=f-f_1$ we get \begin{aligned} \lVert f(s+t)-f(s) \rVert_2 &\leq \lVert f_1(s+t)-f_1(s) \rVert_2+\lVert f_2(s+t)-f_2(s) \rVert \\ &< \frac{\varepsilon}{2}+2\lVert f_2(s)\rVert < \varepsilon. \end{aligned} The limit follows as $\varepsilon \to 0$. ## Infinitesimal generator of $Q(t)$ Recall that the infinitesimal generator of $Q(t)$ is defined to be $A=\lim_{t \to 0}\frac{1}{t}[Q(t)-I]$ which is inspired by $\frac{d}{dt}e^{tA}=A$ (thanks to von Neumann). Note if $f \in L^2$ is differentiable, then $Af(s) = \lim_{t \to 0} \frac{f(s+t)-f(s)}{t} = f'(s).$ The infinitesimal generator of $Q(t)$ being differentiation operator is quite intuitive. But we need to clarify it in $L^2$ which is much larger. So what is the domain $D(A)$? We don't know yet but we can guess. When talking about differentiation in $L^p$ space, it makes sense to extend our differentiation to absolute continuity. Also we need to make sure that $Af \in L^2$, hence we put $D=\{f\in L^2:f \text{ absolutely continuous, }f' \in L^2\}.$ For every $x \in D(A)$ and any fixed $t$ we already have $\frac{d}{dt}Q(t)f(s)=f'(s+t)=Af(s+t)$ hence $Af=f'$ for every $x \in D(A)$ and it follows that $D(A) \subset D$. In fact, $A$ is the restriction of the differential operator on $D(A)$. Conversely, By Hille-Yosida theorem, we see $1 \in \rho(A)$ and also one can show that $1 \in \rho(\frac{d}{dx})$. Therefore $(I-\frac{d}{dx})D(A)=(I-A)D(A)=L^2.$ But we also have $D=(I-\frac{d}{dx})^{-1}L^2.$ Thus $D = \left(1-\frac{d}{dx}\right)^{-1}\left(1-\frac{d}{dx}\right)D(A)=D(A).$ The fact that $(I-\frac{d}{dx})D=L^2$ can be realised by the equation $f-f'=g$, where the existence of solution can be proved using Fourier transform. Note $\hat{f'}(y)=iy\hat{f}(y)$, with some knowledge of distribution, the result can also be given by $D(A)=\left\{f\in L^2:\int_{-\infty}^{\infty}|y\hat{f}(y)|^2dy<\infty\right\}.$ ### Spectrum of the generator By the Hille-Yosida theorem, the half plane $\{z:\Re z>0\} \subset \rho(A)$. But we can give a more precise result of it. Pick any $f \in D(A)$. It is directly verified that $(A-\lambda{I})f = f'-\lambda{f}.$ Put $g=(A-\lambda{I})f$ then $\hat{g}(y)=iy\hat{f}(y)-\lambda{\hat{f}(y)}.$ Therefore $\hat{f}(y) = \frac{\hat{g}(y)}{iy-\lambda} \in L^2.$ Conversely, suppose $h(y)=\frac{\hat{g}(y)}{iy-\lambda} \in L^2$, then $\hat{g}(y)=iyh(y)-\lambda{h}(y)$. If we take its Fourier inverse, we see $g \in R(A-\lambda{I})$. If $g \in L^2$, then clearly $\hat{g} \in L^2$. It remains to discuss $\hat{g}(y)/(iy-\lambda)$. Note $iy$ is on the imaginary axis, hence if $\lambda$ is not purely imaginary, then $\hat{g}(y)/(iy-\lambda) \in L^2$. If $\lambda$ is purely imaginary however, then we may have $\hat{g}(y)/(iy-\lambda)\not\in L^2$. For example, we can take $\hat{g}=\chi_{[s-1,s+1]}$ where $\lambda = is$. Hence if $\lambda$ is purely imaginary, $R(A-{\lambda}I)$ is a proper subspace of $L^2$. Therefore we conclude: $\sigma(A)= \{z \in \mathbb{C}:\Re z = 0\}.$ This is an exercise on W. Rudin's Functional Analysis. You can find related theorems in Chapter 13. Left Shift Semigroup and Its Infinitesimal Generator https://desvl.xyz/2021/05/26/left-shift/ Desvl 2021-05-26 2021-10-11
# An Eigenvalue Sensitivity Example On May 29-30, I plan to attend a conference, organized by Nick Higham, at the University of Manchester. The title of the conference is Celebrating the Centenary of James H. Wilkinson's Birth I am giving a talk about one of Wilkinson's favorite topics, how can perturbations of a matrix with sensitive eigenvalues produce a defective matrix with a multiple eigenvalue. I uncovered this example from my original Fortran MATLAB User's Guide, a University of New Mexico technical report, dated 1981, four years before MathWorks. The text of this blog post is copied from that guide. I've inserted comments where today's MATLAB scales the eigenvectors differently. ### Contents #### Eigenvalue Sensitivity Example In this example, we construct a matrix whose eigenvalues are moderately sensitive to perturbations and then analyze that sensitivity. We begin with the statement B = [3 0 7; 0 2 0; 0 0 1] B = 3 0 7 0 2 0 0 0 1 Obviously, the eigenvalues of B are 1, 2 and 3 . Moreover, since B is not symmetric, these eigenvalues are slightly sensitive to perturbation. (The value B(1,3) = 7 was chosen so that the elements of the matrix A below are less than 1000.) We now generate a similarity transformation to disguise the eigenvalues and make them more sensitive. L = [1 0 0; 2 1 0; -3 4 1] M = L\L' L = 1 0 0 2 1 0 -3 4 1 M = 1 2 -3 -2 -3 10 11 18 -48 The matrix M has determinant equal to 1 and is moderately badly conditioned. The similarity transformation is A = M*B/M A = -64.0000 82.0000 21.0000 144.0000 -178.0000 -46.0000 -771.0000 962.0000 248.0000 Because det(M) = 1 , the elements of A would be exact integers if there were no roundoff. So, A = round(A) A = -64 82 21 144 -178 -46 -771 962 248 This, then, is our test matrix. We can now forget how it was generated and analyze its eigenvalues. [X,D] = eig(A) X = 0.0891 0.0735 -0.1089 -0.1782 -0.1923 0.1634 0.9800 0.9786 -0.9805 D = 3.0000 0 0 0 1.0000 0 0 0 2.0000 % { Today the eigenvectors are scaled to have unit length. % Classic MATLAB did not scale the eigenvectors. It got % % X = % % -.0891 3.4903 41.8091 % .1782 -9.1284 -62.7136 % -.9800 46.4473 376.2818 % } Since A is similar to B, its eigenvalues are also 1, 2 and 3. They happen to be computed in another order by the EISPACK subroutines. The fact that the columns of X, which are the eigenvectors, are so far from being orthonormal is our first indication that the eigenvalues are sensitive. To see this sensitivity, we display more figures of the computed eigenvalues. format long diag(D) ans = 3.000000000003868 0.999999999998212 1.999999999997978 We see that, on this computer, the last five {today: four} significant figures are contaminated by roundoff error. A somewhat superficial explanation of this is provided by format short cond(X) ans = 1.7690e+03 % { Classic: % % ANS = % % 3.2216e+05 % } The condition number of X gives an upper bound for the relative error in the computed eigenvalues. However, this condition number is affected by scaling. X = X/diag(X(3,:)) cond(X) X = 0.0909 0.0751 0.1111 -0.1818 -0.1965 -0.1667 1.0000 1.0000 1.0000 ans = 1.7692e+03 Rescaling the eigenvectors so that their last components are all equal to one has two consequences. The condition of X is decreased by over two orders of magnitude. {Not today.} (This is about the minimum condition that can be obtained by such diagonal scaling.) Moreover, it is now apparent that the three eigenvectors are nearly parallel. More detailed information on the sensitivity of the individual eigenvalues involves the left eigenvectors. Y = inv(X') Y'*A*X Y = -511.5000 259.5000 252.0000 616.0000 -346.0000 -270.0000 159.5000 -86.5000 -72.0000 ans = 3.0000 -0.0000 -0.0000 0.0000 1.0000 0.0000 0 -0.0000 2.0000 We are now in a position to compute the sensitivities of the individual eigenvalues. for j = 1:3, c(j) = norm(Y(:,j))*norm(X(:,j)); end c c = 833.1092 450.7228 383.7564 These three numbers are the reciprocals of the cosines of the angles between the left and right eigenvectors. It can be shown that perturbation of the elements of A can result in a perturbation of the j-th eigenvalue which is c(j) times as large. In this example, the first eigenvalue has the largest sensitivity. We now proceed to show that A is close to a matrix with a double eigenvalue. The direction of the required perturbation is given by E = -1.e-6*Y(:,1)*X(:,1)' E = 1.0e-03 * 0.0465 -0.0930 0.5115 -0.0560 0.1120 -0.6160 -0.0145 0.0290 -0.1595 With some trial and error which we do not show, we bracket the point where two eigenvalues of a perturbed A coalesce and then become complex. eig(A + .4*E) eig(A + .5*E) ans = 1.1500 2.5996 2.2504 ans = 2.4067 + 0.1753i 2.4067 - 0.1753i 1.1866 + 0.0000i Now, a bisecting search, driven by the imaginary part of one of the eigenvalues, finds the point where two eigenvalues are nearly equal. r = .4; s = .5; while s-r > 1.e-14 t = (r+s)/2; d = eig(A+t*E); if imag(d(1)) == 0, r = t; else, s = t; end end format long t t = 0.450380734135428 Finally, we display the perturbed matrix, which is obviously close to the original, and its pair of nearly equal eigenvalues. A+t*E eig(A+t*E) ans = 1.0e+02 * -0.639999790572959 0.819999581145917 0.210002303697455 1.439999747786789 -1.779999495573578 -0.460002774345322 -7.710000065305207 9.620000130610412 2.479999281642729 ans = 2.415743144226897 2.415738627741217 1.168517777651195 The first two eigenvectors of A + t*E are almost indistinguishable indicating that the perturbed matrix is almost defective. [X,D] = eig(A+t*E) format short cond(X) X = 0.094108719644215 -0.094108788388666 -0.070056238584537 -0.174780805492871 0.174780755585658 0.194872753681838 0.980099596427929 -0.980099598727050 -0.978323429806240 D = 2.415743144226897 0 0 0 2.415738627741217 0 0 0 1.168517777651195 ans = 3.9853e+08 Published with MATLAB® R2018b |
# Linux – Dual boot Arch and Windows 10 with GRUB arch linuxgrublinuxmulti-bootwindows I've been using Ubuntu for a while and recently decided to start using Arch. I have both a 120GB SSD and 1TB HDD in my system. When installing Arch on my SSD, I created partitions for /boot /home / (root) as well as a swap partition. I also have Windows 10 installed on my HDD using the default partitions. I would like to be able to dual boot between the 2 operating systems. I installed GRUB to my SSD on /dev/sda, but now when I boot into GRUB, I only see the option to boot into Arch, not Windows. I was wondering how I could boot into Windows via GRUB. I have a default "/etc/grub.d/40_custom" file: #!/bin/sh exec tail -n +3 $0 # This file provides an easy way to add custom menu entries . Simply type the # menu entries you want to add after this comment. Be care ful not to change # the 'exec tail' line above. When I run "lsblk", I get: NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 111.8G 0 disk ├─sda1 8:1 0 200M 0 part /boot ├─sda2 8:2 0 12G 0 part [SWAP] ├─sda3 8:3 0 25G 0 part / └─sda4 8:4 0 74.6G 0 part /home sdb 8:16 0 931.5G 0 disk ├─sdb1 8:17 0 499M 0 part ├─sdb2 8:18 0 100M 0 part ├─sdb3 8:19 0 16M 0 part └─sdb4 8:20 0 930.9G 0 part Running "fdisk -l" gives me: Disk /dev/sdb: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disklabel type: gpt Disk identifier: 47A5839B-C531-4BEE-A083-BD0C5CF4524A Device Start End Sectors Size Type /dev/sdb1 2048 1023999 1021952 499M Windows rec /dev/sdb2 1024000 1228799 204800 100M EFI System /dev/sdb3 1228800 1261567 32768 16M Microsoft r /dev/sdb4 1261568 1953523711 1952262144 930.9G Microsoft b Disk /dev/sda: 111.8 GiB, 120034123776 bytes, 234441648 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disklabel type: dos Disk identifier: 0x1c797ba1 Device Boot Start End Sectors Size Id Type /dev/sda1 2048 411647 409600 200M 83 Linux /dev/sda2 411648 25577471 25165824 12G 83 Linux /dev/sda3 25577472 78006271 52428800 25G 83 Linux /dev/sda4 78006272 234441647 156435376 74.6G 83 Linux Any help would be appreciated, thanks! #### Best Answer As described here, you need to do the following (The following must be done as root on your Arch OS): As I can assume from your Output /dev/sdb2 seems to be your Windows-Bootloader so the First step will be: $ mkdir /mnt/windows $mount /dev/sdb2 /mnt/windows$ grub-probe --target=fs_uuid /mnt/windows/EFI/Microsoft/Boot/bootmgfw.efi Copy the Output of the last Command to a File and proceed with this: $grub-probe --target=hints_string /mnt/windows/EFI/Microsoft/Boot/bootmgfw.efi Also copy the output to the File. After that run the following to unmount the Partition $ umount /mnt/windows $rmdir /mnt/windows After that open the File /boot/grub/custom.cfg with your preferred editor and add the Following Lines: if [ "${grub_platform}" == "efi" ]; then menuentry "Microsoft Windows Vista/7/8/8.1 UEFI/GPT" { insmod part_gpt insmod fat insmod search_fs_uuid insmod chain search --fs-uuid --set=root $hints_string$fs_uuid } fi Where $hints_string is the second Output and $fs_uuid is the first one. At least run this to update your Grub: \$ grub-mkconfig -o /boot/grub/grub.cfg
# Double integral of a Log (natural) ## Homework Statement I wish to find the following integral over the rectangle [-a,a] in u and [-b,b] in v using Mathematica. The constants a and b are positive (and non-zero). The variables x and y are in Reals. $$A(x,y)=\int_{-a}^a{\int_{-b}^b{\log{\left[(u-x)^2+(v-y)^2\right]}{\rm d}v}{\rm d}u}$$ see above ## The Attempt at a Solution I find using Mathematica a different integral depending if I integrate on u or v first. I use the following code: To integrate on v first: Code: Integrate[Log[(u - x)^2 + (v - y)^2], v] (% /. v -> b) - (% /. v -> -b) Integrate[%, u] (% /. u -> a) - (% /. u -> -a) To integrate on u first: Code: Integrate[Log[(u - x)^2 + (v - y)^2], u] (% /. u -> a) - (% /. u -> -a) Integrate[%, v] (% /. v -> b) - (% /. v -> -b) The difference between these two (definite) double integrals is: $$(x-y) (x+y) \left(\text{ArcTan}\left[\frac{a-x}{b-y}\right]+\text{ArcTan}\left[\frac{a+x}{b-y}\right]+\text{ArcTan}\left[\frac{b-y}{a-x}\right]+\text{ArcTan}\left[\frac{b-y}{a+x}\right]+\text{ArcTan}\left[\frac{a-x}{b+y}\right]+\text{ArcTan}\left[\frac{a+x}{b+y}\right]+\text{ArcTan}\left[\frac{b+y}{a-x}\right]+\text{ArcTan}\left[\frac{b+y}{a+x}\right]\right)$$ Could someone please tell me what I am doing wrong? You know, there is a relation $$\arctan(\frac{1}{x}) = \frac{\pi}{2} sgn(x) - \arctan(x)$$ By looking at your fractions, you will see that there is always one that is reciprocal to another. So I'm guessing that under certain conditions, you will get 0. I'm saying under certain conditions, because I suspect the integral to diverge for -a<y<a and -b<x<b and then you can't expect to get 0 anymore, but it's just a guessing, I could be wrong, you should apply the relation and see if and when you get 0. Thank you. Wouldn't it diverge for y=+ or - b and x=+ or - a? Why would it diverge for values in between? I think this should be defined up to the boundary and the term sgn(x) depends on which side of the boundary we are approaching the limit. Well, my idea was just that, since $$\log 0 = -\infty$$ the integral might diverge in case the argument of the logarithm becomes zero. And as long as $-a \leq y \leq a$ and $-b \leq x \leq b$, you have this zero in your logarithm. Anyhow, I'm not saying that it is definately divergent. If you want to be sure, you should do the integration manually. Cheers It seems to give 2*pi inside (−b<y<b and −a<x<a) and Indeterminate on the boundary. Outside the boundary, it gives 0. Does that mean that if I am interested in the domain (−b<y<b and −a<x<a), I can just add the difference between the 2 definite double integrals (which is 2*pi*(x−y)*(x+y))? But then, if I integrate the other double integral first, the difference between them is 2*pi*(y−x)*(y+x)... How can I make the definite double integral give the same result for (−b<y<b and −a<x<a) whether I integrate u or v first? Well, it certainly seems like there is something fishy if x,y are inside the boundary. Can you check if you get a finite result for x,y inside the boundary? The result of both integrals inside the boundary is finite. The result inside the boundary of the difference between them is 2*pi*(x−y)*(x+y) or 2*pi*(y−x)*(y+x), depending on which is integrated first. Can you post what you got for one of the integrals? \begin{align*}A(x,y,a,b)=-12 a b &+(a-x) (b-y) \text{Log}\left[(a-x)^2+(b-y)^2\right]\\ &+(a+x) (b-y) \text{Log}\left[(a+x)^2+(b-y)^2\right]\\ &+(a-x) (b+y) \text{Log}\left[(a-x)^2+(b+y)^2\right]\\ &+(a+x) (b+y) \text{Log}\left[(a+x)^2+(b+y)^2\right]\\ &+(a-x)^2 \left(-\text{ArcTan}\left[\frac{a-x}{b-y}\right]+\text{ArcTan}\left[\frac{-a+x}{b+y}\right]\right)\\ &+(a+x)^2 \left(-\text{ArcTan}\left[\frac{a+x}{b-y}\right]-\text{ArcTan}\left[\frac{a+x}{b+y}\right]\right)\\ &-(b-y)^2 \left(\text{ArcTan}\left[\frac{b-y}{a-x}\right]+\text{ArcTan}\left[\frac{b-y}{a+x}\right]\right)\\ &-(b+y)^2 \left(\text{ArcTan}\left[\frac{b+y}{a-x}\right]+\text{ArcTan}\left[\frac{b+y}{a+x}\right]\right)\\ &+x^2\left(-\text{ArcTan}\left[\frac{a-x}{b-y}\right]-\text{ArcTan}\left[\frac{a+x}{b-y}\right]-\text{ArcTan}\left[\frac{b-y}{a-x}\right]-\text{ArcTan}\left[\frac{b-y}{a+x}\right]+\text{ArcTan}\left[\frac{-a+x}{b+y}\right]-\text{ArcTan}\left[\frac{a+x}{b+y}\right]-\text{ArcTan}\left[\frac{b+y}{a-x}\right]-\text{ArcTan}\left[\frac{b+y}{a+x}\right]\right)\end{align*} If I integrate with respect to the other variable first, the last term is multiplied by y^2 instead of x^2.
Article | Open | Published: # Quantum wave mixing and visualisation of coherent and superposed photonic states in a waveguide ## Abstract Superconducting quantum systems (artificial atoms) have been recently successfully used to demonstrate on-chip effects of quantum optics with single atoms in the microwave range. In particular, a well-known effect of four wave mixing could reveal a series of features beyond classical physics, when a non-linear medium is scaled down to a single quantum scatterer. Here we demonstrate the phenomenon of quantum wave mixing (QWM) on a single superconducting artificial atom. In the QWM, the spectrum of elastically scattered radiation is a direct map of the interacting superposed and coherent photonic states. Moreover, the artificial atom visualises photon-state statistics, distinguishing coherent, one- and two-photon superposed states with the finite (quantised) number of peaks in the quantum regime. Our results may give a new insight into nonlinear quantum effects in microwave optics with artificial atoms. ## Introduction In systems with superconducting quantum circuits—artificial atoms—strongly coupled to harmonic oscillators, many amazing phenomena of on-chip quantum optics have been recently demonstrated establishing the direction of circuit quantum electrodynamics1,2,3, particularly, in such systems one is able to resolve photon number states in harmonic oscillators4, manipulate with individual photons5,6,7, generate photon (Fock) states8 and arbitrary quantum states of light9, demonstrate the lasing effect from a single artificial atom10, study nonlinear effects11, 12. The artificial atoms can also be coupled to open space13(microwave transmission lines) and also reveal many interesting effects such as resonance fluorescence of continuous waves14, 15, elastic and inelastic scattering of single-frequency electromagnetic waves16, 17, amplification18, single-photon reflection and routing19, non-reciprocal transport of microwaves20, coupling of distant artificial atoms by exchanging virtual photons21, superradiance of coupled artificial atoms22. All these effects require strong coupling to propagating waves and therefore are hard to demonstrate in quantum optics with natural atoms due to low-spatial mode matching of propagating light. In our work, we focus on the effect of wave mixing. Particularly, the four wave mixing is a textbook optical effect manifesting itself in a pair of frequency side peaks from two driving tones on a classical Kerr-nonlinearity23, 24. Ultimate scaling down of the nonlinear medium to a single artificial atom, strongly interacting with the incident waves, results in time resolution of instant multi-photon interactions and reveals effects beyond classical physics. Here, we demonstrate the physical phenomenon of quantum wave mixing (QWM) on a superconducting artificial atom in the open one-dimensional (1D) space (coplanar transmission line on-chip). We show two regimes of QWM comprising different degrees of ‘quantumness’: the first and most remarkable one is QWM with nonclassical superposed states, which are mapped into a finite number of frequency peaks. In another regime, we investigate the different orders of wave mixing of classical coherent waves on the artificial atom. The dynamics of the peaks exhibits a series of Bessel-function Rabi oscillations, different from the usually observed harmonic ones, with orders determined by the number of interacting photons. Therefore, the device utilising QWM visualises photon-state statistics of classical and non-classical photonic states in the open space. The spectra are fingerprints of interacting photonic states, where the number of peaks due to the atomic emission always exceeds by one the number of absorption peaks. Below, we summarise several specific findings of this work: (1) demonstration of the wave mixing on a single quantum system; (2) in the quantum regime of mixing, the peak pattern and the number of the observed peaks is a map of coherent and superposed photonic states, where the number of peaks N peaks is related to the number of interacting photons N ph as N peaks = 2N ph + 1. Namely, the one-photon state (in two-level atoms) results in precisely three emission peaks; the two-photon state (in three-level atoms) results in five emission peaks; and the classical coherent states, consisting of infinite number of photons, produce a spectrum with an infinite number of peaks; (3) Bessel function Rabi oscillations are observed and the order of the Bessel functions depends on the peak position and is determined by the number of interacting photons. ## Results ### Coherent and zero-one photon superposed state To evaluate the system, we consider electromagnetic waves propagating in a 1D transmission line with an embedded two-level artificial atom15 (see also Supplementary Methods, Supplementary Fig. 1) shown in Fig. 1a. In this work, we are interested in photon statistics, which will be revealed by QWM, therefore, we will consider our system in the photon basis. The coherent wave in the photon (Fock) basis$$\left| N \right\rangle$$ is presented as $$\left| \alpha \right\rangle = {e^{ - \frac{{{{\left| \alpha \right|}^2}}}{2}}}\left( {\left| 0 \right\rangle + \alpha \left| 1 \right\rangle + \frac{{{\alpha ^2}}}{{\sqrt {2!} }}\left| 2 \right\rangle + \frac{{{\alpha ^3}}}{{\sqrt {3!} }}\left| 3 \right\rangle + \ldots } \right)$$ (1) and consists of an infinite number of photonic states. A two-level atom with ground and excited states $$\left| g \right\rangle$$ and $$\left| e \right\rangle$$ driven by the field can be prepared in superposed state $$\Psi = {\rm{cos}}\frac{\theta }{2}\left| g \right\rangle + {\rm{sin}}\frac{\theta }{2}\left| e \right\rangle$$ and, if coupled to the external photonic modes, transfers the excitation to the mode, creating zero-one photon superposed state $$\left| \beta \right\rangle = \left| {{\rm{cos}}\frac{\theta }{2}} \right|\left( {\left| 0 \right\rangle + \beta \left| 1 \right\rangle } \right),$$ (2) where $$\beta = {\rm{tan}}\frac{\theta }{2}$$ (Supplementary Note 1). The superposed state comprises coherence, however $$\left| \beta \right\rangle$$ state is different from classical coherent state $$\left| \alpha \right\rangle$$, consisting of an infinite number of Fock states. The energy exchange process is described by the operator $${b^ - }{b^ + }\left| g \right\rangle \left\langle g \right| + {b^ + }\left| g \right\rangle \left\langle e \right|$$, which maps the atomic to photonic states, where $${b^ + } = \left| 1 \right\rangle \left\langle 0 \right|$$ and $${b^ - } = \left| 0 \right\rangle \left\langle 1 \right|$$ are creation/annihilation operators of the zero-one photon state. The operator is a result of a half-period oscillation in the evolution of the atom coupled to the quantised photonic mode and we keep only relevant for the discussed case (an excited atom and an empty photonic mode) terms (Supplementary Note 1). We discuss and demonstrate experimentally an elastic scattering of two waves with frequencies ω  = ω 0 − δω and ω + = ω 0 + δω, where δω is a small detuning, on a two-level artificial atom with energy splitting $$\hbar {\omega _0}$$. The scattering, taking place on a single artificial atom, allows us to resolve instant multi-photon interactions and statistics of the processes. Dealing with the final photonic states, the system Hamiltonian is convenient to present as the one, which couples the input and output fields $$H = i\hbar g\left( {b_ - ^ + {a_ - } - b_ - ^ - a_ - ^\dag + b_ + ^ + {a_ + } - b_ + ^ - a_ + ^\dag } \right),$$ (3) using creation and annihilation operators $$a_ \pm ^\dag$$ (a ±) of photon states $${\left| N \right\rangle _ \pm }$$ (N is an integer number) and $$b_ \pm ^ +$$ and $$b_ \pm ^ -$$ are creation/annihilation operators of single-photon output states at frequencies ω ±. Here $$\hbar g$$ is the field-atom coupling energy. Operators $$b_ \pm ^ +$$ and $$b_ \pm ^ -$$ also describe the atomic excitation/relaxation, using substitutions $$b_ \pm ^ + \leftrightarrow {e^{ \mp i\varphi }}\left| e \right\rangle \left\langle g \right|$$ and $$b_ \pm ^ - \leftrightarrow {e^{ \pm i\varphi }}\left| g \right\rangle \left\langle e \right|$$, where φ = δωt is a slowly varying phase (Supplementary Note 2). The phase rotation results in the frequency shift according to ω ± t = ω 0 t ± δωt and more generally for $$b_m^ \pm$$ (with integer m) the varied phase mδφ results in the frequency shift ω m  = ω 0 + mδω. The system evolution over the time interval [t, t′] (t′ = t + Δt and $$\delta \omega \Delta t \ll 1$$) described by the operator U(t, t′) = exp(−iHΔt/$$\hbar$$) can be presented as a series expansion of different order atom–photon interaction processes $$a_ \pm ^\dag b_ \pm ^ -$$ and $${a_ \pm }b_ \pm ^ +$$—sequential absorption-emission accompanied by atomic excitations/relaxations (Supplementary Note 2). Operators b describe the atomic states (instant interaction of the photons in the atom) and, therefore, satisfy the following identities: $$b_p^ - b_m^ + = {\left| 0 \right\rangle _{m - p}}\left\langle 0 \right|$$, $$b_j^ \pm b_p^ \mp b_m^ \pm = b_{j - p + m}^ \pm$$, $$b_p^ \pm b_m^ \pm = 0$$. The excited atom eventually relaxes producing zero-one superposied photon field $${\left| \beta \right\rangle _m}$$ at frequency ω m  = ω 0 + mδω according to $$b_m^ + \left| 0 \right\rangle = {\left| 1 \right\rangle _m}$$. We repeat the evolution and average the emission on the time interval t>δω −1 and observe narrow emission lines. In the general case, the atom in a superposed state generates coherent electromagnetic waves of amplitude $${V_m} = - \frac{{\hbar \Gamma _1}}{\mu }\left\langle {b_m^ + } \right\rangle$$ (4) at frequency ω m , where Γ 1 is the atomic relaxation rate and μ is the atomic dipole moment15, 17. ### Elastic scattering and Bessel function Rabi oscillations To study QWM, we couple the single artificial atom (a superconducting loop with four Josephson junctions) to a transmission line via a capacitance (Supplementary Methods). The atom relaxes with the photon emission rate found to be Γ 1/2π ≈ 20 MHz. The coupling is strong, which means that any non-radiative atom relaxation is suppressed and almost all photons from the atom are emitted into the line. The sample is held in a dilution refrigerator with base temperature 15 mK. We apply periodically two simultaneous microwave pulses with equal amplitudes at frequencies ω and ω +, length Δt = 2 ns and period T r = 100 ns (much longer than the atomic relaxation time $$\Gamma _1^{ - 1} \approx 8$$ ns). A typical emission power spectrum integrated over many periods (bandwidth is 1 kHz) is shown in Fig. 2a. The pattern is symmetric with many narrow peaks (as narrow as the excitation microwaves), which appeared at frequencies ω 0 ± (2k + 1)δω, where k ≥ 0 is an integer number. We linearly change driving amplitude (Rabi frequency) Ω, which is defined from the measurement of harmonic Rabi oscillations under single-frequency excitation. The dynamics of several side peaks versus linearly changed ΩΔt (here we vary Ω, however, equivalently Δt can be varied) is shown on plots of Fig. 2b. Note that the peaks exhibit anharmonic oscillations well fitted by the corresponding 2k + 1-order Bessel functions of the first kind. The first maxima are delayed with the peak order, appearing at ΩΔt k + 1. Note also that detuning δω should be within tens of megahertz (≤Γ1). However, in this work, we use δω/2π = 10 kHz to be able to quickly span over several δω of the spectrum analyser (SA) with the narrow bandwidth. Figure 1b examplifies the third-order process (known as the four-wave mixing in the case of two side peaks), resulting in the creation of the right hand-side peak at ω 3 = 2ω + − ω . The process consists of the absorption of two photons of frequency ω + and the emission of one photon at ω . More generally, the 2k + 1-order peak at frequency ω 2k+1 = (k + 1)ω + −  (≡ω 0 + (2k + 1)δω) is described by the multi-photon process $${({a_ + }a_ - ^\dag )^k}{a_ + }b_{2k + 1}^ +$$, which involves the absorption of k + 1 photons from ω + and the emission of k photons at ω ; and the excited atom eventually generates a photon at ω 2k+1. The symmetric left hand-side peaks at ω 0 − (2k + 1)δω are described by a similar processes with swapped indexes (+ ↔ −). The peak amplitudes from Eq. (4) are described by expectation values of b-operators, which at frequency ω 2k+1 can be written in the form of $$\left\langle {b_{2k + 1}^ + } \right\rangle = {D_{2k + 1}}\langle {{{( {{a_ + }a_ - ^\dag } )}^k}{a_ + }} \rangle$$. The prefactor D 2k+1 depends on the driving conditions and can be calculated summing up all virtual photon processes (e.g., $$a_ + ^\dag {a_ + }$$, $$a_ - ^\dag {a_ - }$$, etc.) not changing frequencies (Supplementary Note 2). For instance, the creation of a photon at 2ω + − ω is described by $$\left\langle {b_3^ + } \right\rangle = {D_3}\left\langle {{a_ + }a_ - ^\dag {a_ + }} \right\rangle$$. As the number of required photons increases with k, the emission maximum takes longer time to appear (Fig. 2b). To derive the dependence observed in our experiment, we consider the case with initial state $$\Psi = \left| 0 \right\rangle \otimes \left( {{{\left| \alpha \right\rangle }_ - } + {{\left| \alpha \right\rangle }_ + }} \right)$$ and $$\alpha \gg 1$$. We find then that the peaks exhibit Rabi oscillations described by $$\left\langle {{b_{2k + 1}}} \right\rangle = {\left( { - 1} \right)^k}/2 \times {J_{2k + 1}}\left( {2 \Omega \Delta t} \right)$$ (Supplementary Note 2, Eq. (29)) and the mean number of generated photons per cycle in 2k + 1-mode is $$\left\langle {{N_{ \pm \left( {2k + 1} \right)}}} \right\rangle = \frac{{J_{ \pm \left( {2k + 1} \right)}^2\left( {2 \Omega \Delta t} \right)}}{4}.$$ (5) The symmetric multi-peak pattern in the spectrum is a map of an infinite number of interacting classical coherent states. The dependence from the parameter 2ΩΔt observed in our experiment can also be derived using a semiclassical approach, where the driving field is given by Ω e iδωt + Ω e iδωt = 2Ωcos δωt. As shown in Supplementary Note 2, a classical description can be mathematically more straightforward and leads to the same result, but fails to provide a qualitative picture of QWM discussed below. The Bessel function dependencies have been earlier observed in multi-photon processes, however in frequency domain25,26,27. ### QWM and dynamics of non-classical photonic states Next, we demonstrate one of the most interesting results: QWM with non-classical photonic states. We further develop the two-pulse technique separating the excitation pulses in time. Breaking time-symmetry in the evolution of the quantum system should result in asymmetric spectra and the observation of series of spectacular quantum phenomena. The upper panel in Fig. 3a demonstrates such a spectrum, when the pulse at frequency ω + is applied after a pulse at ω . Notably, the spectrum is asymmetric and contains only one side peak at frequency 2ω + − ω . There is no any signature of other peaks, which is in striking contrast with Fig. 2a. Reversing the pulse sequence mirror reflects the pattern revealing the single side peak at 2ω  − ω + (not shown here). The quantitative explanation of the process is provided on the left panel of Fig. 1c. The first pulse prepares superposed zero-one photon state $${\left| \beta \right\rangle _ - }$$ in the atom, which contains not more than one photon (N ph = 1). Therefore, only a single-positive side peak 2ω + − ω due to the emission of the ω -photon, described by $${a_ + }a_ - ^\dag {a_ + }$$, is allowed. See Supplementary Note 3 for details. To prove that there are no signatures of other peaks, except for the observed three peaks, we vary the peak amplitudes and compare the classical and QWM regimes with the same conditions. Figure 3b demonstrates the side peak power dependencies in different mixing regimes: classical (two simultaneous pulses) (left panels) and quantum (two consecutive pulses) (right panels). The two cases reveal a very similar behaviour of the right hand-side four-wave mixing peak at 2ω + − ω , however the other peaks appear only in the classical wave mixing, proving the absence of other peaks in the mixing with the quantum state. The asymmetry of the output mixed signals, in principle, can be demonstrated in purely classical systems. It can be achieved in several ways, e.g., with destructive interference, phase-sensitive detection/amplification28, filtering. All these effects are not applicable to our system of two mixed waves on a single point-like scatterer in the open (wide frequency band) space. What is more important than the asymmetry is that the whole pattern consists of only three peaks without any signature of others. This demonstrates another remarkable property of our device: it probes photonic states, distinguishing the coherent, $$\left| \alpha \right\rangle$$, and superposed states with the finite number of the photon states. Moreover, the single peak at ω 3 shows that the probed state was $$\left| \beta \right\rangle$$ with N ph = 1. This statement can be generalised for an arbitrary state. According to the picture in Fig. 1c, adding a photon increases the number of peaks from the left- and right-hand side by one, resulting in the total number of peaks N peaks = 2N ph + 1. ### Probing the two-photon superposed state To have a deeper insight into the state-sensing properties and to demonstrate QWM with different photon statistics, we extended our experiment to deal with two-photon states (N ph = 2). The two lowest transitions in our system can be tuned by adjusting external magnetic fields to be equal to $$\hbar$$ ω 0, though higher transitions are off-resonant ($$\ne \hbar {\omega _0}$$, See Supplementary Fig. 2). In the three-level atom, the microwave pulse at ω creates the superposed two-photon state $${\left| \gamma \right\rangle _ - } = C\left( {{{\left| 0 \right\rangle }_ - } + {\gamma _1}{{\left| 1 \right\rangle }_ - } + {\gamma _2}{{\left| 2 \right\rangle }_ - }} \right),$$ (6) where $$C = \sqrt {1 + {{\left| {{\gamma _1}} \right|}^2} + {{\left| {{\gamma _2}} \right|}^2}}$$. The plot in Fig. 4 shows the modified spectrum. As expected, the spectrum reveals only peaks at frequencies consisting of one or two photons of ω . The frequencies are ω 3 = 2ω + − ω , ω −3 = 2ω  − ω +, and ω 5 = 3ω + − 2ω corresponding, for instance, to processes $${a_ + }a_ - ^\dag {a_ + }c_3^ +$$, $${a_ - }{a_ - }a_ + ^\dag c_{ - 3}^ +$$ and $${a_ + }a_ - ^\dag a_ - ^\dag {a_ + }{a_ + }c_5^ +$$, where $$c_m^ +$$ and $$c_m^ -$$ are creation and annihilation operators defined on the two-photon space ($$\left| n \right\rangle$$, where n takes 0, 1 or 2). The intuitive picture of the two-photon state mixing is shown on the central and right-hand side panels of Fig. 1c. The two photon state (N ph = 2) results in the five peaks. This additionally confirms that the atom resolves the two-photon state. See Supplementary Note 4 for the details. The QWM can be also understood as a transformation of the quantum states into quantised frequencies similar to the Fourier transformation. The summarised two-dimensional plots with N ph are presented in Fig. 5. The mixing with quantum states is particularly revealed in the asymmetry. Note that for arbitrary N ph coherent states, the spectrum asymmetry will remain, giving N ph and N ph−1 peaks at the emission and absorption sides. According to our understanding, QWM has not been demonstrated in systems other than superconducting quantum ones due to the following reasons. First, the effect requires a single quantum system because individual interaction processes have to be separated in time29 and it will be washed out in multiple scattering on an atomic ensemble in matter. Next, although photon counters easily detect single photons, in the visible optical range, it might be more difficult to detect amplitudes and phases of weak power waves30, 31. On the other hand, microwave techniques allow one to amplify and measure weak coherent emission from a single quantum system17, 32, due to strong coupling of the single artificial atom; the confinement of the radiation in the transmission line; and due to an extremely high phase stability of microwave sources. The radiation can be selectively detected by either SAs or vector network analysers with narrow frequency bandwidths, efficiently rejecting the background noise. In summary, we have demonstrated QWM—an interesting phenomenon of quantum optics. We explore different regimes of QWM and prove that the superposed and coherent states of light are mapped into a quantised spectrum of narrow peaks. The number of peaks is determined by the number of interacting photons. QWM could serve as a powerful tool for building new types of on-chip quantum electronics. ### Data availability Relevant data is available from A.Yu.D. upon request. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. ## References 1. 1. Clarke, J. & Wilhelm, F. K. Superconducting quantum bits. Nature 453, 1031–1042 (2008). 2. 2. Wallraff, A. et al. Strong coupling of a single photon to a superconducting qubit using circuit quantum electrodynamics. Nature 431, 162–167 (2004). 3. 3. You, J. Q. & Nori, F. Atomic physics and quantum optics using superconducting circuits. Nature 474, 589597 (2011). 4. 4. Schuster, D. I. et al. Resolving photon number states in a superconducting circuit. Nature 445, 515–518 (2007). 5. 5. Peng, Z., De Graaf, S., Tsai, J. & Astafiev, O. Tuneable on-demand single-photon source in the microwave range. Nat. Commun. 7, 12588 (2016). 6. 6. Houck, A. A. et al. Generating single microwave photons in a circuit. Nature 449, 328–331 (2007). 7. 7. Lang, C. et al. Correlations, indistinguishability and entanglement in Hong-Ou-Mandel experiments at microwave frequencies. Nat. Phys. 9, 345–348 (2013). 8. 8. Hofheinz, M. et al. Generation of Fock states in a superconducting quantum circuit. Nature 454, 310–314 (2008). 9. 9. Hofheinz, M. et al. Synthesizing arbitrary quantum states in a superconducting resonator. Nature 459, 546–549 (2009). 10. 10. Astafiev, O. et al. Single artificial-atom lasing. Nature 449, 588–590 (2007). 11. 11. Hoi, I.-C. et al. Giant cross-Kerr effect for propagating microwaves induced by an artificial atom. Phys. Rev. Lett. 111, 053601 (2013). 12. 12. Kirchmair, G. et al. Observation of quantum state collapse and revival due to the single-photon Kerr effect. Nature 495, 205 (2013). 13. 13. Roy, D., Wilson, C. M. & Firstenberg, O. Colloquium: strongly interacting photons in one-dimensional continuum. Rev. Mod. Phys. 89, 021001 (2017). 14. 14. Hoi, I.-C. et al. Microwave quantum optics with an artifficial atom in one-dimensional open space. New J. Phys. 15, 025011 (2013). 15. 15. Astafiev, O. et al. Resonance fluorescence of a single artificial atom. Science 327, 840–843 (2010). 16. 16. Toyli, D. M. et al. Resonance fluorescence from an artificial atom in squeezed vacuum. Phys. Rev. X 6, 031004 (2016). 17. 17. Abdumalikov, A. A. Jr, Astafiev, O. V., Pashkin, Y. A., Nakamura, Y. & Tsai, J. Dynamics of coherent and incoherent emission from an artificial atom in a 1D space. Phys. Rev. Lett. 107, 043604 (2011). 18. 18. Astafiev, O. V. et al. Ultimate on-chip quantum amplifier. Phys. Rev. Lett. 104, 183603 (2010). 19. 19. Hoi, I.-C. et al. Demonstration of a singlephoton router in the microwave regime. Phys. Rev. Lett. 107, 073601 (2011). 20. 20. Fang, Y.-L. L. & Baranger, H. U. Multiple emitters in a waveguide: nonreciprocity and correlated photons at perfect elastic transmission. Phys. Rev. A 96, 013842 (2017). 21. 21. van Loo, A. F. et al. Photon-mediated interactions between distant artificial atoms. Science 342, 1494–1496 (2013). 22. 22. Mlynek, J., Abdumalikov, A. A., Eichler, C. & Wallraff, A. Observation of Dicke superradiance for two artificial atoms in a cavity with high decay rate. Nat. Commun. 5, 5186 (2014). 23. 23. Boyd, R. W. Nonlinear Optics (Academic press, New York, 2003). 24. 24. Scully, M. O. & Zubairy, M. Quantum Optics (Cambridge University Press, Cambridge, 1997). 25. 25. Oliver, W. D. et al. Mach-Zehnder interferometry in a strongly driven superconducting qubit. Science 310, 1653–1657 (2005). 26. 26. Sillanpää, M., Lehtinen, T., Paila, A., Makhlin, Y. & Hakonen, P. Continuous-time monitoring of Landau-Zener interference in a cooper-pair box. Phys. Rev. Lett. 96, 187002 (2006). 27. 27. Neilinger, P. et al. Landau-Zener-Stückelberg-Majorana lasing in circuit quantum electrodynamics. Phys. Rev. B 94, 094519 (2016). 28. 28. Schackert, F., Roy, A., Hatridge, M., Devoret, M. H. & Stone, A. D. Three-wave mixing with three incoming waves: signal-idler coherent attenuation and gain enhancement in a parametric amplifier. Phys. Rev. Lett. 111, 073903 (2013). 29. 29. Maser, A., Gmeiner, B., Utikal, T., Götzinger, S. & Sandoghdar, V. Few-photon coherent nonlinear optics with a single molecule. Nat. Photon. 10, 450–453 (2016). 30. 30. Lvovsky, A. I. & Raymer, M. G. Continuous-variable optical quantum-state tomography. Rev. Mod. Phys. 81, 299–332 (2009). 31. 31. Ip, E., Lau, A. P. T., Barros, D. J. & Kahn, J. M. Coherent detection in optical fiber systems. Opt. Express 16, 753–791 (2008). 32. 32. Shen, J.-T. & Fan, S. Coherent single photon transport in a one-dimensional waveguide coupled with superconducting quantum bits. Phys. Rev. Lett. 95, 213001 (2005). ## Acknowledgements We acknowledge Russian Science Foundation (grant N 16-12-00070) for supporting the work. We thank A. Semenov and E. Ilichev for useful discussions. ## Author information O.V.A. planned and designed the experiment, R.S., A.Yu.D. and T.H.-D. fabricated the sample and built the set-up for measurements. A.Yu.D., R.S. and T.H.-D. measured the raw data. A.Yu.D., V.N.A. and O.V.A. made calculations, analysed and processed the data and wrote the manuscript, with important contributions from all the authors. ### Competing interests The authors declare no competing financial interests. Correspondence to A. Yu. Dmitriev or O. V. Astafiev.
Home > Absolute Error > Absolute Error Calculations # Absolute Error Calculations ## Contents Solution: Given: The measured value of metal ball xo = 3.14 The true value of ball x = 3.142 Absolute error $\Delta$ x = True value - Measured value = A measuring instrument shows the length to be 508 feet. The Relative Error is the Absolute Error divided by the actual measurement. Learn how. this contact form Absolute and relative errors The absolute error in a measured quantity is the uncertainty in the quantity and has the same units as the quantity itself. In plain English: 4. Examples: 1. Create an account EXPLORE Community DashboardRandom ArticleAbout UsCategoriesRecent Changes HELP US Write an ArticleRequest a New ArticleAnswer a RequestMore Ideas... ## How To Calculate Absolute Error In Chemistry Such fluctuations are the main reason why, no matter how skilled the player, no individual can toss a basketball from the free throw line through the hoop each and every time, So you might see that the measurement of a building is 357±.5ft{\displaystyle 357\pm .5ft}. For the mass we should divide 1 kg by 20 kg and get 0.05. So, you would use 360 as the actual value:Δx=x0−360{\displaystyle \Delta x=x_{0}-360}. 3 Find the measured value. The formula is δx=x0−xx{\displaystyle \delta x={\frac {x_{0}-x}{x}}}, where δx{\displaystyle \delta x} equals the relative error (the ratio of the absolute error to the actual value), x0{\displaystyle x_{0}} equals the measured value, Looking at the measuring device from a left or right angle will give an incorrect value. 3. How To Calculate Absolute Error And Percent Error Once you understand the difference between Absolute and Relative Error, there is really no reason to do everything all by itself. But, if you are measuring a small machine part (< 3cm), an absolute error of 1 cm is very significant. How To Calculate Absolute Error In Excel Ways to Improve Accuracy in Measurement 1. For example, if you were to measure the period of a pendulum many times with a stop watch, you would find that your measurements were not always the same. If you are measuring a 200 foot boat, and miss the measurement by 2 feet, your percentage error will be much lower than missing the 20 foot tree measurement by 2 The difference between two measurements is called a variation in the measurements. How To Calculate Absolute Error And Relative Error This is from bad measurements, faulty premises, or mistakes in the lab. Basically, this is the most precise, common measurement to come up with, usually for common equations or reactions. Moreover, you should be able to convert one way of writing into another. ## How To Calculate Absolute Error In Excel An expected value is usually found on tests and school labs. Tolerance intervals: Error in measurement may be represented by a tolerance interval (margin of error). How To Calculate Absolute Error In Chemistry For example 5.00 has 3 significant figures; the number 0.0005 has only one significant figure, and 1.0005 has 5 significant figures. How To Calculate Absolute Error In Physics which is the absolute error? If you tried to measure something that was 12 inches long and your measurement was off by 6 inches, the relative error would be very large. http://neoxfiles.com/absolute-error/absolute-error-mean.php Method 2 Using the Actual Value and Relative Error 1 Set up the formula for relative error. EditRelated wikiHows How to Factor a Cubic Polynomial How to Find the Maximum or Minimum Value of a Quadratic Function Easily How to Do a Short Goalseeking Neutral Operations Problem in Sometimes the quantity you measure is well defined but is subject to inherent random fluctuations. How To Calculate Absolute Error In Statistics wikiHow relies on ad money to give you our free how-to guides. What if some of the experimental values are negative? For example, if you know a length is 3.535 m + 0.004 m, then 0.004 m is an absolute error. navigate here Substitute this value for x{\displaystyle x}. Absolute errors do not always give an indication of how important the error may be. Calculate Absolute Deviation Avoid the error called "parallax" -- always take readings by looking straight down (or ahead) at the measuring device. In plain English: The absolute error is the difference between the measured value and the actual value. (The absolute error will have the same unit label as the measured quantity.) Relative ## When the accepted or true measurement is known, the relative error is found using which is considered to be a measure of accuracy. The general formula, for your information, is the following; It is discussed in detail in many texts on the theory of errors and the analysis of experimental data. Measuring instruments are not exact! For example if you know a length is 0.428 m ± 0.002 m, the 0.002 m is an absolute error. Absolute Error Formula Chemistry This would be a conservative assumption, but it overestimates the uncertainty in the result. This works for any measurement system. The greatest possible error when measuring is considered to be one half of that measuring unit. No ... his comment is here When the accepted or true measurement is known, the relative error is found using which is considered to be a measure of accuracy. Your last reading for the dog's mass M, with absolute error included, is Which measurement is more precise? Then find the absolute deviation using formulaAbsolute deviation $\Delta$ x = True value - measured value = x - xoThen substitute the absolute deviation value $\Delta$ x in relative error formula Absolute, Relative and Percentage Error The Absolute Error is the difference between the actual and measured value But ... To determine the tolerance interval in a measurement, add and subtract one-half of the precision of the measuring instrument to the measurement. Get the best of About Education in your inbox. Example: Sam measured the box to the nearest 2 cm, and got 24 cm × 24 cm × 20 cm Measuring to the nearest 2 cm means the true value could Chemistry Expert Share Pin Tweet Submit Stumble Post Share By Anne Marie Helmenstine, Ph.D. To determine the measuring unit, just look at what place value the measurement is rounded to.[7] For example, if the measured length of a building is stated as 357 feet, you About Today Living Healthy Chemistry You might also enjoy: Health Tip of the Day Recipe of the Day Sign up There was an error. The error comes from the measurement inaccuracy or the approximation used instead of the real data, for example use 3.14 instead of π. When weighed on a defective scale, he weighed 38 pounds. (a) What is the percent of error in measurement of the defective scale to the nearest tenth? (b) If Millie, the Please try again. Paper Boat Creative, Getty Images By Anne Marie Helmenstine, Ph.D. The absolute error is 1 mm. For example:.025=x0−360360{\displaystyle .025={\frac {x_{0}-360}{360}}}.025×360=x0−360360×360{\displaystyle .025\times 360={\frac {x_{0}-360}{360}}\times 360}9=x0−360{\displaystyle 9=x_{0}-360} 5 Add the actual value to each side of the equation. The rule is: If the zero has a non-zero digit anywhere to its left, then the zero is significant, otherwise it is not.
Searching.... # B.2.1 Overview (February 4, 2022) Prior to the 1950s, the most difficult step in the systematic application of Schrödinger wave mechanics to chemistry was the calculation of the notorious two-electron integrals that measure the repulsion between electrons. Boys126 showed that this step can be made easier (although still time consuming) if Gaussian, rather than Slater, orbitals are used in the basis set. Following the landmark paper of computational chemistry125 (again due to Boys) programs were constructed that could calculate all the ERIs that arise in the treatment of a general polyatomic molecule with $s$ and $p$ orbitals. However, the programs were painfully slow and could only be applied to the smallest of molecular systems. In 1969, Pople constructed a breakthrough ERI algorithm, a hundred time faster than its predecessors. The algorithm remains the fastest available for its associated integral classes and is now referred to as the Pople-Hehre axis-switch method.916 Over the two decades following Pople’s initial development, an enormous amount of research effort into the construction of ERIs was documented, which built on Pople’s original success. Essentially, the advances of the newer algorithms could be identified as either better coping with angular momentum ($L$) or, contraction ($K$); each new method increasing the speed and application of quantum mechanics to solving real chemical problems. By 1990, another barrier had been reached. The contemporary programs had become sophisticated and both academia and industry had begun to recognize and use the power of ab initio quantum chemistry, but the software was struggling with “dusty deck syndrome” and it had become increasingly difficult for it to keep up with the rapid advances in hardware development. Vector processors, parallel architectures and the advent of the graphical user interface were all demanding radically different approaches to programming and it had become clear that a fresh start, with a clean slate, was both inevitable and desirable. Furthermore, the integral bottleneck had re-emerged in a new guise and the standard programs were now hitting the $N^{2}$ wall. Irrespective of the speed at which ERIs could be computed, the unforgiving fact remained that the number of ERIs required scaled quadratically with the size of the system. The Q-Chem project was established to tackle this problem and to seek new methods that circumvent the $N^{2}$ wall. Fundamentally new approaches to integral theory were sought and the ongoing advances that have resulted1210, 23, 289, 193, 1009 have now placed Q-Chem firmly at the vanguard of the field. It should be emphasized, however, that the ${\cal{O}}({N})$ methods that we have developed still require short-range ERIs to treat interactions between nearby electrons, thus the importance of contemporary ERI code remains. The chronological development and evolution of integral methods can be summarized by considering a time line showing the years in which important new algorithms were first introduced. These are best discussed in terms of the type of ERI or matrix elements that the algorithm can compute efficiently. 1950 Boys 126 ERIs with low $L$ and low $K$ 1969 Pople 916 ERIs with low $L$ and high $K$ 1976 Dupuis 297 Integrals with any $L$ and low $K$ 1978 McMurchie 773 Integrals with any $L$ and low $K$ 1982 Almlöf 36 Introduction of the direct SCF approach 1986 Obara 826 Integrals with any $L$ and low $K$ 1988 Head-Gordon 439 Integrals with any $L$ and low $K$ 1991 Gill 363, 368 Integrals with any $L$ and any $K$ 1994 White 1210 J matrix in linear work 1996 Schwegler 1009, 1010 HF exchange matrix in linear work 1997 Challacombe 193 Fock matrix in linear work
# Math Help - Closed, Bounded but not Compact. 1. ## Closed, Bounded but not Compact. The set of rationals Q forms a metric spce by $d(p,q) = | p - q |$ Then a subset E of Q is defined by $E = \{ p \in Q : 2 < p^2 < 3 \}$ So I am trying to show that E is closed and bounded, but not compact. To me, it is clear than E is bounded (by 2 and 3?!). I am having trouble showing E is closed. I believe 2 and 3 are limit points of E but E does not contain them, so E is not closed? also, E does not contain e = 2.718281828.... but that is also a limit point of E, is it not? Maybe I am confused with my definition of limit point.... are any of these (2, 3, e) actually limit points of E? I think I can show E is not compact, since the open cover of E, $\bigcup\ (2 + \frac{1}{n}, 3 - \frac{1}{n})$ for n = 2, 3, 4, .... has no finite subcover? I am worried I am perhaps misunderstanding some definitions... Any help or direction would be greatly appriciated. Thank you in advanced for you time! 2. Do you understand that $E = \left( {\sqrt 2 ,\sqrt 3 } \right) \cap \mathbb{Q}~?$ Consider this cover $O_n = \left( {\sqrt 2 + \frac{{\sqrt 3 - \sqrt 2 }}{{3n}},\sqrt 3 - \frac{{\sqrt 3 - \sqrt 2 }}{{3n}}} \right)$ 3. Wow, thank you for pointing that out. I somehow goofed on what the set E was! However, even in this new light, E does not appear to be closed... $\sqrt 2$ is not in E, but it is a limit point of E? Thanks! 4. You are still missing the point $\sqrt2\notin \mathbb{Q}$. Does the set contain all of its limit points in $\mathbb{Q}~?$ 5. Originally Posted by matt.qmar The set of rationals Q forms a metric spce by $d(p,q) = | p - q |$ Then a subset E of Q is defined by $E = \{ p \in Q : 2 < p^2 < 3 \}$ So I am trying to show that E is closed and bounded, but not compact. To me, it is clear than E is bounded (by 2 and 3?!). I am having trouble showing E is closed. I believe 2 and 3 are limit points of E but E does not contain them, so E is not closed? also, E does not contain e = 2.718281828.... but that is also a limit point of E, is it not? Maybe I am confused with my definition of limit point.... are any of these (2, 3, e) actually limit points of E? I think I can show E is not compact, since the open cover of E, $\bigcup\ (2 + \frac{1}{n}, 3 - \frac{1}{n})$ for n = 2, 3, 4, .... has no finite subcover? I am worried I am perhaps misunderstanding some definitions... Any help or direction would be greatly appriciated. Thank you in advanced for you time! Another simplistic theorem tells you that if $X,Y,Z$ are metric spaces then $Z$ is a compact subspaces of $Y$ if and only if $Z$ is a compact subspaces of $Y$. Note though that with this in hand $E$ will be a compact subspaces of $\mathbb{Q}$ if and only if it's a compact subspace of $\mathbb{R}$. But, considering that it isn't closed ( $\sqrt{2}$ is a limit point not in the set) it follows that $E$ is not compact.
# Motion In A Straight Line A B C D none of these Easy View solution> ## The area under acceleration-time graph gives A change in acceleration B change in velocity. C change in displacement. D change in deceleration Easy View solution> A Unity B Negative C Zero D Infinite Easy View solution> ## The velocity of a body can change A if its acceleration is zero. B if its acceleration is non-zero. C Both A & B D None of the above. Easy View solution> ## Define velocity. State its unit. A Velocity is defined as the rate of change of distance of a body with respect to time. Its unit in SI is . B Velocity is defined as the rate of change of displacement of a body with respect to time. Its unit in SI is . C Velocity is defined as the rate of change of distance of a body with respect to time. Its unit in SI is . D Velocity is defined as the rate of change of displacement of a body with respect to time. Its unit in SI is . Easy View solution> A speed B acceleration C retardation D velocity Easy View solution>
# Modeling an electron in a Magnetic Field 1. Dec 2, 2006 ### Noone1982 I'm not sure where to place this, so please forgive me. As you know, an electron experiences a force of F = qVxB in a magnetic field and equating this to centripetral force, we can find the radius of the electron's path to be R = mv/qB Ok, that's simple enough. Now, I want to model an electron via 3D graphing in a magnetic field. For this, I need to model every part of its trajectory. This is proving tricky. Say we have: Bx = 0 By = 0 Bz = 1 micro tesla The initial electron is coming in at Vx = 0 Vy = 1.5e8 m/s Vz = 0 The cross product is Fx = q(VyBz - ByVz) Fy = q(VxBz - VzBx) Fz = q(VxBy - VyBx) Now the acceleration is just a = F / m However, For ax I'm getting 4.23e13 m/s^2! which is a wee bit high. Ok, just plain wrong. How would you generate an animation of an electon in a magnetic field? I would like to extend it so the magnetic field osccilates and varies with amplitude versus time. 2. Dec 2, 2006 ### marlon What is the mass value you used ? Besides, how do you know the value you got is too high ? What exactly did you do. Anyhow, the approach and formula's are OK. If you wanna incorporate a t-dependent B field, you just need to integrate over time (once and twice) to get velocity and then the trajectory. How does the B field vary (sine, cosine) and along which direction (x,y,z) ? if you know this, just add the components into the right hand side of Fx, Fy and Fz. divide by m to get a and then you start the integrations... marlon 3. Dec 2, 2006 ### Noone1982 I used the charge of an electron as 1.6022e-19 C and the mass as 9.1e-31 kg which is a charge to mass ratio of 1.76e11!!!! My plan was to do it iteratively. To calculate vx, vy, and vz and use rx = rx + vx*dt etc to plot the positions. 4. Dec 2, 2006 ### marlon You are forgetting the $$t^2$$ term. There is a force acting on your system so you also need the $$a_x$$ part in your equation for $$r_x$$ !!! marlon
# Using Wien's radiation law to derive the Stephan-Boltzmann law and Wien's distributio 1. ### RedMech 13 1. The problem statement: Using Wien's law ρ(λ,T)=f(λ,T)/λ^5, show the following: (a) The total emissive power is given by R = aT4 (the Stefan-Boltzmann law), where a is a constant. (b) The wavelength λmax at which ρ(λ,T) - or R(λ,T) - has its maximum is such that λ*T = b (Wien's displacement law), where b is a constant. 2. Relevant equations: ρ(λ,T)=f(λ,T)/λ^5 ρ(λ,T)=c1/(λ^5*exp{c2/λT}) 3. The attempt at a solution: So I tried integrating Wien's equation from zero to infinity ρ(total)dλ=c/4∫ρ(λ,T)dλ=c/4∫[f(λ,T)/λ^5]dλ. But I got nowhere. Then I used the full expression of wien's law and tried the integration again ρ(total)dλ=c/4∫[c1/(λ^5*exp{c2/λT})]dλ Last edited: Aug 15, 2012 2. ### vela 12,842 Staff Emeritus Re: Using Wien's radiation law to derive the Stephan-Boltzmann law and Wien's distrib Without an explicit form for f(λ,T), you can't integrate this, as you probably realized. This approach should work. How did you try to integrate this? I'd try a substitution like u=1/λ and see where it goes. 3. ### RedMech 13 Re: Using Wien's radiation law to derive the Stephan-Boltzmann law and Wien's distrib I substituted x=c2/λT for the sake of the exponential term. dx=[-c2/λ^2T]dλ. The integral has become w=(c1*c*T^4)/4c2^4∫[x^3/e^x]dx (Please note that for c1 and c2, the 1 and 2 are subscripts of c. The independent c is the speed of light) How is this equation looking? Last edited: Aug 15, 2012 4. ### vela 12,842 Staff Emeritus Re: Using Wien's radiation law to derive the Stephan-Boltzmann law and Wien's distrib Do you recognize that integral? Think gamma function. In any case, it's a definite integral, so it's just some number. 5. ### RedMech 13 Re: Using Wien's radiation law to derive the Stephan-Boltzmann law and Wien's distrib I'll compute the integral and then leave the final expression for my instructor. Thanks a million for your help. 6. ### TSny 6,455 Re: Using Wien's radiation law to derive the Stephan-Boltzmann law and Wien's distrib Wien's law is actually ρ(λ,T)=f(λT)/λ5 where f is an undetermined function of the product of λ and T. Using this, see if you can get the integral to yield a constant times T4. 7. ### Humdinger 2 Re: Using Wien's radiation law to derive the Stephan-Boltzmann law and Wien's distrib @TSny, I was wondering if you might be able to give me a small hint in regards to how to proceed with this problem only using the ρ(λ,T)=f(λT)/λ5 form of Wien's law. I tried integration by parts but that just led to a more convoluted expression. I see that you underlined the phrase "product of λ and T" but I'm still not sure how to handle the f(λT) term in the integral. 8. ### dextercioby 12,298 Re: Using Wien's radiation law to derive the Stephan-Boltzmann law and Wien's distrib That calls for a substitution (change of variable) which would throw out of the integral exactly T to the power of 4. 9. ### Humdinger 2 Re: Using Wien's radiation law to derive the Stephan-Boltzmann law and Wien's distrib Thank you dextercioby, my mistake was in assuming that I need to find the unknown function f(λT). I was able to figure out the answer based on your hint.
# beardcoder/sitemap_generator ### Showing 21 of 21 total issues #### Function mapToEntries has a Cognitive Complexity of 26 (exceeds 5 allowed). Consider refactoring. Open protected function mapToEntries(array $typoScriptUrlEntry) { if ($typoScriptUrlEntry['table'] && $typoScriptUrlEntry['active'] == 1) {$records = $this->getRecordsFromDatabase($typoScriptUrlEntry); if ($this->getDatabaseConnection()->sql_num_rows($records)) { Found in Classes/Domain/Repository/SitemapRepository.php - About 3 hrs to fix # Cognitive Complexity Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend. ### A method's cognitive complexity is based on a few simple rules: • Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one • Code is considered more complex for each "break in the linear flow of the code" • Code is considered more complex when "flow breaking structures are nested" #### File SitemapRepository.php has 316 lines of code (exceeds 250 allowed). Consider refactoring. Open <?php namespace Markussom\SitemapGenerator\Domain\Repository; /** * This file is part of the TYPO3 CMS project. Found in Classes/Domain/Repository/SitemapRepository.php - About 3 hrs to fix #### The class SitemapRepository has an overall complexity of 68 which is very high. The configured complexity threshold is 50. Open class SitemapRepository { /** * @var FieldValueService */ Since: PHPMD 0.2.5 The Weighted Method Count (WMC) of a class is a good indicator of how much time and effort is required to modify and maintain this class. The WMC metric is defined as the sum of complexities of all methods declared in a class. A large number of methods also means that this class has a greater potential impact on derived classes. Example: class Foo { public function bar() { if ($a ==$b) { if ($a1 ==$b1) { fiddle(); } elseif ($a2 ==$b2) { fiddle(); } else { } } } public function baz() { if ($a ==$b) { if ($a1 ==$b1) { fiddle(); } elseif ($a2 ==$b2) { fiddle(); } else { } } } // Several other complex methods } #### Function mapGoogleNewsEntries has a Cognitive Complexity of 21 (exceeds 5 allowed). Consider refactoring. Open protected function mapGoogleNewsEntries(array $typoScriptUrlEntry) {$records = $this->getRecordsFromDatabase($typoScriptUrlEntry); $tableColumns =$GLOBALS['TCA'][$tablename]['columns']; Found in Classes/Service/OrderByService.php - About 2 hrs to fix # Cognitive Complexity Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend. ### A method's cognitive complexity is based on a few simple rules: • Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one • Code is considered more complex for each "break in the linear flow of the code" • Code is considered more complex when "flow breaking structures are nested" ### Further reading #### SitemapRepository has 22 functions (exceeds 20 allowed). Consider refactoring. Open class SitemapRepository { /** * @var FieldValueService */ Found in Classes/Domain/Repository/SitemapRepository.php - About 2 hrs to fix #### Method mapToEntries has 38 lines of code (exceeds 25 allowed). Consider refactoring. Open protected function mapToEntries(array$typoScriptUrlEntry) { if ($typoScriptUrlEntry['table'] &&$typoScriptUrlEntry['active'] == 1) { $records =$this->getRecordsFromDatabase($typoScriptUrlEntry); if ($this->getDatabaseConnection()->sql_num_rows($records)) { Found in Classes/Domain/Repository/SitemapRepository.php - About 1 hr to fix #### Method mapGoogleNewsEntries has 33 lines of code (exceeds 25 allowed). Consider refactoring. Open protected function mapGoogleNewsEntries(array$typoScriptUrlEntry) { $records =$this->getRecordsFromDatabase($typoScriptUrlEntry);$urlEntries = []; if ($this->getDatabaseConnection()->sql_num_rows($records)) { Found in Classes/Domain/Repository/SitemapRepository.php - About 1 hr to fix private function getRecordsFromDatabase($typoScriptUrlEntry) { if (!isset($GLOBALS['TCA'][$typoScriptUrlEntry['table']]) || !is_array($GLOBALS['TCA'][$typoScriptUrlEntry['table']]['ctrl']) ) { Found in Classes/Domain/Repository/SitemapRepository.php - About 1 hr to fix #### Function getEntriesFromPages has a Cognitive Complexity of 9 (exceeds 5 allowed). Consider refactoring. Open public function getEntriesFromPages($pages) { foreach ($pages as$page) { if ($this->hasPageAnAllowedDoktype($page)) { $urlEntry = GeneralUtility::makeInstance(UrlEntry::class); Found in Classes/Domain/Repository/SitemapRepository.php - About 55 mins to fix # Cognitive Complexity Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend. ### A method's cognitive complexity is based on a few simple rules: • Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one • Code is considered more complex for each "break in the linear flow of the code" • Code is considered more complex when "flow breaking structures are nested" ### Further reading #### Avoid deeply nested control flow statements. Open if (isset($typoScriptUrlEntry['changefreq'])) { $urlEntry->setChangefreq($this->fieldValueService->getFieldValue('changefreq', $typoScriptUrlEntry,$row) ); } Found in Classes/Domain/Repository/SitemapRepository.php - About 45 mins to fix #### Avoid deeply nested control flow statements. Open if (isset($typoScriptUrlEntry['lastmod'])) {$urlEntry->setLastmod( date( 'Y-m-d', $this->fieldValueService->getFieldValue('lastmod',$typoScriptUrlEntry, $row) Found in Classes/Domain/Repository/SitemapRepository.php - About 45 mins to fix #### Avoid deeply nested control flow statements. Open if (isset($typoScriptUrlEntry['priority'])) { $urlEntry->setPriority( number_format($this->fieldValueService->getFieldValue('priority', $typoScriptUrlEntry,$row) / 10, 1, Found in Classes/Domain/Repository/SitemapRepository.php - About 45 mins to fix #### The class SitemapRepository has a coupling between objects value of 19. Consider to reduce the number of dependencies under 13. Open class SitemapRepository { /** * @var FieldValueService */ Since: PHPMD 1.1.0 A class with too many dependencies has negative impacts on several quality aspects of a class. This includes quality criteria like stability, maintainability and understandability Example: class Foo { /** * @var \foo\bar\X */ private $x = null; /** * @var \foo\bar\Y */ private$y = null; /** * @var \foo\bar\Z */ private $z = null; public function setFoo(\Foo$foo) {} public function setBar(\Bar $bar) {} public function setBaz(\Baz$baz) {} /** * @return \SplObjectStorage * @throws \OutOfRangeException * @throws \InvalidArgumentException * @throws \ErrorException */ public function process(\Iterator $it) {} // ... } #### Function hidePagesIfNotTranslated has a Cognitive Complexity of 6 (exceeds 5 allowed). Consider refactoring. Open private function hidePagesIfNotTranslated($pages) { $language = GeneralUtility::_GET('L'); if ($this->isPageNotTranslated($language)) { foreach ($pages as $key =>$page) { Found in Classes/Domain/Repository/SitemapRepository.php - About 25 mins to fix # Cognitive Complexity Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend. ### A method's cognitive complexity is based on a few simple rules: • Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one • Code is considered more complex for each "break in the linear flow of the code" • Code is considered more complex when "flow breaking structures are nested" #### Function getLimitString has a Cognitive Complexity of 6 (exceeds 5 allowed). Consider refactoring. Open public static function getLimitString($limit) { if (isset($limit) && !empty($limit)) {$limitParts = GeneralUtility::trimExplode(',', $limit); if (count($limitParts) === 1) { Found in Classes/Service/LimitService.php - About 25 mins to fix # Cognitive Complexity Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend. ### A method's cognitive complexity is based on a few simple rules: • Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one • Code is considered more complex for each "break in the linear flow of the code" • Code is considered more complex when "flow breaking structures are nested" #### The method getPages uses an else expression. Else is never necessary and you can simplify the code to work without else. Open } else { $pages =$this->getSubPagesRecursive($rootPageId);$this->cacheInstance->set($cacheIdentifier,$pages, ['pagesForSitemap']); } Since: PHPMD 1.4.0 An if expression with an else branch is never necessary. You can rewrite the conditions in a way that the else is not necessary and the code becomes simpler to read. To achieve this use early return statements. To achieve this you may need to split the code it several smaller methods. For very simple assignments you could also use the ternary operations. Example: class Foo { public function bar($flag) { if ($flag) { // one branch } else { // another branch } } } $tableColumns =$GLOBALS['TCA'][$tablename]['columns']; Found in Classes/Service/OrderByService.php by phpmd Since: PHPMD 0.2 Accessing a super-global variable directly is considered a bad practice. These variables should be encapsulated in objects that are provided by a framework, for instance. Example: class Foo { public function bar() {$name = $_POST['foo']; } } #### The method getOrderByString() has a Cyclomatic Complexity of 10. The configured cyclomatic complexity threshold is 10. Open public static function getOrderByString($orderBy, $tablename) { if (isset($orderBy) && !empty($orderBy)) {$cleanOrderByParts = []; $tableColumns =$GLOBALS['TCA'][$tablename]['columns']; Found in Classes/Service/OrderByService.php by phpmd Since: PHPMD 0.1 Complexity is determined by the number of decision points in a method plus one for the method entry. The decision points are 'if', 'while', 'for', and 'case labels'. Generally, 1-4 is low complexity, 5-7 indicates moderate complexity, 8-10 is high complexity, and 11+ is very high complexity. Example: // Cyclomatic Complexity = 11 class Foo { 1 public function example() { 2 if ($a == $b) { 3 if ($a1 == $b1) { fiddle(); 4 } elseif ($a2 == $b2) { fiddle(); } else { fiddle(); } 5 } elseif ($c == $d) { 6 while ($c == $d) { fiddle(); } 7 } elseif ($e == $f) { 8 for ($n = 0; $n <$h; $n++) { fiddle(); } } else { switch ($z) { 9 case 1: fiddle(); break; 10 case 2: fiddle(); break; 11 case 3: fiddle(); break; default: fiddle(); break; } } } } #### Line exceeds 120 characters; contains 143 characters Open \$GLOBALS['TYPO3_CONF_VARS']['SC_OPTIONS']['extbase']['commandControllers'][] = Markussom\SitemapGenerator\Command\TaskCommandController::class; Found in ext_localconf.php by phpcodesniffer
# Proving $(0,1)$ and $[0,1]$ have the same cardinality [duplicate] Prove $(0,1)$ and $[0,1]$ have the same cardinality. I've seen questions similar to this but I'm still having trouble. I know that for $2$ sets to have the same cardinality there must exist a bijection function from one set to the other. I think I can create a bijection function from $(0,1)$ to $[0,1]$, but I'm not sure how the opposite. I'm having trouble creating a function that makes $[0,1]$ to $(0,1)$. Best I can think of would be something like $x \over 2$. Help would be great. ## marked as duplicate by Ross Millikan, Carl Mummert, Mark Bennet, Quixotic, gnometoruleNov 5 '14 at 0:01 • If you create a bijection, it goes both ways, so you only need one. This has been answered several times on this site. – Ross Millikan Nov 4 '14 at 21:26 • If you have a bijection $(0,1) \longrightarrow [0,1]$, then its inverse map is a bijection $[0,1] \longrightarrow (0,1)$. Maybe you meant an injection? – Crostul Nov 4 '14 at 21:26 • possible duplicate of How do I define a bijection between $(0,1)$ and $(0,1]$? and this – Ross Millikan Nov 4 '14 at 21:28 Use Hilbert's Hotel. First identify a countable subset of $(0,1)$, say $H = \{ \frac1n : n \in \mathbb N\}$. Then define $f:(0,1) \to [0,1]$ so that $$\frac12 \mapsto 0$$ $$\frac13 \mapsto 1$$ $$\frac{1}{n} \mapsto \frac{1}{n-2}, n \gt 3$$ $$f(x) = x, \text{for } x \notin H$$ • Hotel Hilbert, nice. Hard to believe it's not also an Eagles song. – Simon S Nov 4 '14 at 21:57 • @SimonS Never! Not The Eagles. Please! Hilbert's Hotel is too beautiful. But hey, if that's your thing ;) – Epsilon Nov 4 '14 at 22:04 • We should give more names to examples or constructions like this. I'm convinced I'll remember this one for some time because of the name together with its elegance. Thanks for posting. – Simon S Nov 4 '14 at 22:07 You can trivially find a bijection between $(0,1)$ and $(1/4,3/4)\subset[0,1]$, hence $\mathrm{Card} (0,1) \leq \mathrm{Card} [0,1]$. Likewise, there is a trivial bijection between $[1/4,3/4]\subset(0,1)$ and $[0,1]$, hence $\mathrm{Card} [0,1] \leq \mathrm{Card} (0,1)$. By trivial, I mean a linear function $t\to at+b$ with some numbers $a,b$. Thus $\mathrm{Card} [0,1] = \mathrm{Card} (0,1)$.
partitions: partition of the set pool so that each set in a group has a unique element Suppose I have a bag (or multiple set) of games $$S = {s_1, s_2, dots, s_n }$$ and $$emptyset notin S$$. I want to partition $$S$$ in set groups so that within each group each set has at least one element that is not found in any other set in that group. Formally, the criteria for a group $$G = {g_1, g_2, dots } subseteq S$$ is: $$forall i: left (g_i setminus bigcup_ {j neq i} g_j ; neq ; emptyset right)$$ Partition $$P = { {s_1 }, {s_2 }, dots }$$ It always meets this requirement, so there is always a valid solution. But what is the smallest number of groups needed? Is this problem feasible or NP-complete? Another formulation of this problem is to divide a multiple set of integers into groups so that each integer has a bit set in its binary expansion that no other integer in its group has established. Performance – Optimize Python code for the Euler Project Problem 78: Coin Partitions I was working on this problem and made a code that got the answer in an average of 11 seconds. How can I optimize this code to work in less than 5 seconds? ``````import time def pentagonal(n): return int(n*(3*n - 1) / 2) z = () for i in range(-1, -300, -1): z.append(pentagonal(abs(i))) z.append(pentagonal(i)) part = (1, 1, 2) start = time.time() for i in range(3, 100000): print(i) n = 0 t = 0 while i >= z(n): if n % 4 <= 1: t += part(i - z(n)) elif n % 4 >= 2: t -= part(i- z(n)) n += 1 if t % 1000000 == 0: print("FOUND. N is:", i) break part.append(t) print(time.time()-start) `````` combinatorics – entire partitions and permutations They give me the pair $$(n, lambda)$$ where $$lambda$$ it is a partition of $$n$$ such that 6 is not part of $$lambda$$. They tell me to leave $$lambda ^ *$$ represent the partition of $$n$$ conjugate $$lambda$$. Now we are assuming $$(n, lambda)$$ It has the following property: there is a $$theta in S_n$$, the set of permutations of $${1,2, points, n }$$and $$theta ^ * in S_n$$ such that both $$theta, theta ^ *$$ have order 6, $$theta$$ It has a cycle structure $$lambda$$and $$theta ^ *$$ It has a cycle structure $$theta ^ *$$. I am asked to determine the possible values ​​of n. Not sure where to start here. Any help would be great. ssh – How to mount LVM partitions in sysrcd I have a dedicated Ubuntu 18.04 server, I tried to change the SSH port (the SSH port was 63058 before), I edited / etc / ssh / sshd_config and added # to the 63058 port line. However, after restarting the server, I cannot access the server with 22 or 63058, I think I forgot to allow 22 ports in the UFW Fireware. As it is an unmanaged wholesaleinternet server, they only provide sysrcd to handle the problem, I reload it on sysrcd and try to mount the partitions on my hard drive, but it doesn't work. ``````root@sysresccd /root % lsblk -f NAME FSTYPE LABEL UUID MOUNTPOINT sda ├─sda1 ├─sda2 ├─sda5 ext2 8b9e94c0-06f3-4c63-9288-4eeb4991e341 └─sda6 LVM2_mem onKTmN-cX7h-6p2z-3Pib-xKiy-hCwr-FXOqSf ├─vg-root ext4 74e59bb4-a5df-4bb5-b1ac-4c0813d1385b └─vg-swap swap c1c0a0c6-a81c-4f9c-b9fd-7b796a7121f6 loop0 squashfs /livemnt/squas root@sysresccd /root % root@sysresccd /root % mount /dev/sda6 /mnt mount: unknown filesystem type 'LVM2_member' root@sysresccd /root % lvdisplay --- Logical volume --- LV Path /dev/vg/root LV Name root VG Name vg LV UUID B4YUxD-fQiD-sopx-kGl7-kh9H-tRBx-P3e5pB LV Creation host, time s147887, 2019-07-01 10:20:35 +0000 LV Status available # open 0 LV Size 445.13 GiB Current LE 113953 Segments 1 Allocation inherit - currently set to 256 Block device 253:0 --- Logical volume --- LV Path /dev/vg/tmp LV Name tmp VG Name vg LV UUID 86L32t-B98f-Sfg5-8cmo-7y1D-vIyc-Bn9k4R LV Creation host, time s147887, 2019-07-01 10:20:35 +0000 LV Status available # open 0 LV Size 976.00 MiB Current LE 244 Segments 1 Allocation inherit - currently set to 256 Block device 253:1 --- Logical volume --- LV Path /dev/vg/swap LV Name swap VG Name vg LV UUID pdD373-lBCI-583I-FP8C-htu5-B97D-NDR7h0 LV Creation host, time s147887, 2019-07-01 10:20:35 +0000 LV Status available # open 0 LV Size 92.00 MiB Current LE 23 Segments 1 Allocation inherit - currently set to 256 Block device 253:2 root@sysresccd /root % lvscan ACTIVE '/dev/vg/root' (445.13 GiB) inherit ACTIVE '/dev/vg/tmp' (976.00 MiB) inherit ACTIVE '/dev/vg/swap' (92.00 MiB) inherit root@sysresccd /root % root@sysresccd /root % mount /dev/vg/root /mnt missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so. root@sysresccd /root % `````` Can anyone help? postgresql: Postgres 12 scalability using table partitions and external data wrappers I have reviewed the files and cannot find any discussion on the following topic. I have a pretty deep question with which I would appreciate some guidance. Current environment • Current version of Postgres: 10 • Os: Ubuntu 14:04 (Soon to update to 18.04) • The hard drive has 2.3 TB of maximum space. (Raid 10 SSD & # 39; s) • Current Postgres data size: 1.6 TB (growing at 100 gb per month) • It currently has 1 master database and 2 replicas. (1 slave ascending and 1 descending using cascade replication) • 1 warehouse with logical replication Based on the above, I am sure that it is quite obvious that I will have some serious problems regarding the available disk space within a few months. Just a couple of things to mention before providing my long-term theoretical solution. Currently, the cloud-based solution is not an option due to costs and complexity The servers are housed in an external DC and the maximum possible disk size that we can achieve using SSD in a Raid 10 configuration is 2.3 TB We are currently handling the load at a reasonable level. Although that could change as our business grows My thoughts on a possible solution I need a scalable solution in the long term and we have been looking to upgrade to Postgres 12. Using the seemingly incredible partition of tables with foreign data wrappers, could we achieve a horizontal scale if we divide the key tables by date? If this is possible, we could have the data of the current years on our primary PostgreSQL master server and our annual partitioned tables on a different server. Therefore, alleviating our space problems and achieving long-term scalability The above seems feasible, but how would this affect my aftershocks? I think that any partitioning change I make in my Master DB would be "replicated" through replications. More importantly, how would this work relate to foreign data containers? Alternative solutions I could get away from using SSD to get more space in a raid configuration 10. (In the long term, I would still encounter the same problems eventually and my application could pay a performance penalty) You could use a different raid configuration to achieve more available space. (The same long-term problems mentioned above) I could look to build a manual archiving process that copies my "cold" data to a different server and deletes it from the master's data. Sorry for the long question. dual boot – Installation issues – Many unknown partitions I am trying to install Ubuntu on my new laptop, but I am having problems when I try to partition my disk. Partition step I don't know what partition I should change to install Ubuntu next to Windows. I guess I should change the 500 go partition because that's the size of my SSD but the type of this partition is unknown … is it important? And what partition should I delete to put Ubuntu … Thank you ! co.combinatorics – Possible supervision in the role of Greene and Kleitman on chains in order of domination in partitions? This question is about a possible loophole in a role of Greene and Kleitman that Zarathustra Brady let me know. The article in question is "Longer chains in the network of entire partitions sorted by wholesale" (available online here). In that document, they calculate the length of the longest chain in the order of dominance in the partitions and, in general, give an algorithm to find the longest chain in any interval of order of domination. In order of domination we have a coverage relationship $$lambda gtrdot mu$$ If and only if $$mu$$ is obtained from $$lambda$$ moving a single box in a row $$i$$ row $$i + 1$$or moving a single frame in the column $$i + 1$$ to the column $$i$$. In the first case, Greene and Kleitman say that $$lambda gtrdot mu$$ is a Step h (Because, perhaps confusingly for modern readers, the box moved a unit horizontally according to its non-standard scheme of drawing partitions with vertical parts, see Figure 2), and in the second case they say that $$lambda gtrdot mu$$ is a V step (because the box moved a unit vertically according to its representation). Keep in mind, as the authors point out, that it is possible that $$lambda gtrdot mu$$ it is both a step H and a step V (and in fact this is the source of the possible lagoon!). Greene and Kleitman say a chain $$lambda ^ 0> lambda ^ 1> cdots> lambda ^ L$$ is a H string yes every step $$lambda ^ i> lambda ^ i + 1} = lambda ^ i gtrdot lambda ^ i + 1}$$ it's a step H, and similarly say that the chain is a V string yes every step $$lambda ^ i> lambda ^ i + 1} = lambda ^ i gtrdot lambda ^ i + 1}$$ It's a step in V. Also, they say $$lambda ^ 0> lambda ^ 1> cdots> lambda ^ L$$ is a HV chain if there is any index $$i$$ such that $$lambda ^ 0> cdots> lambda ^ i$$ it's an H chain and $$lambda ^ i> cdots> lambda ^ L$$ It is a V chain. In a crucial motto of the article, Lemma 3, they affirm that if $$lambda = lambda ^ 0> lambda ^ 1> cdots> lambda ^ L = mu$$ is any chain in order of domination, then there is some HV chain between $$lambda$$ and $$mu$$ in length at least $$L$$. The argument they give is: we can assume that each step in the chain is a hedging relationship; we assure that it is true for the chains $$lambda_0> lambda_1> lambda_2$$ of length 2; then, by repeatedly applying this case of length 2 we can convert any length string $$L$$ to an HV chain of at least length $$L$$. But this last point about the repeated application of the case of length 2 seems suspicious, for the following reason. Suppose we have a chain of length 3 $$lambda_0> lambda_1> lambda_2> lambda_3$$ such that $$lambda_0> lambda_1$$ it's a V step that is not an H step, $$lambda_1> lambda_2$$ it's a step V and H, and $$lambda_2> lambda_3$$ it is a step H that is not a step V. (This situation may arise: $$(5,4,3,2)> (4,4,4,2)> (4,4,3,3)> (4,4,3,2,1)$$.) Then the problem is that, from the perspective of substrings of length 2, things look good: $$lambda_0> lambda_1> lambda_2$$ it is a V chain, so it is in particular an HV chain; Similary $$lambda_1> lambda_2> lambda_3$$ it is an H chain, so in particular it is an HV chain. But $$lambda_0> lambda_1> lambda_2> lambda_3$$ It is obviously not an HV chain. Question: Is this a real oversight in the role of Greene-Kleitman? If so, is Lemma 3 true, and can the test be repaired? co.combinatorics – Asymptotics of the Steenrod algebra / \$ s \$ -partitions? Remember that a $$s$$-partition is a partition of a natural number $$n$$ such that each part has the form $$2 ^ r-1$$. For a fundamental theorem of Milnor, the number $$p_s (n)$$ from $$s$$-partitions of $$n$$ counts the dimension of the algebra of Steenrod mod-2 in degree $$n$$. I am interested in the asymptotic function. $$p_s (n)$$, as well as related functions for odd primary Steenrod algebras. Questions: 1. The number of $$s$$-partitions $$p_s (n)$$ grow sub-exponentially in $$n$$? 2. If so, are there effective constants? $$p_s (n) leq C_ epsilon (1+ epsilon) ^ n$$? 3. What about the dimension of odd primary Steenrod algebras? The OEIS page (here is the link again) leads to this document that offers an asymptotic formula for $$ln p_s (n)$$, and all terms are in fact sublinear in $$n$$, except possibly for the term that implies a craft function $$W (z)$$, whose growth I don't know how to estimate. As for odd primary Steenrod algebras, Milnor showed that for $$p$$ a strange cousin, Steenrod's dual algebra in the first $$p$$ is the tensor product $$P ( xi_1, xi_2, dots) otimes E ( tau_0, tau_1, tau_2, dots)$$ where $$deg ( xi_i) = 2p ^ i – 2$$, $$deg ( tau_i = 2p ^ i – 1)$$and $$P, E$$ denote polynomial and exterior algebras respectively $$mathbb F_p$$. Therefore, counting the dimension is reduced to a combinatorial partition problem of a similar taste. Windows 7 – Merge two NTFS partitions Yes partition 2 is empty And it is right next to partition 1 then simply delete it and resize partition 1 to fill the new empty space. Any tools You can resize partitions without losing data, such as MiniTool Partition Wizard, AOMEI Partition Assistant, EaseUS Partition Master, Macrorit Partition Expert or gparted can do so. Even the Windows disk manager can do the same, although with less flexibility (probably because it tries to avoid moving data as much as possible to avoid data loss) If the partitions are separated from each other, then it is much more complicated. There are 2 solutions. • Convert the disk to dynamic disk that is the Windows logical volume manager and is the analogue of LVM on Linux. Then, partition 1 can be extended to any other empty dynamic volume • Remove partition 2, then move all partitions between partition 2 and partition 1 to fill the unallocated space and change the size of partition 1. This takes much longer and is more risky. partitions: where is the information from / proc / dumchar_info used on MediaTek devices? There are many instructions to change the partition layout on Android, including MediaTek devices, and they say I need to edit MBR, EBR and a "scatter file" and feed the latter to SP Flash or MTKDroidTools. However, as noted in a response to "Where does partition information come from / proc / dumchar_info, on MTK devices?", The MediaTek-specific one `/proc/dumchar_info` You can't change it with those means. Hence the question, where is the information from `/proc/dumchar_info` used? And if it does not reflect the actual partition design and does not agree with MBR, EBR and the "scatter file", what effects should I expect?
# Condition of solvable Lie algebra. I'm studying about Lie algebras using J.E. Humphreys' book ("Introduction to Lie Algebras and Representation Theory"). On page 19 he says: It is obvious that $L$ will be solvable if $[LL]$ is nilpotent. But it is not obvious to me. Why is it true? Suppose that $[L,L]$ is nilpotent. Now $L/[L,L]$ is abelian, hence solvable. It follows that both $[L,L]$ and $L/[L,L]$ are solvable. Then $L$ is solvable, too, because every extension of a solvable Lie algebra by a solvable Lie algebra is itself solvable (we have the extension $0\rightarrow [L,L]\rightarrow L\rightarrow L/[L,L]\rightarrow 0$). The argument is also valid over fields of characteristic $p>0$, whereas the converse statement, i.e., that $L$ solvable implies $[L,L]$ nilpotent, need not be true in that case.
We're sorry Aspose doesn't work properply without JavaScript enabled. # How to inline images with text by using aspose.words for .NET I am not allowed to use this. Document doc = new Document(@“D:\TestOutput\test.docx”); Do you have any method to get the saved document??? Because I think It’s only working with saved word document. Do you have any suggestions? @nethmi Thank you for additional information. The suggested method should be used together with the approach suggested earlier. First you should change the appropriate shapes WrapType and then process the document with the code that uses LayoutCollector and LayoutEnumerator. Also, it is not required to use Document doc = new Document(@"D:\TestOutput\test.docx");. This line of code is used for demonstration purposes to check the output produced by the code. You code should look like this: public List<byte[]> Convert(ContentsDto content, IDictionary<string, string> options, System.Func<DependentContent, Task<ContentsDto>> GetDependency = null) { HtmlDocument htmlDocument = new HtmlDocument(); using (var htmlStream = new MemoryStream(content.Data)) { } var byteArray = new List<byte[]>(); using (var dataStream = new MemoryStream(content.Data)) { var document = new Aspose.Words.Document(dataStream, docOptions); using (var outputStream = new MemoryStream()) { using (var htmlStream = new MemoryStream(content.Data)) { } ApplyImageFormatting(document, htmlDocument); FormatOutput(GetDependency, document, htmlDocument); // Here overlap issue is fixed by layout code. OverlapIssue(document); document.Save(outputStream, SaveFormat.Docx); } } return byteArray; } done. still getting the same issue. @nethmi Could you please crate a simple console application and attach it here, so I can run it on my side and debug it? I will check what is going wrong and provide you more information. Also, please modify OverlapIssue method like this: //Method , which adjust the overlap issue private static void OverlapIssue(Document doc) { LayoutCollector collector = new LayoutCollector(doc); LayoutEnumerator enumerator = new LayoutEnumerator(doc); NodeCollection shapes = doc.GetChildNodes(NodeType.Shape, true); foreach (Shape s in shapes) { // LayoutCollector and LayoutEnumerator do not work with nodes in header and footer. // Skip them. continue; PageSetup ps = ((Section)s.GetAncestor(NodeType.Section)).PageSetup; // Rectangle inside page margin. double top = ps.TopMargin + ps.HeaderDistance; double bottom = ps.BottomMargin + ps.FooterDistance; float width = (float)(ps.PageWidth - ps.LeftMargin - ps.RightMargin); float height = (float)(ps.PageHeight - top - bottom); RectangleF rect = new RectangleF((float)ps.LeftMargin, (float)top, width, height); // Get shape rectangle on the page. enumerator.Current = collector.GetEntity(s); RectangleF shapeRect = enumerator.Rectangle; // Update shape position to place it inside page margins. if (shapeRect.Left < rect.Left) s.Left += (rect.Left - shapeRect.Left); if (shapeRect.Right > rect.Right) s.Left -= (shapeRect.Right - rect.Right); if (shapeRect.Top < rect.Top) s.Top += (rect.Top - shapeRect.Top); if (shapeRect.Bottom > rect.Bottom) s.Top -= (shapeRect.Bottom - rect.Bottom); } } sure I will. Could you please try OverlapIssue(Document doc) method, with these two word documents??? If you can see a difference please let me know. I think both overlap images in the two documents run under same if condition. I think, That’s why we can’t get correct output for one document. sample1.docx (29.6 KB) sample2.docx (30.0 KB) any update?? @nethmi Thank you for additional information. I have modified the code a bit more and now it gives better result. But please note that fonts used in the document must be available on the machine where code is used, because layout code requires fonts to build layout properly. Could you please check on your side: private static void OverlapIssue(Document doc) { LayoutCollector collector = new LayoutCollector(doc); LayoutEnumerator enumerator = new LayoutEnumerator(doc); NodeCollection shapes = doc.GetChildNodes(NodeType.Shape, true); foreach (Shape s in shapes) { // LayoutCollector and LayoutEnumerator do not work with nodes in header and footer. // Skip them. if (s.GetAncestor(NodeType.HeaderFooter) != null || !s.IsTopLevel) continue; PageSetup ps = ((Section)s.GetAncestor(NodeType.Section)).PageSetup; // Rectangle inside page margin. float width = (float)(ps.PageWidth - ps.LeftMargin - ps.RightMargin); float height = (float)(ps.PageHeight - ps.TopMargin - ps.BottomMargin); RectangleF rect = new RectangleF((float)ps.LeftMargin, (float)ps.TopMargin, width, height); // Get shape rectangle on the page. enumerator.Current = collector.GetEntity(s); RectangleF shapeRect = enumerator.Rectangle; double left = 0; double top = 0; // Update shape position to place it inside page margins. if (shapeRect.Left < rect.Left) left = rect.Left; if (shapeRect.Right > rect.Right) left = rect.Right - shapeRect.Width; if (shapeRect.Top < rect.Top) top = rect.Top; if (shapeRect.Bottom > rect.Bottom) top = rect.Bottom - shapeRect.Height; if (!IsZero(left) || !IsZero(top)) { // Set relative shape position to page. s.RelativeHorizontalPosition = RelativeHorizontalPosition.Page; s.RelativeVerticalPosition = RelativeVerticalPosition.Page; s.Top = Math.Max(top, rect.Top); s.Left = Math.Max(left, rect.Left); doc.UpdatePageLayout(); } } } public static bool IsZero(double value) { return (Math.Abs(value) < Double.Epsilon); } 1 Like Hii This is working. I checked several scenarios with this code. It’s working. Thank you very much for your support and effort. I really appreciate it. 1 Like A post was split to a new topic: How to change image wrap type using Aspose.Pdf? Hii I am using below code to wrap my images. private static void SetImageLayout(Document document, HtmlDocument htmlDocument) { HtmlNodeCollection images = htmlDocument.DocumentNode.SelectNodes("//img"); NodeCollection shapes = document.GetChildNodes(NodeType.Shape, true); if (images != null) { var imgIndex = 0; foreach (Shape shape in shapes) { var image = images.ElementAt(imgIndex); if (image.HasClass("fr-fil")) { shape.WrapType = WrapType.Square; shape.Top += shape.Height + 10; } else { shape.HorizontalAlignment = HorizontalAlignment.Center; } shape.AllowOverlap = false; imgIndex++; } } } but sometimes it doesn’t give the expected outcome.( When I reduce content between two images -you can understand this problem by using my expected and current outcomes. It has two wrapped images.) Expected outcome expected.PNG.jpg (163.7 KB) current outcome(this is happening when I reduce paragraph length between images.)current.PNG.jpg (161.8 KB) HTML.zip (2.9 KB) I can’t provide exact information because of the privacy reasons. So I have sent you a dummy HTML . My HTML is like that. You can check my issue with this HTML by changing paragraph length between those two images. images are in the third page. @nethmi I am checking the issue and get back to you soon. 1 Like @nethmi I have checked your HTML and code on my side and unfortunately, I cannot reproduce the same output document you have shared. Could you please attach your output DOCX document with the problem? I suspect the shapes are anchored to the same paragraph and as a solution you can check whether the shapes are in the same paragraph and placed on the same page, if so you can adjust position of such shapes and place them one above another. However, this in only my guess. Once I have your real problematic document I can analyze the issue closer. ok i’ll send you 1 Like Hiii HTML MYHTML.zip (2.1 KB) word document document.docx (17.5 KB) used Image picture.jpg (5.7 KB) In the 3rd page you can see the two wrapped images. current outcomecurrent.PNG.jpg (135.9 KB) expected outcome expected.PNG.jpg (163.7 KB) shapes are in different paragraphs @nethmi Thank you for additional information. I have create a code example that demonstrates the basic technique of adjusting shapes positions to get the required output. But note, the code demonstrates only the basic technique and does not guaranty to work with more complicated cases. For more complicated cases, you have to implement more complicated logic to adjust positions of shapes on the page. Implementation of such logic is out of Aspose.Words scope. Document doc = new Document(@"C:\Temp\in.docx"); CorrectShapesPosition(doc); doc.Save(@"C:\Temp\out.docx"); private static void CorrectShapesPosition(Document doc) { LayoutCollector collector = new LayoutCollector(doc); LayoutEnumerator enumerator = new LayoutEnumerator(doc); // Get all shapes in the document. NodeCollection shapes = doc.GetChildNodes(NodeType.Shape, true); // Collect shapes per page. Dictionary<int, List<ShapeRect>> shapesPerPage = new Dictionary<int, List<ShapeRect>>(); foreach (Shape s in shapes) { enumerator.Current = collector.GetEntity(s); if (!shapesPerPage.ContainsKey(enumerator.PageIndex)) } foreach (int page in shapesPerPage.Keys) { List<ShapeRect> shapesOnPage = shapesPerPage[page]; // If there is only one shape on the page no actiona is required. if (shapesOnPage.Count == 1) continue; // Adjust vertical position of shapes to avoid overlapping. // The code demonstrates the basic technique and does not guaranly to work in all case. // More complicated cases requires implemeting core complicated logic. for (int i = 0; i < shapesOnPage.Count - 1; i++) { ShapeRect current = shapesOnPage[i]; ShapeRect next = shapesOnPage[i + 1]; if (current.Rectangle.Bottom > next.Rectangle.Top) current.Shape.Top -= current.Rectangle.Bottom - next.Rectangle.Top; } } } private class ShapeRect { public ShapeRect(Shape shape, RectangleF rect) { mShape = shape; mRect = rect; } public Shape Shape { get { return mShape; } } public RectangleF Rectangle { get { return mRect; } } private Shape mShape; private RectangleF mRect; } here is the output document produced on my side: out.docx (17.5 KB) 1 Like got it. Thank you very much for your effort. 1 Like Hii When I make changes in the HTML, my wrapped images in the converted word document, go here and there. sometimes those pictures go to another page. why those positions are changing like that?? This happens only for wrapped images. They escape their wrapped content and go everywhere in the document, when I made new changes to the document. changes means -I gave page margins to the document. I added header image to the document. I used aspose.words logics to avoid page breaks, I add extra content to the html, I removed some paragraphs from the html etc. Do you have a solution to bind those wrapped images with the wrapped content to avoid such kind of issue??? @nethmi You should note that both HTML and MS Word documents are flow documents and when you add some content into the document other content is reflowed. Images in MS Word documents are anchored to a paragraph. If anchor point is moved the shape position is also moved and document content is reflowed again. To fix position of shape on the page, you can set its relative vertical and horizontal position to Page, just like I have suggested in this post. // Set relative shape position to page. s.RelativeHorizontalPosition = RelativeHorizontalPosition.Page; s.RelativeVerticalPosition = RelativeVerticalPosition.Page; But in this case you have to calculate absolute position of shape on the page. And anyways if the anchor point the absolutely positioned shape will be moved to the next page, the shape will be moved to the next page too and content will be reflowed. 1 Like got it. I’ll try 1 Like
Innovation in Aging Oxford University Press Creating an Age-Friendly Public Health System Volume: 4, Issue: 1 DOI 10.1093/geroni/igz044 • • • • Altmetric ### Notes Abstract Keywords De Biasi, Wolfe, Carmody, Fulmer, Auerbach, and Albert: Creating an Age-Friendly Public Health System Translational Significance: An age-friendly public health system is one that recognizes aging as a core public health issue and leverages its skills and capacities to improve the health and well-being of older adults. The impacts of the Florida pilot project to be described in the proposed article will be scalable in local and state health departments across the country. The value of this project is its flexibility in design and application, with multilevel policy and programmatic impacts through multisector collaboration. Americans are living longer and more productive lives than ever before. The number of adults aged 65 and older increased by 33% over the past 10 years and is projected to nearly double by 2060 to 98 million, or 24% of the U.S. population (Administration on Aging & Administration for Community Living, 2018). The population of those ages 85 and older is also projected to grow from 6 million to 20 million by 2060. This is good news, as these older adults are also staying in the workforce, contributing their knowledge and skills, as well as their wisdom (Figure 1). Figure 1. The number of Americans aged 65 and older will be more than double by 2060. Source: PRB analysis of data from the U.S. Census Bureau. (https://www.prb.org/wp-content/uploads/2016/01/aging-us-population-bulletin-1.pdf). This rise in the number and proportion of older adults only presents part of the picture—the older adult population is becoming more racially and ethnically diverse. In 2014, 78% of older adults were non-Hispanic White, 9% were African American, 8% were Hispanic of any race, and 4% were Asian. By 2060, the percentage of non-Hispanic Whites is expected to drop to 55%, while the proportion of other racial groups will increase, with 22% of the population Hispanic, 12% African American, and 9% Asian (Federal Interagency Forum on Aging-Related Statistics, 2016). These changes are important because of racial and ethnic inequities in health and access to resources, as well as cultural differences in expectations of informal and formal care. There are also substantial variations in social and economic well-being among the older adult population. Older adults are less likely to live below the poverty line than other age groups, with 10% of those ages 65 and over in poverty in 2014, but this may not be an accurate indicator of economic vulnerability in later life (Federal Interagency Forum on Aging-Related Statistics, 2016). The federal poverty level fails to consider all older adults’ basic living costs, including those for health care and transportation, and thus it underestimates the extent of financial need. Also, since poverty increases with age, the growth of the oldest-old may lead to an increase in the number of older adults living in poverty. Research using a more age-specific measure of financial resources found that in California more than half of older adults living alone and more than one-quarter of older couples lack adequate income to cover basic expenses (Wallace, Padilla-Frausto, & Smith, 2010). Studies show that individuals subject to risk factors like racial, ethnic and socioeconomic disparities that accumulate over the life course are at greater risk for poor health and disease as older adults (Halfon & Hochstein, 2002). Given this and the increasing diversity along health and sociodemographic dimensions means that policies and programs designed to meet the needs of older adults must consider the needs and preferences of different subpopulations (Lehning & De Biasi, 2018). Health Affairs held a recent conference which highlighted that housing and health care costs for older adults in the “middle market” of income will soon be out of reach (Pearson et al., 2019). Millions of older adults have chronic conditions, such as diabetes, hearing loss, arthritis, and heart disease (National Council on Aging, 2018). Eighty percent of Medicare beneficiaries (the insurance program for those ages 65 and older) have one chronic condition and nearly 70% have two or more (Centers for Disease Control and Prevention [CDC], 2011a). Chronic diseases can limit one’s ability to perform daily activities and result in a loss of independence, sometimes resulting in a need for institutional care, in-home caregivers, or other long-term services and supports (CDC, 2013). And chronic diseases are costly, accounting for two thirds of all health care costs and 93% of Medicare spending (Centers for Medicare and Medicaid Services, 2012; Gerteis et al., 2014). Yet, evidence shows that prevention is effective in reducing chronic diseases such as diabetes and heart disease and in preventing injuries (CDC, 2009). Programs such as the Medicare Diabetes Prevention Program have been found to reduce and prevent diabetes in Medicare beneficiaries with an indication of prediabetes (Centers for Medicare and Medicaid Services, 2017). And programs that promote physical activity in older adults can increase mobility and stability to help combat Alzheimer’s disease and avoid frailty and falls (Evidence Based Leadership Council, 2019). Despite the fact that health problems can be prevented, managed, or delayed with a stronger focus on prevention, in 2017, public health represented just 2.5% of all health spending in the country—or $274 per person (McKillip & Ilakkuvan, 2019). Evidence also points to increased isolation and loneliness, with more older adults living away from their families, facing financial problems, having limited transportation and access to healthy food, and challenging housing options. Social isolation can negatively affect quality of life and contribute to an increased risk of morbidity and mortality (Bassuk, Glass, & Berkman, 1999). Research indicates loneliness can be more detrimental to health than smoking or obesity, increasing the risk for early death by 45% and the chance of developing dementia by 64% (Perissinotto, Cenzer & Covinsky, 2012; Worland, 2015). Social isolation has been estimated to account for$6.7 billion in additional Medicaid spending annually (AARP, 2018; Flowers et al., 2017). According to a recent report by the U.S. Congressional Budget Office (CBO) (2019), federal spending on Medicare has been increasing since 2005. And overall, U.S. spending for people ages 65 and older has grown significantly; as a share of gross domestic product, mandatory spending for people in this age group grew from 5.8% in 2005 to 7.5% in 2018 ($1.3 trillion). CBO projects that share would grow to 9.8% ($2.7 trillion) by 2029 and that in 10 years, the federal government will spend half of its budget on mandatory programs for this population—Social Security and Medicare (CBO, 2019). Despite these upward cost trends, growth of medical spending for older adults is actually slowing down, particularly for Medicare patients with chronic conditions (Cutler et al., 2019). Reduced spending on cardiovascular disease and risk factors (such as hypertension and diabetes) have been the biggest contributors to the spending slowdown (Cutler et al., 2019). This is due in part to the advances in science and resultant treatment that addresses heart disease and also to prevention programs that reduce risk factors related to heart disease (Stewart, Manmathan, & Wilkinson, 2017). Medically focused prevention, including preventive cardiovascular disease medications, also contributes to this spending slowdown (Figure 2). Yet there is plenty of room for more prevention-related interventions that will lead to better outcomes for Medicare beneficiaries and even lower spending for the nation. Public health prevention programs that work to prevent disease and complications of disease, improve quality of life, and reduce health care costs include physical activity and nutrition programs, arthritis prevention and control, diabetes prevention and control, early detection of cancer, heart disease, and stroke prevention and chronic disease self-management programs (CDC, 2011b; Hoffman & Mertzlufft, 2018). Many national efforts have prioritized older adult health and well-being, including the Older Americans Act, the recent recognition of the importance of addressing social determinants in Medicare, national convenings such as the White House Conference on Aging and the U.S. Department of Health and Human Services (HHS) Healthy Aging Summit, and reports such as Healthy Aging in Action (National Prevention Council, 2016). Despite this national attention to the importance of health in our later years, programs that address older adult health continue to be siloed and under-resourced, and the United States is not making progress toward a system approach to improve the health and well-being of our older adults. The CDC has no staff dedicated solely to healthy aging programs. Other federal services are scattered across agencies and their leaders are not collaborating. Programs such as Age-Friendly Communities, CDC’s Healthy Brain Initiative, Dementia-Friendly Communities, Age-Friendly Health Systems, and many other efforts are in nascent stages, operating independently. There is a growing recognition that social and environmental factors affect health (Castrucci & Auerbach, 2019), and one of the most effective strategies to influence those factors is through the collective action approach (Tufts Health Plan Foundation, n.d.). However, national, state, and local stakeholders have not broadly adopted an aligned and coordinated older adult health and well-being approach. ## Lack of Public Health Engagement in Older Adult Health Although public health efforts are largely responsible for the dramatic increases in longevity over the 20th century, there have been limited collaborations between the public health and aging sectors (Anderson, Goodman, Holtzman, Posner, & Northridge, 2012; Cutler & Miller, 2005). Older adults were not central to the public health agenda when public health emerged in cities in the 19th century (Kane, 1997). Similarly, in the mid-20th century, many policies designed to support older adult health and independence, including Medicare, Medicaid, and the Older Americans Act, did not explicitly include a role for public health. Over the past 50 years, some steps have been taken toward a more collaborative approach, such as the formation of the Aging and Public Health section of the American Public Health Association in 1978, or the mandated role for CDC in providing disease prevention and health promotion services offered through the Older Americans Act in 1987 (Anderson et al., 2012). However, public health agencies rarely have dedicated funding or initiatives targeting adults ages 65 and older. In recent decades, the aging network, comprising 56 State Units on Aging, 655 Area Agencies on Aging (AAAs), 243 Indian Tribal and Native Hawaiian Organizations, and thousands of service providers and volunteers, has increasingly focused on prevention and wellness. The national association for AAAs, the National Association of Area Agencies on Aging (N4A), is currently working to build the capacity of AAAs to partner with health care providers and payers, like current public health partnership efforts. The 2010 passage of the Affordable Care Act (ACA) is shifting the health care system to one with a broadened focus on prevention, wellness, and health, rather than only disease. As mandated by the ACA, in 2011, the National Prevention Council released the National Prevention Strategy with an overarching goal of increasing the number of Americans who are healthy at every stage of life. In 2016, the Council produced Healthy Aging in Action, highlighting programs that are advancing the National Prevention Strategy specifically for older adults. Central to this report is the need for multisector collaborations to achieve a goal of healthy aging (National Prevention Council, 2016). CDC has historically had a limited role in promoting the health and well-being of older adults. In 2001, CDC established the Prevention Research Centers Healthy Aging Research Network (PRC-HAN) to understand determinants of healthy aging and develop evidence-based community programs (Sleet, Moffett, & Stevens, 2008). The network consists of seven major universities and their affiliated communities, which collectively conduct research, develop and evaluate initiatives promoting healthy aging, and translate and disseminate research findings into evidence-based public health programs (Hunter et al., 2013). CDC has also supported effective falls prevention research and programs (Sleet et al., 2008). In 2005, CDC established the Healthy Brain Initiative to address Alzheimer’s disease and dementia, and in 2007, CDC published The Healthy Brain Initiative: A National Public Health Road Map to Maintaining Cognitive Health (2007). This Road Map led to a better understanding of public’s perception of cognitive health as well as a revision of CDC’s Behavioral Risk Factor Surveillance System (BRFSS) to include questions about confusion, memory loss, and caregivers. The next iteration, The Public Health Road Map for State and National Partnerships: 2013–2018 , emphasized the needs of caregivers and the importance of partnerships between public health and aging services professionals (Anderson & Egge, 2014). ## Public Health’s Potential Roles in Healthy Aging Public health needs to be a critical partner in all efforts to support and promote programs that improve the health and well-being of older adults. Evidence shows that disease prevention and health promotion programs are effective, and they are the domain of public health (Keck School of Medicine, n.d.). Throughout the 20th century, public health played a crucial role in adding years to life. In the 21st century, public health can play a crucial role in adding life to years. Recognizing the significant role the U.S. public health system can play in strengthening older adult health, Trust for America’s Health (TFAH) led a convening in 2017, funded by The John A. Hartford Foundation, to explore potential roles for public health in healthy aging. National, state, and local public health officials, aging experts, advocates, and service providers, and health care officials who participated in the convening strongly endorsed a greater role for public health in aging. Through an examination of case studies of older adults, participants identified gaps in services, supports, and policies needed to improve older adult health and well-being and considered the potential roles public health could play in filling these identified gaps. The resulting Framework for an Age-Friendly Public Health System outlines the functions that public health could fulfill, in collaboration with aging services and the health care sector to address the challenges and opportunities of an aging society. The main takeaway from the convening was the need for an Age-Friendly Public Health (AFPH) system that recognizes aging as a core public health issue. Healthy aging is defined in the Framework as (i) promoting health, preventing disease, injury, and frailty, and managing chronic conditions; (ii) optimizing physical, cognitive, and mental health; and (iii) facilitating social and civic engagement. This definition intentionally does not equate healthy aging with the absence of disease and disability. Instead, it portrays healthy aging as both an adaptive process in response to the challenges that can occur as we age, and a proactive process to reduce the likelihood, intensity, or impact of future challenges. Healthy aging involves maximizing physical, mental, emotional, and social well-being, while recognizing that aging is often accompanied by chronic illnesses and functional limitations. It also emphasizes the importance of meaningful involvement of older adults with others, such as friends, family members, neighbors, organizations, and the wider community. Although the public health sector has experience and skill in addressing these components of health for some populations, it has not traditionally focused attention on older adults. The Framework is not a prescriptive guide to action or a declaration of the public health sector’s oversight of certain activities. Not every community will need public health to assume each of these roles. Agencies and organizations in other sectors are already actively engaged in healthy aging but are not leveraging the expertise of public health professionals. Public health should work in partnership with these organizations to promote healthy aging. Furthermore, public health organizations lack the resources to focus on healthy aging and will thus need to carefully and thoughtfully prioritize their roles. The Framework offers a useful articulation of the potential contributions that public health should consider as it embraces a larger role in optimizing the health of older adults. To explore these functions more fully through actual public health experience, TFAH initiated the Florida-based Age-Friendly Public Health Learning and Action Network (AFPH Network), with funding from The John A. Hartford Foundation. TFAH created an application process to select county health departments (CHDs) to participate in the AFPH Network and conducted interviews with all prospective county teams. With support from the Florida Departments of Health and Elder Affairs, all applicants were invited to participate. The AFPH Network includes teams from 37 of Florida’s 67 CHDs, representing 65% of Florida’s overall population and 65% of the older adult population (Figure 3). Figure 3. Trust for America’s health: age-friendly public health logic model. ## Age-Friendly Health Systems’ Alignment With Age-Friendly Public Health The Age-Friendly Health Systems movement, initiated in 2017, recognizes that an all-in, national response is needed to embrace the health and well-being of the growing older adult population. Like public health, health systems, including payers, hospitals, clinics, community-based organizations, nursing homes, and home health care, need to adopt a new way of thinking that replaces unwanted care and services with aligned interventions that respect older adults’ goals and preferences. Becoming an Age-Friendly Health System entails reliably acting on a set of four evidence-based elements of high-quality care and services, known as the “4Ms,” for all older adults. When implemented together, the 4Ms represent a broad shift to focus on the needs of older adults: • (1) What Matters: Know and align care with each older adult’s specific health outcome goals and care preferences including, but not limited to, end-of-life care and across settings of care; • (2) Medication: If medication is necessary, use Age-Friendly medication that does not interfere with What Matters to the older adult, Mobility, or Mentation across settings of care; • (3) Mentation: Prevent, identify, treat, and manage dementia, depression, and delirium across settings of care; and The initiative to advance AFPH systems strives to create community-wide conditions to improve the health and well-being of older adults that should be seamless across the continuum from clinical care to community. Health care systems are managing their identified populations and expanding their focus beyond their walls and into their communities by addressing the 4Ms with individual patients. The public health system works to create the social and environmental conditions that address the needs of the whole population of older adults associated with housing, food availability, social engagement, transportation, and safety, as well as by providing educational information and promoting healthier behaviors, such as better nutrition and more physical activity. Public health also implements evidence-based programs and policies to prevent disease, frailty, and cognitive decline. Public health can assist in community-based interventions to improve the social determinants of health for older adults and provide services unique to older populations (Ogden, Richards, & Shenson, 2012). Public health already serves in this role for some populations, but typically not for older adults (Figure 5). ## Age-Friendly Public Health Alignment with Age-Friendly Communities In 2006, the World Health Organization (WHO) initiated a movement to create “Age-Friendly Communities,” those that encourage “active aging by optimizing opportunities for health, participation, and security in order to enhance quality of life as people age” (World Health Organization, 2007). Age-Friendly Communities (AFCs) typically focus on eight core community features that address the physical and social infrastructure to support the health across the life span: housing, transportation, social participation, respect and social inclusion, civic participation and employment, communication and information, community support and health services, and outdoor spaces and buildings (Alley, Liebig, Pynoos, Banerjee, & Choi, 2007) The AARP is the U.S. agent for the WHO AFC, and as of this writing, there are four states (Colorado, Massachusetts, Florida, and New York) and 368 U.S. cities and communities that have joined the AARP Network of Age-Friendly States and Communities (Figure 4). Figure 4. AARP: the domains of age-friendly communities. There is considerable alignment between the AFC domains and the social determinants of health, which are core to the work of public health, and public health can play an important role in helping to advance AFC initiatives and complement the associated policy and infrastructure changes that support community health (Figure 5). Figure 5. Used with permission from The John A. Hartford Foundation. ## Framework for an Age-Friendly Public Health System : Five Roles for Public Health’s Engagement in Older Adult Health (Lehning & De Biasi, 2018) Activities and opportunities that demonstrate the roles enumerated in the Framework are described in the following sections. ### Connecting and Convening Multiple Sectors and Professions That Provide the Supports, Services, and Infrastructure to Promote Healthy Aging Addressing the full range of individual and community needs to support healthy aging requires the active contribution of a variety of stakeholders. As mentioned earlier, many different organizations and professionals are already working to improve older adult health and well-being, yet they operate in silos with limited opportunities to communicate and collaborate. The first role of public health is to connect and convene the multiple sectors and professions that provide the supports, services, and infrastructure to promote healthy aging. One example that highlights this role is in promoting and supporting mobility and physical activity. Regular physical activity reduces the risk of chronic conditions, such as diabetes and cardiovascular disease, prevents cognitive and functional decline, and decreases the likelihood of falls and subsequent injury (Nelson et al., 2007); however, only a minority of older adults meet the recommendations outlined in the 2018 Physical Activity Guidelines for Americans (U.S. Department of Health and Human Services, 2018). There are numerous barriers to regular physical activity in later life, including restricted access to indoor and outdoor recreational facilities, concerns about neighborhood safety, limited individual knowledge about the benefits of exercise (Schutzer & Graves, 2004), and the absence of walkable neighborhood features (e.g., well-maintained sidewalks, raised crosswalks, speed bumps, and a variety of food and shopping destinations; Clark, 1999). Public health could bring together the multiple actors needed to collaboratively address these barriers, including law enforcement, public works, parks and recreation, city planning, local businesses, health care systems, senior centers, and other community groups. Public health can promote communication across sectors, facilitate the sharing of knowledge and resources, including identifying relevant scientific evidence and effective evidence-based programs, and, in some cases, advance shared goal setting and action plans. A second example is the need to address social isolation in later life. Social isolation can involve an objective separation from a social network, such as living alone, or more subjective feelings of loneliness (Golden et al., 2009). Approximately 12 million adults over the ages of 65 live alone (West, Cole, Goodkind, & He, 2014), and studies report that 15%–45% of older adults experience loneliness (Golden et al., 2009; Lauder, Sharkey, Mummery, 2004; Pinquart & Sorensen, 2001). Public health can work with community-based organizations to address loneliness and social isolation by providing opportunities for social interaction and the development of new friendships. For example, through the development of volunteer programs such as AARP’s Experience Corps, which engages older adults as volunteer tutors for young people in communities and schools (Martinez et al., 2006). Public health professionals can also partner with “Villages,” grassroots consumer-driven, community-based organizations that aim to promote aging-in-place by combining services, participant engagement, and peer support. First emerging in the early 2000s, currently, there are more than 200 Villages in the United States in operation or development (Village to Village Network, n.d.). Studies suggest that Villages are a promising approach to increasing members’ social engagement and connecting with a variety of formal and informal community supports (including those offered by public health departments) plays a crucial role in their ability to do so (Graham, Scharlach, & Price Wolf, 2014). When convening sectors, professions, and organizations, public health typically leverages its seat at the table to ensure a focus on prevention, including policy, systems, and environmental change, to support population-level health improvement. A greater focus on prevention can help forestall declines in health and well-being, such as falls prevention and initiatives to promote physical activity or brain health. A focus on policy, systems, and environmental change complements the efforts to address the needs of individual older adults by focusing on improvements that affect entire populations or communities. ### Coordinating Existing Supports and Services to Avoid Duplication of Efforts, Identify Gaps, and Increase Access to Services and Supports Navigating the wide variety of supports and services for older adults can be confusing and overwhelming. Supports and services are offered by a range of providers in different locations and settings, with different funding sources and variations in eligibility requirements. A second critical role for public health is, therefore, to coordinate existing supports and services to avoid duplication of efforts, identify gaps, and increase access. If resources are available, health departments can create an aging specialist role to facilitate this coordination and ensure that older adults are considered in any other public health programming or research. ### Collecting Data to Assess Community Health Status (Including Inequities) and Aging Population Needs to Inform the Development of Interventions All sectors are becoming increasingly data driven to ensure they have all the information they need to address their target populations and most pressing problems. A third role for public health is to call attention to the health status, needs and assets of a community’s aging population to inform their community health needs assessment (CHNA), a critical step to set goals and implement strategies for health improvement. Public health can document health status by collecting and analyzing data, including data from multiple sectors and sources. In response to the needs of the Florida AFPH Network CHDs for data on the health of the older adults in their counties, the Florida Department of Health, in collaboration with the Florida Department of Elder Affairs, created new “Aging in Florida” health profiles. Each participating CHD is now analyzing their county data to assess the health status and needs of older adults, with a particular focus on improving health equity. Each AFPH Network team has analyzed the data, prioritized the needs and opportunities, and developed an action plan. CDC’s BRFSS as noted above was augmented to include two voluntary modules for states to assess cognitive decline and caregiver health. These two modules are in use in 35 states (with 21 states using the caregiver module, 21 states measuring cognitive decline, and 7 states implementing both). Public health departments can advocate for wider implementation of these modules and can analyze and disseminate the data in states that have implemented one or both. Public health can also provide important information about older adults using hotspot analysis, a technique to examine the geographic distribution of populations, features, or events. Geographic data can be essential in mapping neighborhoods where older adults are at a higher risk for falls or have less access to a grocery store. Hotspot analyses showing areas with high concentrations of older adults, particularly those living alone or with a health challenge, could also enhance emergency preparedness planning, which is critical because older adults often experience higher rates of injury and death and lower rates of economic recovery following major natural disasters (Bolin, & Klenow, 1983). The Department of Health and Human Services’ Office of the Assistant Secretary for Preparedness and Response developed the emPOWER Initiative through a partnership with the Centers for Medicare and Medicaid Services. This initiative provides federal data and mapping tools to public health departments to help identify vulnerable populations who rely on electricity-dependent medical and assistive devices or certain health care services such as dialysis machines, oxygen tanks, and home health services. The emPOWER Map is a public, interactive map that provides monthly de-identified Medicare data down to the ZIP code level, and near real-time hazard tracking services. Together, this information provides enhanced situational awareness and actionable information for assisting areas and at-risk populations that may be impacted by severe weather, wildfires, earthquakes, and other disasters. Public health and emergency management officials, AAAs, and community planners can use emPOWER to better understand the types of resources that may be needed in an emergency. For instance, these data can inform power restoration prioritization efforts, identify optimal locations for shelters, determine transportation needs, and anticipate potential emergency medical assistance requests. The data are also used to conduct outreach to at-risk older adult populations prior to, during, or after an emergency. Public health can also provide hospitals and health systems with information about their local older adult population in their surrounding communities as part of CHNAs, which are required every 3 years for all tax-exempt hospitals, to assess and prioritize the health needs of their geographic community and develop and implement action steps to address those needs (Office for State, Tribal, Local, and Territorial Support, Public Health Law Program, 2013). At least one local, state, or regional public health department must be involved in this process. Public health can thus call attention to the needs of older adults and ensure programs and resources are dedicated to this population. ### Conducting, Communicating, and Disseminating Research Findings and Best Practices to Support Healthy Aging Public health researchers, policymakers, and practitioners can also play key roles in supporting healthy aging by conducting, communicating, and disseminating research findings and best practices to empower individuals to engage in healthy behaviors; support the provision of effective services; and contribute to the creation of safe and healthy community environments. There is a large body of research concerning healthy aging, yet limited clearinghouses for interested parties to find best practices or resources. Public health plays a key role in translating research findings into practice through education of providers and providing implementation support. Public health organizations could serve as central locations for best practices, toolkits, and research on healthy aging. The ready availability of this information would enhance the capacity of other sectors and professions to address the needs of older adults. Public health is already serving this function in the area of cognitive health. Approximately 10%, or 3.6 million, of all Medicare beneficiaries over the age of 65 living in the community had some form of dementia in 2011 (Federal Interagency Forum on Aging-Related Statistics, 2016). CDC’s Healthy Brain Initiative promotes a role for public health in maintaining or improving cognitive functioning in later life. As part of this initiative, CDC and the Alzheimer’s Association developed a guide as noted above outlining strategies for public health to promote cognitive health, address cognitive impairment, and support dementia caregivers (Alzheimer’s Association & CDC, 2013). A key component of this initiative is supporting applied research and translating evidence into practice. Public health also assists with neurocognitive disorder public awareness campaigns around modifiable risk factors, signs of disease progression, strategies for addressing changes in behavior, and community supports. ### Complementing and Supplementing Existing Supports and Services, Particularly in Terms of Integrating Clinical and Population Health Approaches The fifth role for public health is complementing and supplementing existing supports and services in terms of integrating clinical and population health approaches. Existing public health programs address a wide range of health issues, from infectious disease to chronic disease; from education campaigns that reach the general public, to targeted and focused home visits by educators; from the enforcement of environmental regulations addressing long-term health risks, such as lack of clean air and water, to the response to rare and catastrophic events. Furthermore, public health is focused on the entire life course, providing programs and policies, such as maternal and child health, workplace safety, and tobacco-free initiatives, that ultimately support healthy aging later in life. Each of these current activities could be assessed to determine whether it is adequately meeting the needs of older adults and, when necessary, modified to better do so. For example, aging services are beginning to recognize the value of community health workers (CHWs), a public health approach that has long been working with populations with limited access to formal health and social services. CHWs are trusted members of a community and conduct outreach, provide education, and serve as a liaison to formal systems of support. Preliminary research indicates the promise of CHWs for reducing health care costs, supporting transitions back home from the hospital, and connecting low-income senior housing residents to community services (Rush, 2019). This may be a particularly effective strategy to address health inequities and to address social isolation. Public health can also complement existing programs for informal caregivers that provide assistance to older adults with disabilities. Community-based supports for caregivers are often fragmented from each other and disconnected from the health and long-term care systems (Riggs, 2003). The National Family Caregiver Support Program provides a range of services, including counseling, case management, respite care, and training, particularly in terms of adapting to the caregiver role and developing strategies for self-care, yet gaps remain. Public health can provide critical education and training on performing the tasks needed to support older care recipients, such as safely bathing or transferring from a bed to a chair, or addressing the behavioral changes associated with dementia. ### Conclusion Public health’s mission is to improve the health and safety of our nation. Yet public health is not adequately engaged in efforts to improve the health and well-being of the older adult population, despite the overwhelming evidence of the effectiveness of prevention and health promotion activities that improve older adult health and quality of life. TFAH, with support from The John A. Hartford Foundation, developed a Framework outlining the key roles that public health can fulfill, in alignment with partners in aging and other sectors, to modernize the public health system by translating evidence on healthy aging into public health practice. Increasing the engagement of local, state, and federal public health offers promise to improve the health and well-being of this population—a precious resource for our nation—and bend the rising cost curve by investing more in prevention. ## Acknowledgment Amanda Lehning, PhD, MSW. Dr Lehning is acknowledged for her contributions to the development of the Framework for Creating an Age-Friendly Public Health System (See citations for reference). ## Funding This work is supported by a grant from The John A. Hartford Foundation (2017-0235). None reported. ## References 1 2 3 Alley D., Liebig P., Pynoos J., Banerjee T., & Choi I. H (2007). . Creating elder-friendly communities: Preparations for an aging society. Journal of Gerontological Social Work, 49, , pp.1–18. doi:, doi: 10.1300/J083v49n04_01 4 Alzheimer’s Association & Centers for Disease Control and Prevention. (2013). The healthy brain initiative: The public health road map for state and national partnerships, 2013–2018. Chicago, IL: Alzheimer’s Association. Retrieved from https://www.cdc.gov/aging/healthybrain/roadmap.htm. Accessed May 2019. 5 Anderson L. A., & Egge R (2014). . Expanding efforts to address Alzheimer’s disease: The Healthy Brain Initiative. Alzheimer’s & Dementia, 10, , pp.S453–S456. doi:, doi: 10.1016/j.jalz.2014.05.1748 6 Anderson L. A., Goodman R. A., Holtzman D., Posner S. F., & Northridge M. E (2012). . Aging in the United States: Opportunities and challenges for public health. American Journal of Public Health, 102, , pp.393–395. doi:, doi: 10.2105/AJPH.2011.300617 7 Bassuk S. S., Glass T. A., & Berkman L. F (1999). . Social disengagement and incident cognitive decline in community-dwelling elderly persons. Annals of Internal Medicine, 131, , pp.165–173. doi:, doi: 10.7326/0003-4819-131-3-199908030-00002 8 Bolin R., & Klenow D. J (1983). . Response of the elderly to disaster: An age-stratified analysis. International Journal of Aging & Human Development, 16, , pp.283–296. 9 Castrucci B., & Auerbach J (2019). . Meeting individual social needs falls short of addressing social determinants of health. Health Affairs Blog. Retrieved from https://www.healthaffairs.org/do/10.1377/hblog20190115.234942/full/. Accessed September 2019. 10 Centers for Disease Control and Prevention. (2009). The power of prevention, 2009: Chronic disease, the public health challenge of the 21st century. Atlanta, GA: National Center for Chronic Disease Prevention and Health Promotion, US Department of Health and Human Services. Retrieved from https://www.cdc.gov/chronicdisease/pdf/2009-Power-of-Prevention.pdf. Accessed May 2019. 11 Centers for Disease Control and Prevention. (2011a). Healthy aging at a glance, 2011: Helping people to live long and productive lives and enjoy a good quality of life. Atlanta, GA: Centers for Disease Control and Prevention, US Department of Health and Human Services. Retrieved from https://stacks.cdc.gov/view/cdc/22022. Accessed May 2019. 12 Centers for Disease Control and Prevention. (2011b). Sorting through the evidence for the arthritis self-management program and the chronic disease self-management program. Atlanta, GA: Centers for Disease Control and Prevention, US Department of Health and Human Services. Retrieved from https://www.cdc.gov/arthritis/docs/ASMP-executive-summary.pdf. Accessed May 2019. 13 Centers for Disease Control and Prevention. (2013). The state of aging and health in America in 2013. Atlanta, GA: Centers for Disease Control and Prevention, US Department of Health and Human Services. Retrieved from https://www.cdc.gov/aging/pdf/state-aging-health-in-america-2013.pdf. Accessed May 2019. 14 Centers for Medicare and Medicaid Services. (2012). Chronic conditions among Medicare beneficiaries, chartbook (2012 ed.). Baltimore, MD Retrieved from https://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/Chronic-Conditions/2012ChartBook.html. Accessed May 2019. 15 Centers for Medicare and Medicaid Services. (2017). Medicare diabetes prevention program. Retrieved from https://innovation.cms.gov/initiatives/medicare-diabetes-prevention-program/. Accessed September 2019. 16 Clark D. O (1999). . Identifying psychological, physiological, and environmental barriers and facilitators to exercise among older low income adults. Journal of Clinical Geropsychology, 5, , pp.51–62. doi:10.1023/A:1022942913555 17 Congressional Budget Office. (2019). The budget and economic outlook: 2019 to 2029. Washington, DC: Congress of the United States, Congressional Budget Office. Retrieved from https://www.cbo.gov/publication/54918. Accessed May 2019. 18 Cutler D. M., Ghosh K., Messer K. L., Raghunathan T. E., Stewart S. T., & Rosen A. B (2019). . Explaining the slowdown in medical spending growth among the elderly, 1999–2012. Health Affairs, 38, , pp.222–229. doi:, doi: 10.1377/hlthaff.2018.05372 19 Cutler D., & Miller G (2005). . The role of public health improvements in health advances: The twentieth-century United States. Demography, 42, , pp.1–22. doi:, doi: 10.1353/dem.2005.0002 20 21 Federal Interagency Forum on Aging-Related Statistics. (2016). Older Americans 2016: Key indicators of well-being. Washington, DC: U.S. Government Printing Office. Retrieved from https://agingstats.gov/docs/LatestReport/Older-Americans-2016-Key-Indicators-of-WellBeing.pdf. Accessed May 2019. 22 Flowers L., Houser A., Noel-Miller C., Shaw J., Bhattacharya J., Schoemaker L., & Farid M (2017). . Medicare spends more on socially isolated older adults. Insight on the Issues, 125, , pp.1119–1143. 23 Gerteis J., Izrael D., Deitz D., LeRoy L., Ricciardi R., Miller T., & Basu J (2014). Multiple chronic conditions chartbook. Rockville, MD: Agency for Healthcare Research and Quality, , pp.7–14. Retrieved from https://www.ahrq.gov/sites/default/files/wysiwyg/professionals/prevention-chronic-care/decision/mcc/mccchartbook.pdf. Accessed June 2019. 24 Golden J., Conroy R. M., Bruce I., Denihan A., Greene E., Kirby M., & Lawlor B. A (2009). . Loneliness, social support networks, mood and wellbeing in community-dwelling elderly. International Journal of Geriatric Psychiatry, 24, , pp.694–700. doi:, doi: 10.1002/gps.2181 25 Graham C. L., Scharlach A. E., & Price Wolf J (2014). . The impact of the “Village” model on health, well-being, service access, and social engagement of older adults. Health Education & Behavior, 41, , pp.91S–97S. doi:, doi: 10.1177/1090198114532290 26 Halfon N., & Hochstein M (2002). . Life course health development: An integrated framework for developing health, policy, and research. The Milbank Quarterly, 80, , pp.433–79, iii. 27 Halloran L (2013). . Health promotion and disability prevention in older adults. The Journal for Nurse Practitioners, 9, , pp.546–547. doi:, doi: 10.1016/j.nurpra.2013.05.023 28 Hoffman D., & Mertzlufft J (2018). Why preventing chronic disease is essential – Prevention works. Atlanta, GA: National Association of Chronic Disease Directors2018 Retrieved from https://cdn.ymaws.com/www.chronicdisease.org/resource/resmgr/website-2018/government_affairs_/comms_wp_investingincd2018fa.pdf. Accessed September 2019. 29 Hunter R. H., Anderson L. A., Belza B., Bodiford K., Hooker S. P., Kochtitzky C. S.,…Satariano W. A (2013). . Environments for healthy aging: Linking prevention research and public health practice. Preventing Chronic Disease, 10, , pp.E55. doi:, doi: 10.5888/pcd10.120244 30 Kane R. L (1997). The public health paradigm. Public health and aging. Baltimore, MD: Johns Hopkins University, , pp.3 Retrieved from https://jhupbooks.press.jhu.edu/title/public-health-and-aging. Accessed June 2019. 31 Keck School of Medicine (n.d). Prevention and public health: The connection. University of Southern California Retrieved from https://mphdegree.usc.edu/blog/prevention-and-public-health-the-connection/. Accessed September 2019. 32 Lauder W., Sharkey S., & Mummery K (2004). . A community survey of loneliness. Journal of Advanced Nursing, 46, , pp.88–94. doi:, doi: 10.1111/j.1365-2648.2003.02968.x 33 Lehning A. J., & De Biasi A (2018). Creating an age-friendly public health system: Challenges, opportunities and next steps. Washington, DC: Trust for America’s Health. Retrieved from https://www.tfah.org/wp-content/uploads/2018/09/Age_Friendly_Public_Health_Convening_Report_FINAL__1___1_.pdf. Accessed April 2019. 34 Martinez I. L., Frick K., Glass T. A., Carlson M., Tanner E., Ricks M., & Fried L. P (2006). . Engaging older adults in high impact volunteering that enhances health: Recruitment and retention in the Experience Corps Baltimore. Journal of Urban Health, 83, , pp.941–953. doi:, doi: 10.1007/s11524-006-9058-1 35 Mate K. S., Berman A., Laderman M., Kabcenell A., & Fulmer T (2018). . Creating age-friendly health systems – A vision for better care of older adults. Healthcare, 6, , pp.4–6. doi:, doi: 10.1016/j.hjdsi.2017.05.005 36 McKillip M., & Ilakkuvan V (2019). The impact of chronic underfunding of America’s public health system: Trends, risks, and recommendations, 2019. Washington, DC: Trust for America’s Health Retrieved from https://www.tfah.org/report-details/2019-funding-report/. Accessed April 2019. 37 National Council on Aging. (2018). Healthy aging. Arlington, VA: National Council on Aging Retrieved from https://www.ncoa.org/wp-content/uploads/2018-Healthy-Aging-Fact-Sheet-7.10.18-1.pdf. Accessed May 2019. 38 National Prevention Council. (2016). Healthy aging in action: Advancing the national prevention strategy. Washington, DC: U.S. Department of Health and Human Services, Office of the Surgeon General. Retrieved from https://www.cdc.gov/aging/pdf/healthy-aging-in-action508.pdf. Accessed April 2019. 39 Nelson M. E., Rejeski W. J., Blair S. N., Duncan P. W., Judge J. O., King A. C.,…Castaneda-Sceppa C.; American College of Sports Medicine; American Heart Association (2007). . Physical activity and public health in older adults: Recommendation from the American College of Sports Medicine and the American Heart Association. Circulation, 116, , pp.1094–1105. doi:, doi: 10.1161/CIRCULATIONAHA.107.185650 40 Office of Disease Prevention and Health Promotion. (n.d). Older adults: Prevention, Healthy People 2020 (OA-1 and OA-2, respectively) Retrieved from https://www.healthypeople.gov/2020/topics-objectives/topic/older-adults/objectives, Accessed April 2019. 41 Office for State, Tribal, Local, and Territorial Support, Public Health Law Program. (2013). Summary of the Internal Revenue Service’s April 5, 2013, Notice of Proposed Rulemaking on Community Health Needs Assessments for Charitable Hospitals. Retrieved from http://www.astho.org/Summary-of-IRS-Proposed-Rulemaking-on-CHNA/. Accessed May 2019. 42 Ogden L. L., Richards C. L., & Shenson D (2012). . Clinical preventive services for older adults: The interface between personal health care and public health services. American Journal of Public Health, 102, , pp.419–425. doi:, doi: 10.2105/AJPH.2011.300353 43 Pearson C. F., Quinn C. C., Loganathan S., Datta A. R., Mace B. B., & Grabowski D. C (2019). . The forgotten middle: Many middle-income seniors will have insufficient resources for housing and health care. Health Affairs (Project Hope), 38, , pp.851–859. 44 Perissinotto C. M., Stijacic Cenzer I., & Covinsky K. E (2012). . Loneliness in older persons: A predictor of functional decline and death. Archives of Internal Medicine, 172, , pp.1078–1083. doi:, doi: 10.1001/archinternmed.2012.1993 45 Pinquart M., & Sorensen S (2001). . Influences on loneliness in older adults: A meta-analysis. Basic and Applied Social Psychology, 23, , pp.245–266. doi:, doi: 10.1207/S15324834BASP2304_2 46 Riggs J (2003). . A family caregiver policy agenda for the twenty-first century. Generations, 27, , pp.68–73. https://www.questia.com/library/journal/1P3-601389291/a-family-caregiver-policy-agenda-for-the-twenty-first. Accessed May 2019. 47 Rush C. H (2019). . Community health workers moving to new roles as more seek to age in place. AgeBlog. Retrieved from http://www.asaging.org/blog/community-health-workers-moving-new-roles-more-seek-age-place. Accessed May 2019. 48 Schutzer K. A., & Graves B. S (2004). . Barriers and motivations to exercise in older adults. Preventive Medicine, 39, , pp.1056–1061. doi:, doi: 10.1016/j.ypmed.2004.04.003 49 Shenson D., Moore R. T., Benson W., & Anderson L. A (2015). . Polling places, pharmacies, and public health: Vote & Vax 2012. American Journal of Public Health, 105, , pp.e12–e15. doi:, doi: 10.2105/AJPH.2015.302628 50 Sleet D. A., Moffett D. B., & Stevens J (2008). . CDC’s research portfolio in older adult fall prevention: A review of progress, 1985–2005, and future research directions. Journal of Safety Research, 39, , pp.259–267. doi:, doi: 10.1016/j.jsr.2008.05.003 51 Stewart J., Manmathan G., & Wilkinson P (2017). . Primary prevention of cardiovascular disease: A review of contemporary guidance and literature. JRSM Cardiovascular Disease, 6: , pp.1–9. 52 Tufts Health Plan Foundation. (n.d). Age-friendly communities conference Retrieved from http://www.tuftshealthplanfoundation.org/community-impact.php?page=community-impact/communities-conference. Accessed May 2019. 53 U.S. Department of Health and Human Services. (2018). Physical activity guidelines for Americans (2nd ed.). Washington, DC: U.S. Department of Health and Human Services. Retrieved from https://health.gov/paguidelines/second-edition/. Accessed June 2019. 54 Village to Village Network. (n.d). Village map. Retrieved from http://www.vtvnetwork.org/content.aspx?page_id=1905&club_id=691012. Accessed May 2019. 55 Wallace S. P., Padilla-Frausto D. I., & Smith S. E (2010). Older adults need twice the federal poverty level to make ends meet in California (pp. , pp.1–8). PB2010-8. Policy Brief (UCLA Center for Health Policy Research) Retrieved from https://www.ncbi.nlm.nih.gov/pubmed/20860105. Accessed May 2019. 56 West L. A., Cole S., Goodkind D., & He W (2014). 65+ in the United States: 2010 (P , pp.23–212). US Census Bureau Retrieved from https://www.census.gov/library/publications/2014/demo/p23-212.html. Accessed April 2019. 57 Worland J (2015). . Why loneliness may be the next big public-health issue. Time Magazine. Retrieved from http://time.com/3747784/loneliness-mortality/. Accessed April 2019. 58 World Health Organization. (2007). Global age-friendly cities: A guide. World Health Organization. Retrieved from https://www.who.int/ageing/publications/Global_age_friendly_cities_Guide_English.pdf. Accessed April 2019. Citing articles via https://www.researchpad.co/tools/openurl?pubtype=article&doi=10.1093/geroni/igz044&title=Creating an Age-Friendly Public Health System&author=Anne De Biasi,Megan Wolfe,Jane Carmody,Terry Fulmer,John Auerbach,Steven M Albert,&keyword=Evidence-based practice,Longevity,Medicaid/Medicare,Organizational/institutional issues,Public health system,Social movement,Successful aging,&subject=Special Issue: Aging and Public Health,Original Research Article,AcademicSubjects/SOC02600,
## A community for students. Sign up today Here's the question you clicked on: ## anonymous one year ago help will give medal Evaluate the following expression using the values given: Find 3x − y − 3z if x = −2, y = 1, and z = −2. (1 point) −13 −1 1 13 I got 0 as answer but it not up there explain what I am doing wrong plz • This Question is Closed 1. whovianchick 3x − y − 3z 3(-2) − (1) − 3(-2) -6-1--6 -6-1+6 -7+6 -1 B 2. anonymous heyy. i thought we talked about this 3. whovianchick ok dad 4. anonymous |dw:1442601860263:dw| 5. anonymous I forgot the multipulcation my bad 6. anonymous $(3\times-2) - 1 - (3\times-2)$ 7. anonymous need any more help? #### Ask your own question Sign Up Find more explanations on OpenStudy Privacy Policy
# Are the marginals of the multivariate t distribution univariate Student t distributions? Are the marginals of the Multivariate t distribution with $\nu$ degrees of freedom univariate Student t distributions with $\nu$ degrees of freedom?
Location of Repository ## Topological Interaction between Loop Structures in Polymer Networks and the Nonlinear Rubber Elasticity ### Abstract We numerically examine the nonlinear rubber elasticity of topologically constrained polymer networks. We propose a simple and effective model based on Graessley and Pearson's topological model (GP model) for describing the topological effect. The main point is to take account of a nonequilibrium effect in the synthesis process of the polymer network. We introduce a new parameter $\gamma$ to describe entropic contributions from the entanglement of polymer loops, which may be determined from the structural characteristics of the sample. The model is evaluated in the light of experimental data under uniaxial and biaxial deformations. As a result, our model exhibits uniaxial behaviors which are common to many elastomers in various deformation regimes such as Mooney-Rivlin's relation in small extension, stress divergence in the elongation limit and the declined stress in compression. Furthermore, it is also qualitatively consistent with biaxial experiments, which can be explained by few theoretical models.Comment: 7 page Topics: Condensed Matter - Soft Condensed Matter, Condensed Matter - Statistical Mechanics Year: 2011 OAI identifier: oai:arXiv.org:1104.3277
1. ## trigo again Find the minimum value of the following expression and the value of x between 0 and 360 when the minimum occurs . 3 cos 2x + sin 2x --- this is the expression 3 cos 2x + sin 2x = r cos(2x-a) $r=\sqrt{10}$ tan a=1/3 , a =18.43 When minimum occurs , \sqrt{10}cos(2x-18.43)=-\sqrt{10} cos(2x-18.43)=-1 2x-18.43=180 or 2x-18.43=270 Then i get my 2 values of x from there . But i am wrong . Where is my mistake >? 2. Originally Posted by thereddevils Find the minimum value of the following expression and the value of x between 0 and 360 when the minimum occurs . 3 cos 2x + sin 2x = r cos(2x-a) $r=\sqrt{10}$ tan a=1/3 , a =18.43 When minimum occurs , \sqrt{10}cos(2x-18.43)=-\sqrt{10} cos(2x-18.43)=-1 2x-18.43=180 or 2x-18.43=270 Then i get my 2 values of x from there . But i am wrong . Where is my mistake >? Which expression are you trying to optimize? You have posted an equation with two expressions, one on each side. 3. I have not looked at the first part of your work, but this caught my eye: cos(2x-18.43)=-1 2x-18.43=180 or 2x-18.43=270 That should be 540°, not 270°. The minimum values of the cosine function occur at odd multiples of 180°. Now, although we have the restriction $0^{\circ} \le x < 360^{\circ}$, you have a 2x, which means that $0^{\circ} \le 2x <\; {\color{red}720}^{\circ}$. That's why the 2nd number above should be 540°. 01 4. Originally Posted by apcalculus Which expression are you trying to optimize? You have posted an equation with two expressions, one on each side. edited 5. Originally Posted by yeongil I have not looked at the first part of your work, but this caught my eye: That should be 540°, not 270°. The minimum values of the cosine function occur at odd multiples of 180°. Now, although we have the restriction $0^{\circ} \le x < 360^{\circ}$, you have a 2x, which means that $0^{\circ} \le 2x <\; {\color{red}720}^{\circ}$. That's why the 2nd number above should be 540°. 01 thanks .. i should hv noticed that ....