https://encyclopediaofmath.org/wiki/Semi-classical_approximation

# Semi-classical approximation
quasi-classical approximation
An asymptotic representation, that is, the asymptotics as $h \rightarrow 0$ (where $h$ is Planck's constant), of the solutions of the equations of quantum mechanics. The Schrödinger equation

$$\tag{1} i h \frac{\partial \psi}{\partial t} = - \frac{h^2}{2} \Delta \psi + V(x) \psi, \quad x \in \mathbf R^n,$$

describes the motion of a quantum-mechanical particle in a potential field $V(x)$. The motion of a classical particle is described by the Hamilton–Jacobi equation (cf. Hamilton–Jacobi theory)

$$\tag{2} \frac{\partial S}{\partial t} + \frac{1}{2} (\nabla S)^2 + V(x) = 0$$

or by the Hamiltonian system

$$\tag{3} \frac{dx}{dt} = p, \quad \frac{dp}{dt} = - \nabla V(x).$$
The Cauchy problem for the Schrödinger equation,
$$\tag{4} \psi\mid_{t=0} = \phi(x) \exp\left[\frac{i}{h} S_0(x)\right],$$
is compared with the Cauchy problem for the system (3):
$$\tag{5} x\mid_{t=0} = y, \quad p\mid_{t=0} = \nabla S_0, \quad y \in \mathbf R^n$$
(here the functions $\phi, S_0, V$ are smooth, $S_0, V$ are real-valued and $\phi$ is of compact support). The asymptotics of the solution $\psi(t, x)$ as $h \rightarrow 0$, for $0 \leq t \leq T$ and small $T > 0$, have the form:
$$\tag{6} \psi(t, x) \sim \exp\left[\frac{i}{h} S(t, x)\right] \sum_{j=0}^\infty (-ih)^j \phi_j(t, x).$$
Here $S(t, x)$ is the solution of (2) with Cauchy data $S\mid_{t=0} = S_0(x)$ (the classical action), while
$$\phi_0(t, x) = \phi(0, y) \sqrt{\frac{J(0, y)}{J(t, y)}}, \qquad J(t, y) = \det \frac{\partial x(t, y)}{\partial y},$$
where $x = x(t, y)$, $p = p(t, y)$ is the solution of (3), (5). The functions $\phi_j$ for $j \geq 1$ are determined from the recurrent system of transport equations (ordinary differential equations along the trajectories of the system (3)), so that all terms of the asymptotics are expressed in the language of classical mechanics. The Bohr correspondence principle asserts: as $h$ tends to zero, the quantum laws must go over into the classical laws. The method of seeking asymptotics in the form (6) was proposed by P. Debye and has been widely applied in quantum mechanics.
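A brief sketch of where the transport equations come from may be helpful (the display below is standard WKB reasoning added for orientation, not one of the article's tagged formulas): substituting (6) into (1) and collecting equal powers of $h$ gives at order $h^0$ the Hamilton–Jacobi equation (2), and at order $h^1$ the first transport equation

$$\frac{\partial \phi_0}{\partial t} + \nabla S \cdot \nabla \phi_0 + \frac{1}{2} (\Delta S)\, \phi_0 = 0,$$

which, since $dx/dt = \nabla S$ along the trajectories of (3), is an ordinary differential equation for $\phi_0$ along those trajectories.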
The asymptotic solution of the problem (1), (4) in the large (that is, for any finite time) is constructed via the canonical operator of V.P. Maslov [3]. The Cauchy data (5) fill out an $n$-dimensional Lagrangian manifold $\Lambda^n$ in the phase space $\mathbf R^{2n}_{x,p}$. Its translates $\Lambda_t^n = g^t \Lambda^n$ along the trajectories of the system (3) are also Lagrangian manifolds; their union $\Lambda^{n+1} = \cup_{-\infty < t < \infty} \Lambda_t^n$ is an $(n+1)$-dimensional Lagrangian manifold in the phase space $\mathbf R^{2n+2}$ with coordinates $(t, x, p_0, p)$. For the canonical operator $K = K_{\Lambda^{n+1}}$ corresponding to $\Lambda^{n+1}$, the following commutation relation holds:
$$\tag{7} \widehat{L} K \phi = - i h K \dot\phi + O(h^2),$$
where $\dot \phi$ is the derivative along the system (3) and $\widehat{L}$ is the Schrödinger operator. The asymptotics of the solution $\psi$ in the large are given by the formula
$$\tag{8} \psi(t, x) \sim K\left[\sum_{j=0}^\infty (-ih)^j \chi_j(t, x)\right],$$
where the functions $\chi_j$ are determined from the Cauchy data (4) by means of the transport equations and can be expressed in terms of classical mechanics. At a non-focal point $(t_0, x^0)$ the asymptotics have the form

$$\psi(t_0, x^0) = \sum_{j=1}^N \frac{\phi_j}{\sqrt{|J_j|}} \exp\left(\frac{i}{h} S_j - \frac{i\pi}{2} l_j\right),$$

where the sum is taken over all rays arriving at this point, $S_j$ and $J_j$ are the action and the Jacobian for the $j$-th ray, and $l_j$ is the Morse index of the $j$-th ray. For the stationary Schrödinger equation in the semi-classical approximation, the scattering problem and the problem of the field of a point source have been studied, and the classical series (of Balmer type) of eigenvalues has been obtained.
Semi-classical approximations in the wide sense (synonyms: high-frequency asymptotics, short-wave approximations, approximations of geometrical optics, WKB-method, eikonal method, quasi-classical approximations in the wide sense) are the asymptotics of solutions of partial differential equations with real characteristics of the form
$$\tag{9} L(x, \lambda^{-1} D_x; (i\lambda)^{-1})\, u(x) = 0, \quad x \in \mathbf R^n,$$
as well as of differential and pseudo-differential equations. Here $\lambda \rightarrow \infty$ is a large parameter and the symbol $L(x, p; \epsilon)$ depends weakly on $\epsilon$. Corresponding to (9) are the equations of classical mechanics, namely the Hamilton–Jacobi equation
$$L^0(x, \nabla S(x)) = 0$$
and the Hamiltonian system
$$\tag{10} \frac{dx}{dt} = \frac{\partial L^0}{\partial p}, \quad \frac{dp}{dt} = - \frac{\partial L^0}{\partial x},$$
where $L^0 = L(x, p; 0)$. The semi-classical approximation is constructed by means of the canonical operator corresponding to Lagrangian manifolds that are invariant with respect to the dynamical system (10), and it has a form similar to (8).
The semi-classical approximation is widely applied in modern physics in problems of the propagation of sound, elastic and electromagnetic waves, in non-relativistic and relativistic quantum mechanics and other questions.
#### References
[1] L. Brillouin, "La théorie des quanta et l'atome de Bohr", Blanchard (1922)
[2] L.D. Landau, E.M. Lifshitz, "Quantum mechanics", Pergamon (1965) (Translated from Russian)
[3] V.P. Maslov, "Théorie des perturbations et méthodes asymptotiques", Dunod (1972) (Translated from Russian)
[4] V.P. Maslov, M.V. Fedoryuk, "Quasi-classical approximation for the equations of quantum mechanics", Reidel (1981) (Translated from Russian)
[5] V.A. Fok, "Problems of diffraction and propagation of electromagnetic waves", Moscow (1970) (In Russian)
[6] V.M. Babich, V.S. Buldyrev, "Asymptotic methods in the diffraction of short waves", Moscow (1972) (In Russian) (Translation forthcoming: Springer)
[7] V.P. Maslov, "Operational methods", MIR (1976) (Translated from Russian)
https://chat.stackexchange.com/transcript/13775/2021/11/26

5:07 AM
The average person consumes 2000 kcal a day, which is equal to ~100 W. Furthermore, if one uses the Stefan–Boltzmann law to calculate how much heat someone loses due to radiation, it can be seen that it equals $$Q=\sigma T^4 \varepsilon A \approx 1000\ \mathrm{W}.$$ Considering a surface area of ~2 m², an e...
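A quick numerical check of the radiated-power estimate above (a sketch; the temperature, emissivity and area are assumed illustrative values, not taken from the original message):

```python
# Rough check of Q = sigma * T^4 * eps * A from the message above.
sigma = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
T = 305.0         # assumed skin temperature in kelvin (about 32 degrees C)
eps = 0.98        # assumed emissivity of human skin
A = 2.0           # assumed surface area, m^2

Q = sigma * T**4 * eps * A
print(round(Q))   # about 960 W, the same order as the ~1000 W quoted
```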
6 hours later…
11:08 AM
My book has a question: A man is running along a circular path of radius $7$ m and comes to rest after travelling a distance of $45$ m. What is the work done by the man? The options are (a) $>0$ (b) $<0$ (c) $=0$ (d) None of the above. The answer given is: (c) $=0$ $\because \cos\theta=\cos 90=0$...
1 hour later…
12:25 PM
What would happen if you dumped negative mass into a near-extremal black hole? It appears to me that doing this would reduce the black hole's mass without reducing the angular momentum or charge. Could this create a naked singularity?
9 hours later…
9:02 PM
My question is more naive than "Is QFT mathematically self-consistent?" Just when people talk about the mathematical consistency of QFT, what does consistency mean? Do people want to fit QFT into ZFC? For example (could be in a more general context), if I refer to https://souravchatterjee.su.domai...
https://electronics.stackexchange.com/questions/484007/problems-reading-a-signal-with-a-mcu-tm4c123gxl-input-pin

# Problems reading a signal with an MCU (TM4C123GXL) input pin
I'm trying to read a signal generated by an op. amp. with a microcontroller through a digital input pin. All signals I'm working with are DC.
The op. amp. is connected to a light sensor so that it outputs 0V when the sensor is shaded and between 4.8 and 5.2 volts (fluctuates all the time) when the sensor is detecting light. I have measured this signal with a multimeter.
The microcontroller's pin is configured as a digital input with a pull-down resistor. The voltage levels for the pin are a logical 1 if the input voltage is 3.3V < voltage < 5V and a logical 0 if voltage < 3.3V.
The problem I have is that when I connect the op. amp.'s output to the input digital pin, and the sensor is enabled, the voltage drops to something between 0.8V and 1.5V, and it doesn't reach a steady level (always fluctuates, wider range than before).
The microcontroller is supposed to change the color of an LED depending on the digital input state. What happens is that the LED blinks all the time. However, I have connected a steady 3.3V signal coming out of another microcontroller's pin (output) and everything works as expected. In addition, I have measured the voltage at the output pin and it doesn't drop as it does with the op. amp.
I'm not sure I can connect the op. amp. output directly to a digital input pin. My questions are:
Is the op. amp. output signal considered analog?
If so, if an analog signal has two known 'steady' states, can it be considered a digital signal?
Can I connect that signal directly to the input pin, given that it is still DC? Do I need to connect it to an ADC?
Note: The microcontroller is a TM4C123GXL. I'm configuring the pin PB4 as a digital input and PB6 as a digital output (3.3V). When connecting PB6 to PB4, everything works as expected.
## EDIT
The schematic is here:
Notice that R3 is a photocell. The op. amp. is an OPA2344.
Currently my MCU is powered over USB, from my laptop. Notice that I'm connecting the op. amp. output directly to the digital input pin. Not sure if this can be done.
As you can see I'm using a voltage divider. This is because, if I power the op. amp. with the 3.3V coming out of the regulator, its output is always below 3.3V, so the MCU will never interpret it as a logic 1.
I would like to do this without the help of a transistor.
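As a side note on the divider approach, here is a minimal numerical sketch of why a resistive divider sags under load; all component values below are assumptions for illustration, not taken from the actual schematic:

```python
# Thevenin model of a resistive divider feeding a resistive load.
def divider_under_load(v_in, r_top, r_bottom, r_load):
    v_open = v_in * r_bottom / (r_top + r_bottom)  # unloaded divider output
    r_th = r_top * r_bottom / (r_top + r_bottom)   # Thevenin source resistance
    return v_open * r_load / (r_th + r_load)       # output once loaded

# Assumed example: 9 V battery, 10k/10k divider, ~13k effective load:
print(divider_under_load(9.0, 10e3, 10e3, 13e3))   # ~3.25 V instead of 4.5 V
```

The lower the load impedance relative to the divider's Thevenin resistance, the further the output drops, which is one common reason a divider-derived level collapses when connected to another circuit.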
## EDIT 2
I have tried it again powering the op. amp. with the steady 3.3V coming out of the regulator. The output of the op. amp. is steady at 3.3V when the photocell is not shaded; as soon as I connect it to the MCU digital input it drops to 3.29V, so the MCU doesn't detect it as a logic 1.
I have observed one mistake I was making: I was connecting the MCU GND pin to the common ground. That makes everything 'go mental'; I think that is what caused the voltage drop at the op. amp.'s output I saw before. I deduce that it's wrong to connect the MCU GND to the common ground. Why is that?
• What op amp are you using? Please include a schematic. – Ron Beyer Mar 1 '20 at 22:00
• And what's the value of the pulldown resistor on the input of the microcontroller. The fluctuations of 4.8V to 5.2V that you measured with a multimeter may be an indication that the op amp is oscillating. Like Ron Beyer said, need to a schematic to go further. – SteveSh Mar 1 '20 at 22:14
• Martel, why don't you just use a couple of BJTs and some passives to create a circuit with hysteresis and a clean digital signal? For example, not unlike this. – jonk Mar 1 '20 at 22:31
• 3.3V as digital output... what voltage is your uC (not the launchpad) working on? 3.3V? And can you apply 5V to the ADC? (It's on page 1350 or so of the datasheet, way tl;dr on this phone) – Huisman Mar 1 '20 at 23:02
• Look up the data sheet, which voltages are recognized as "high" and "low", resp. A voltage of 3.29V will still be read as "1" on a 3.3V microcontroller, I'm sure. – the busybee Mar 2 '20 at 7:45
From your explanation, it sounds like you have configured the op. amp. to output 0-5 V and the microcontroller's digital pin is trying to read a 0-5 V signal. Please note that the TM4C123GXL microcontroller you are using operates with I/O logic levels of 3.3 V, and you are driving its digital pin with 5 V logic. This appears to be the cause of what you are seeing at the output. Have a look at the TM4C123GXL's datasheet here: http://www.ti.com/lit/pdf/spmu296 As @Ron and @Steve suggested, please provide a schematic of your circuit.
First of all you need a common ground. If you only connect the output from the op-amp to PB4 of the MCU, you have established a common reference voltage. But no current will flow before you make a second connection between the two so that there is a loop between the two circuits.
Also you can use one of the ADC channels of the MCU and read the photocell directly from it without the op-amp. Use the photoresistor in a voltage divider.
I would also have used only one power source. So either power everything from the regulator or from USB.
The problem was that I was powering the whole circuit with a 9V battery (converted to 3.3V through the regulator) but the MCU through my laptop's USB port.
With the op. amp. output disconnected from the MCU input pin I could measure a steady 3.3V. The moment I connect the MCU GND (ground) pin to the breadboard ground (which is connected to the battery's negative terminal), the op. amp. output goes mental (it jumps all the time between approx. 1.5 and 2.3 V).
What I have done is disconnect the MCU from the laptop and connect its GND pin to the breadboard common ground, the VBUS pin to the regulator's output (3.3V) and the op. amps. output to PB4 (input pin). By doing so, everything is working as expected.
## EDIT
I have tried a different thing: connect the MCU to my laptop and power the board with its 3.3V pin. At the beginning it didn't work (the op. amp. output dropped to 0V), and I discovered that this was caused by the multimeter. I disconnected it and everything works.
https://4cnv.in/quantum-pixel-representations-and-compression-for-n-dimensional-images/

# Quantum pixel representations and compression for N-dimensional images
In this section, we extend our new circuit implementation of $U_\text{FRQI}$ for grayscale data to the different image representations that correspond to Definitions 1 and 2. The main difference between all the representations is the definition of the color encoding in the quantum state $|c_k\rangle$ of Definition 2. As long as we can express this color mapping as a combination of $R_y$ rotations, we can use our compressed implementation of uniformly controlled $R_y$ rotations.
### IFRQI
The improved FRQI method introduced by Khan [7] combines ideas from the FRQI and NEQR representations. It alleviates the measurement problem of FRQI by allowing only 4 discrete angles that are maximally distinguishable under projective measurement in the computational basis. The IFRQI color mapping for a grayscale image with a bit depth of $2p$ is defined as follows.
### Definition 5
(IFRQI mapping) For a grayscale image of $N$ pixels, where each pixel $p_k$ has a grayscale value $g_k \in [0, 2^{2p}-1]$ with binary representation $b_k^0 b_k^1 \cdots b_k^{2p-1}$, the IFRQI state $|I_\text{IFRQI}\rangle$ is defined by Definition 2 with the color mapping used in (2) given by
$$|c_k\rangle = |c_k^0 c_k^1 \cdots c_k^{p-1}\rangle, \tag{19}$$
where, for $i = 0, \ldots, p-1$,
$$|c_k^i\rangle = \cos(\theta_k^i)\,|0\rangle + \sin(\theta_k^i)\,|1\rangle, \qquad \theta_k^i = \begin{cases} 0 & \text{if } b_k^{2i} b_k^{2i+1} = 00, \\ \frac{\pi}{5} & \text{if } b_k^{2i} b_k^{2i+1} = 01, \\ \frac{\pi}{2} - \frac{\pi}{5} & \text{if } b_k^{2i} b_k^{2i+1} = 10, \\ \frac{\pi}{2} & \text{if } b_k^{2i} b_k^{2i+1} = 11. \end{cases}$$
We observe that the IFRQI mapping combines two bits of color information into one rotation. It follows that for an image of bit depth $2p$ we can prepare $|I_\text{IFRQI}\rangle$ using the circuit shown in Fig. 3a with $p$ uniformly controlled $R_y$ rotations. The rotation angles $\boldsymbol{\theta}^i$ correspond to bits $2i$ and $2i+1$ of all $N$ pixels according to the values given in Definition 5. These uniformly controlled rotations can be compressed independently with our compression algorithm. The gate and qubit complexities of our method for IFRQI, compared to Khan [7], are listed in Table 2.
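As an illustration of the classical preprocessing step only, the angle table of Definition 5 can be tabulated as follows (a sketch assuming plain integer pixel values; it does not construct the quantum circuit itself):

```python
import math

def ifrqi_angles(g, p):
    """Map a (2p)-bit grayscale value g to the p IFRQI rotation angles
    of Definition 5, pairing bits (b^{2i}, b^{2i+1}); b^0 is the MSB."""
    bits = [(g >> (2 * p - 1 - i)) & 1 for i in range(2 * p)]
    table = {(0, 0): 0.0,
             (0, 1): math.pi / 5,
             (1, 0): math.pi / 2 - math.pi / 5,
             (1, 1): math.pi / 2}
    return [table[bits[2 * i], bits[2 * i + 1]] for i in range(p)]

# 4-bit example (p = 2): g = 0b0110 has bit pairs (0,1) and (1,0).
print(ifrqi_angles(0b0110, p=2))  # [pi/5, pi/2 - pi/5]
```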
### NEQR
The idea of NEQR is to use a color mapping that directly encodes the length-$\ell$ bit string carrying the grayscale information into computational basis states on $\ell$ qubits. The NEQR states for different colors are therefore orthogonal and can be distinguished with a single projective measurement in the computational basis. In our QPIXL framework, the NEQR mapping can be defined as follows.
### Definition 6
(NEQR mapping) For a grayscale image of $N$ pixels, where each pixel $p_k$ has a value $g_k \in [0, 2^\ell - 1]$ with binary representation $b_k^0 b_k^1 \cdots b_k^{\ell-1}$, the NEQR state $|I_\text{NEQR}\rangle$ is defined by Definition 2 with the color mapping used in (2) given by
$$|c_k\rangle = |c_k^0 c_k^1 \cdots c_k^{\ell-1}\rangle, \tag{20}$$
where
$$|c_k^i\rangle = \cos(\theta_k^i)\,|0\rangle + \sin(\theta_k^i)\,|1\rangle, \qquad \theta_k^i = \begin{cases} 0 & \text{if } b_k^i = 0, \\ \frac{\pi}{2} & \text{if } b_k^i = 1. \end{cases}$$
By choosing the rotation angles $\theta_k^i$ such that the resulting states are orthogonal, we ensure that the color information in $|I_\text{NEQR}\rangle$ can be recovered by a single projective measurement. The NEQR state can be prepared by the circuit shown in Fig. 3b, where the uniformly controlled rotations can again be compressed with our method. The gate complexities for the uncompressed circuits are listed in Table 2.
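The corresponding classical preprocessing for NEQR is a one-liner; again, this is only a sketch of the angle computation in Definition 6, not of the circuit:

```python
import math

def neqr_angles(g, ell):
    """NEQR angles of Definition 6: bit 0 -> angle 0, bit 1 -> angle pi/2,
    so each color qubit ends up exactly in |0> or |1>."""
    return [(math.pi / 2) * ((g >> (ell - 1 - i)) & 1) for i in range(ell)]

print(neqr_angles(0b1011, ell=4))  # [pi/2, 0.0, pi/2, pi/2]
```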
### MCRQI
If we want to extend the applicability of FRQI from grayscale image data to color image data, we need to allow different color channels. This approach has been dubbed the multi-channel representation of quantum images (MCRQI) [11]. We adapt their definition for RGB image data to our formalism and make some minor changes.
### Definition 7
(MCRQI mapping) For a color image of $N$ RGB pixels, where the color of each pixel $p_k$ is given by an RGB triple $(r_k, g_k, b_k) \in [0, K]$, the MCRQI state $|I_\text{MCRQI}\rangle$ is defined by Definition 2 with the color mapping used in (2) given by
$$|c_k\rangle = |r_k g_k b_k\rangle, \tag{21}$$
where
$$\begin{aligned} |r_k\rangle &= \cos(\theta_k)\,|0\rangle + \sin(\theta_k)\,|1\rangle, & \theta_k &= \frac{\pi/2}{K}\, r_k, \\ |g_k\rangle &= \cos(\phi_k)\,|0\rangle + \sin(\phi_k)\,|1\rangle, & \phi_k &= \frac{\pi/2}{K}\, g_k, \\ |b_k\rangle &= \cos(\gamma_k)\,|0\rangle + \sin(\gamma_k)\,|1\rangle, & \gamma_k &= \frac{\pi/2}{K}\, b_k. \end{aligned}$$
We see that to encode the color information for an RGB image, we only need 2 more qubits than for grayscale data, which is a significant improvement over the classical case. Additionally, we encode the color mapping as a tensor product of three single-qubit states, while Sun et al. [11] encode the information in the coefficients of the color qubits, which entangles their state. Our implementation has the advantage that the different color channels are easily processed separately, while the color information can still be retrieved thanks to the normalization constraint.
The circuit implementation of $|I_\text{MCRQI}\rangle$ for the RGB mapping defined in Definition 7 then simply combines three uniformly controlled rotation circuits with different target qubits and with coefficient vectors determined by the respective color intensities, as shown in Figure 3c. As the RGB color channels are independent of each other and the uniformly controlled $R_y$ gates have different target qubits, each of them can be compressed separately. The asymptotic gate complexity of our method compared to the work of Sun et al. [11] is listed in Table 2. As that work essentially uses the construction of Le et al. [5], we obtain a quadratic improvement before compression.
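In contrast with the discrete lookup tables above, the MCRQI angles of Definition 7 scale continuously with the channel intensities; a minimal sketch, assuming integer channel values with maximum intensity $K$:

```python
import math

def mcrqi_angles(r, g, b, K):
    """MCRQI angles of Definition 7: each channel value in [0, K] is
    mapped linearly onto a rotation angle in [0, pi/2]."""
    scale = (math.pi / 2) / K
    return scale * r, scale * g, scale * b

print(mcrqi_angles(255, 128, 0, K=255))  # (pi/2, ~pi/4, 0.0)
```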
### INCQI
Like NEQR, (I)NCQI uses a color mapping that directly encodes the length-$\ell$ bit string for each color value of an RGB($\alpha$) image into computational basis states on $\ell$ qubits. Therefore, this QIR can also easily be represented in our QPIXL framework through the mapping defined as follows.
### Definition 8
(INCQI mapping) For a color image of $N$ RGB($\alpha$) pixels, where the color of each pixel $p_k$ is given by a tuple $(r_k, g_k, b_k, \alpha_k)$ and each channel value in the range $[0, 2^\ell - 1]$ has a binary representation, the INCQI state $|I_\text{INCQI}\rangle$ is defined by Definition 2 with the color mapping used in (2) given by
$$|c_k\rangle = |r_k g_k b_k \alpha_k\rangle = |r_k^0 r_k^1 \dots r_k^{\ell-1}\, g_k^0 g_k^1 \dots g_k^{\ell-1}\, b_k^0 b_k^1 \dots b_k^{\ell-1}\, \alpha_k^0 \alpha_k^1 \dots \alpha_k^{\ell-1}\rangle \tag{22}$$
where
$$\begin{aligned} |r_k^i\rangle &= \cos(\theta_k^i)\,|0\rangle + \sin(\theta_k^i)\,|1\rangle, & \theta_k^i &= \begin{cases} 0 & \text{if } r_k^i = 0, \\ \pi/2 & \text{if } r_k^i = 1, \end{cases} \\ |g_k^i\rangle &= \cos(\phi_k^i)\,|0\rangle + \sin(\phi_k^i)\,|1\rangle, & \phi_k^i &= \begin{cases} 0 & \text{if } g_k^i = 0, \\ \pi/2 & \text{if } g_k^i = 1, \end{cases} \\ |b_k^i\rangle &= \cos(\gamma_k^i)\,|0\rangle + \sin(\gamma_k^i)\,|1\rangle, & \gamma_k^i &= \begin{cases} 0 & \text{if } b_k^i = 0, \\ \pi/2 & \text{if } b_k^i = 1, \end{cases} \\ |\alpha_k^i\rangle &= \cos(\psi_k^i)\,|0\rangle + \sin(\psi_k^i)\,|1\rangle, & \psi_k^i &= \begin{cases} 0 & \text{if } \alpha_k^i = 0, \\ \pi/2 & \text{if } \alpha_k^i = 1. \end{cases} \end{aligned}$$
The above definition applies almost verbatim to NCQI [12]; one only removes the $\alpha$ channel from the equation. The INCQI state can be prepared via the circuit shown in Figure 3d. This circuit is constructed using a NEQR circuit for each channel of the INCQI. As with the previous QIRs, the uniformly controlled rotations used here can also be compressed with our method. The gate complexities for the uncompressed circuits are listed in Table 2.
### Other extensions
We note that multiple extensions and combinations of the ideas presented in this section are possible. For example, just as MCRQI is a color version of FRQI and (I)NCQI is a color version of NEQR, we can define a color version of IFRQI in the same way. We can also adapt IFRQI to group an arbitrary number of bits instead of the two-bit pairing of Definition 5. This reduces the required number of qubits and gates at the cost of quantum states that are less distinguishable and therefore require more measurements. It is even possible to use different QPIXL mappings for different RGB color channels. For example, we can use the FRQI mapping for the red channel, the IFRQI mapping for the green channel, and the NEQR mapping for the blue channel. Moreover, a generalized version of NEQR (GNEQR) has been proposed by Li et al. [46], which is based on NEQR, INEQR and NCQI. GNEQR uses $n + 4\ell + 2$ qubits to represent an image with $2^n$ pixels and bit depth $\ell$ for 4 color channels. Using similar ideas as described in this section, a QPIXL-based GNEQR would need $n + 4\ell$ qubits in total.
Finally, although we presented this discussion for image data in an RGB($\alpha$) space, as in the work of Sun et al. [11], our approach can easily be adapted to different color spaces and even to multi-spectral or hyper-spectral data. In fact, different scientific applications frequently use images in different color spaces depending on the type of analysis needed. For example, the Y'CbCr space is known for its applicability to image compression. The I1I2I3 space was created specifically targeting image segmentation. The HED space is advantageous in the medical field for the analysis of specific tissues. Similarly, multi-spectral and hyper-spectral data are used in fields such as geosciences and biology, where experts acquire satellite images and mass spectrometry images, respectively. In all these cases, our general definition of quantum pixel representations can be directly applied.
https://www.beatthegmat.com/graphic-interpretation-t288279.html
# Graphic interpretation
**ash4gmat** (Wed Dec 16, 2015 8:46 am):
Hi,
Can someone help me understand the last 2 questions in this link from Veritas?
http://www.veritasprep.com/sample-problems/?question=14
**Marty Murray**, GMAT score 800 (Mon Dec 21, 2015 9:13 pm):
As it stands, according to the graph, the relationship between S and L is the following. As S goes up, L, as indicated by the size of the circles, goes up as well. This is a positive correlation. As one goes up the other goes up.
So, in answer to the fourth question, if the relationship were inversed (Is that a word? Probably it should be reversed.), it would become the following. As S goes up, L goes down. So one would go up as the other goes down. That is how a negative correlation goes.
To answer the fifth question, you need to see that the relationship between F and S is not affected by changes in the relationship between L and S. The relationship between F and S is illustrated by the position of the circles, while L is illustrated by the size of the circles. In terms of the logic of what is going on, the relationship between F and S is the relationship between amount of fast-driving content and perception of sportiness. That relationship is unaffected by the answers to the separate question of how likely people are to buy the car.
From the graph we can see that as F increases S increases. So whether the relationship between L and S is positively or negatively correlated, the relationship between F and S remains positively correlated.
https://www.flashfxp.com/forum/7591/p41722-post11.html
**St0rm** (01-18-2004, 05:25 PM):
> Originally posted by Mr_X: You may think that's useless but I think it will be cool if it automatically deletes 0-byte files (script called with 'OnUploadError')
What do you mean exactly? It deletes the 0-byte '-missing' files when files are being uploaded.
http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=pdm&paperid=742&option_lang=eng
Prikl. Diskr. Mat., 2021, Number 52, Pages 114–125 (Mi pdm742)
Applied Graph Theory
# Discrete closed one-particle chain of contours
P. A. Myshkis, A. G. Tatashev, M. V. Yashina
Abstract: A discrete dynamical system called a closed chain of contours is considered. This system belongs to the class of contour networks introduced by A. P. Buslaev. The closed chain contains $N$ contours. On each contour there are $2m$ cells and one particle. On any contour there are two points, called nodes, each of which is common to this contour and to one of the two adjacent contours located on the left and right. The nodes divide each contour into equal parts. At every time $t = 0, 1, 2, \ldots$ each particle moves one cell forward in the prescribed direction. If two particles simultaneously try to cross the same node, then only the particle of the left contour moves. A time function taking the values $0$ or $1$, called the potential delay of the particle, is introduced. For $t \ge m$, the equality of this function to $1$ implies that the time before the delay of the particle is not greater than $m$. The sum of the potential delays of all particles is called the potential of delays. From a certain moment, the states of the system repeat periodically (limit cycles). Suppose the number of transitions of a particle on the limit cycle is equal to $S(T)$ and the period is equal to $T$. The ratio of $S(T)$ to $T$ is called the average velocity of the particle. The following theorem has been proved. 1) The delay potential is a non-increasing function of time; it does not change on any limit cycle, and its value is a non-negative integer not exceeding $2N/3$. 2) If the average velocity of the particles on a limit cycle is less than 1, then the period of the cycle (this period may not be minimal) is equal to $(m+1)N$. 3) The average velocity of the particles is equal to $v = 1 - H/((m+1)N)$, where $H$ is the potential of delays on the limit cycle. 4) For any $m$, there exists a value $N$ such that there exists a limit cycle with $H > 0$ and, therefore, $v < 1$.
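As a small illustration of item 3 of the theorem (a sketch only; the values of $N$, $m$ and $H$ below are arbitrary examples, not taken from the paper):

```python
def average_velocity(H, m, N):
    """Average particle velocity on a limit cycle, v = 1 - H / ((m + 1) N),
    where H is the potential of delays on the cycle (theorem, item 3)."""
    return 1 - H / ((m + 1) * N)

# Arbitrary example: N = 6 contours, m = 4 (2m = 8 cells per contour), H = 4.
print(average_velocity(H=4, m=4, N=6))  # 0.8666..., so v < 1 whenever H > 0
```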
Keywords: dynamical system, contour network, limit cycle, potential of delays.
DOI: https://doi.org/10.17223/20710410/52/8
UDC: 519.7
Citation: P. A. Myshkis, A. G. Tatashev, M. V. Yashina, “Discrete closed one-particle chain of contours”, Prikl. Diskr. Mat., 2021, no. 52, 114–125
Citation in format AMSBIB
\Bibitem{MysTatYas21} \by P.~A.~Myshkis, A.~G.~Tatashev, M.~V.~Yashina \paper Discrete closed one-particle chain of contours \jour Prikl. Diskr. Mat. \yr 2021 \issue 52 \pages 114--125 \mathnet{http://mi.mathnet.ru/pdm742} \crossref{https://doi.org/10.17223/20710410/52/8}
https://math.libretexts.org/Bookshelves/Precalculus/Precalculus_(OpenStax)/zz%3A_Back_Matter/20%3A_Glossary
# Glossary
**dependent variable**: an output variable
**domain**: the set of all possible input values for a relation
**function**: a relation in which each input value yields a unique output value
**horizontal line test**: a method of testing whether a function is one-to-one by determining whether any horizontal line intersects the graph more than once
**independent variable**: an input variable
**input**: each object or value in a domain that relates to another object or value by a relationship known as a function
**one-to-one function**: a function for which each value of the output is associated with a unique input value
**output**: each object or value in the range that is produced when an input value is entered into a function
**range**: the set of output values that result from the input values in a relation
**relation**: a set of ordered pairs
**vertical line test**: a method of testing whether a graph represents a function by determining whether a vertical line intersects the graph no more than once
**even function**: a function whose graph is unchanged by horizontal reflection, $$f(x)=f(−x)$$, and is symmetric about the y-axis
**horizontal compression**: a transformation that compresses a function's graph horizontally, by multiplying the input by a constant $$b>1$$
**horizontal reflection**: a transformation that reflects a function's graph across the y-axis by multiplying the input by −1
**horizontal shift**: a transformation that shifts a function's graph left or right by adding a positive or negative constant to the input
**horizontal stretch**: a transformation that stretches a function's graph horizontally by multiplying the input by a constant $$0<b<1$$
**odd function**: a function whose graph is unchanged by combined horizontal and vertical reflection, $$f(x)=−f(−x)$$, and is symmetric about the origin
**vertical compression**: a function transformation that compresses the function's graph vertically by multiplying the output by a constant $$0<a<1$$
**vertical reflection**: a transformation that reflects a function's graph across the x-axis by multiplying the output by −1
**vertical shift**: a transformation that shifts a function's graph up or down by adding a positive or negative constant to the output
**vertical stretch**: a transformation that stretches a function's graph vertically by multiplying the output by a constant $$a>1$$
**absolute value equation**: an equation of the form $$|A|=B$$, with $$B\geq0$$; it will have solutions when $$A=B$$ or $$A=−B$$
**absolute value inequality**: a relationship in the form $$|A|<B$$, $$|A|{\leq}B$$, $$|A|>B$$, or $$|A|{\geq}B$$
**decreasing linear function**: a function with a negative slope: if $$f(x)=mx+b$$, then $$m<0$$
**increasing linear function**: a function with a positive slope: if $$f(x)=mx+b$$, then $$m>0$$
**linear function**: a function with a constant rate of change that is a polynomial of degree 1, and whose graph is a straight line
**point-slope form**: the equation for a line that represents a linear function in the form $$y−y_1=m(x−x_1)$$
**slope**: the ratio of the change in output values to the change in input values; a measure of the steepness of a line
**slope-intercept form**: the equation for a line that represents a linear function in the form $$f(x)=mx+b$$
**y-intercept**: the value of a function when the input value is zero; also known as initial value
**horizontal line**: a line defined by $$f(x)=b$$, where $$b$$ is a real number; the slope of a horizontal line is 0
**parallel lines**: two or more lines with the same slope
**perpendicular lines**: two lines that intersect at right angles and have slopes that are negative reciprocals of each other
**vertical line**: a line defined by $$x=a$$, where $$a$$ is a real number; the slope of a vertical line is undefined
**x-intercept**: the point on the graph of a linear function when the output value is 0; the point at which the graph crosses the horizontal axis
**complex conjugate**: the complex number in which the sign of the imaginary part is changed and the real part of the number is left unchanged; when added to or multiplied by the original complex number, the result is a real number
**complex number**: the sum of a real number and an imaginary number, written in the standard form $$a+bi$$, where $$a$$ is the real part, and $$bi$$ is the imaginary part
**complex plane**: a coordinate system in which the horizontal axis is used to represent the real part of a complex number and the vertical axis is used to represent the imaginary part of a complex number
**imaginary number**: a number in the form $$bi$$ where $$i=\sqrt{−1}$$
**axis of symmetry**: a vertical line drawn through the vertex of a parabola around which the parabola is symmetric; it is defined by $$x=−\frac{b}{2a}$$
**general form of a quadratic function**: the function that describes a parabola, written in the form $$f(x)=ax^2+bx+c$$, where $$a,b,$$ and $$c$$ are real numbers and $$a\neq0$$
**standard form of a quadratic function**: the function that describes a parabola, written in the form $$f(x)=a(x−h)^2+k$$, where $$(h, k)$$ is the vertex
**vertex**: the point at which a parabola changes direction, corresponding to the minimum or maximum value of the quadratic function
**vertex form of a quadratic function**: another name for the standard form of a quadratic function
**zeros**: in a given function, the values of $$x$$ at which $$y=0$$, also called roots
**coefficient**: a nonzero real number that is multiplied by a variable raised to an exponent (only the number factor is the coefficient)
**continuous function**: a function whose graph can be drawn without lifting the pen from the paper because there are no breaks in the graph
**degree**: the highest power of the variable that occurs in a polynomial
**end behavior**: the behavior of the graph of a function as the input decreases without bound and increases without bound
**leading term**: the term containing the highest power of the variable
**polynomial function**: a function that consists of either zero or the sum of a finite number of non-zero terms, each of which is a product of a number, called the coefficient of the term, and a variable raised to a non-negative integer power
**power function**: a function that can be represented in the form $$f(x)=kx^p$$ where $$k$$ is a constant, the base is a variable, and the exponent, $$p$$, is a constant
**smooth curve**: a graph with no sharp corners
**term of a polynomial function**: any $$a_ix^i$$ of a polynomial function in the form $$f(x)=a_nx^n+a_{n-1}x^{n-1}+\dots+a_2x^2+a_1x+a_0$$
**turning point**: the location at which the graph of a function changes direction
**global maximum**: highest turning point on a graph; $$f(a)$$ where $$f(a){\geq}f(x)$$ for all $$x$$
**global minimum**: lowest turning point on a graph; $$f(a)$$ where $$f(a){\leq}f(x)$$ for all $$x$$
**Intermediate Value Theorem**: for two numbers $$a$$ and $$b$$ in the domain of $$f$$, if $$a<b$$ and $$f(a){\neq}f(b)$$, then the function $$f$$ takes on every value between $$f(a)$$ and $$f(b)$$; specifically, when a polynomial function changes from a negative value to a positive value, the function must cross the x-axis
**multiplicity**: the number of times a given factor appears in the factored form of the equation of a polynomial; if a polynomial contains a factor of the form $$(x−h)^p$$, $$x=h$$ is a zero of multiplicity $$p$$
**Division Algorithm**: given a polynomial dividend $$f(x)$$ and a non-zero polynomial divisor $$d(x)$$ where the degree of $$d(x)$$ is less than or equal to the degree of $$f(x)$$, there exist unique polynomials $$q(x)$$ and $$r(x)$$ such that $$f(x)=d(x)q(x)+r(x)$$, where $$q(x)$$ is the quotient and $$r(x)$$ is the remainder; the remainder is either equal to zero or has degree strictly less than $$d(x)$$
**synthetic division**: a shortcut method that can be used to divide a polynomial by a binomial of the form $$x−k$$ (see the sketch after this glossary)
**Descartes' Rule of Signs**: a rule that determines the maximum possible numbers of positive and negative real zeros based on the number of sign changes of $$f(x)$$ and $$f(−x)$$
**Factor Theorem**: $$k$$ is a zero of polynomial function $$f(x)$$ if and only if $$(x−k)$$ is a factor of $$f(x)$$
**Fundamental Theorem of Algebra**: a polynomial function with degree greater than 0 has at least one complex zero
**Linear Factorization Theorem**: allowing for multiplicities, a polynomial function will have the same number of factors as its degree, and each factor will be in the form $$(x−c)$$, where $$c$$ is a complex number
**Rational Zero Theorem**: the possible rational zeros of a polynomial function have the form $$\frac{p}{q}$$ where $$p$$ is a factor of the constant term and $$q$$ is a factor of the leading coefficient
**Remainder Theorem**: if a polynomial $$f(x)$$ is divided by $$x−k$$, then the remainder is equal to the value $$f(k)$$
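The synthetic division entry above lends itself to a short worked example; the sketch below (the helper name and the example polynomial are our own illustration) also exhibits the Remainder Theorem, since the remainder it returns equals $$f(k)$$:

```python
def synthetic_division(coeffs, k):
    """Divide a polynomial by (x - k) using synthetic division.
    coeffs lists the coefficients from the highest degree down to the
    constant term.  Returns (quotient coefficients, remainder); by the
    Remainder Theorem the remainder equals f(k)."""
    row = [coeffs[0]]
    for c in coeffs[1:]:
        row.append(c + k * row[-1])  # bring down, multiply by k, add
    return row[:-1], row[-1]

# f(x) = 2x^3 - 3x^2 + 4x - 5 divided by (x - 2):
q, r = synthetic_division([2, -3, 4, -5], 2)
print(q, r)  # [2, 1, 6] 7 -> quotient 2x^2 + x + 6, remainder 7 = f(2)
```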
http://jayvijay.co/ent-doctor-ddhc/b5f403-difference-between-canonical-and-non-canonical-literature

# Difference between canonical and non-canonical literature
Table 1 shows that the seven agrammatic subjects as a group performed very low on all of the non-canonical sentence types, as reflected by the mean values of correct responses (obj-questions: 2.7/40; obj-relatives: 6.6/40; passives: 9.6/40). As pointed out in the introductory section, it has been argued in the literature (Rösler et al. 1998, etc.) that processing non-canonical word order requires more memory resources (manifested as sustained anterior negativity, usually left lateralized) than processing canonical sequences. Chinese is canonically an SVO language: Sun and Givón's survey of contemporary written and spoken Mandarin Chinese reports that over 90% of direct objects occurred in the canonical position after the verb, while the non-canonical SOV and OSV word orders are also possible. More generally, various non-canonical clause constructions have been targets for prescriptivist prejudice; Strunk & White condemn existential clauses ("There's a man outside"), negative clauses (p. 19: "Put statements in positive form") and other non-canonical clause types barred by Strunk's dicta. In an object-recognition experiment, the false alarms in the basic-level block for the non-canonical test views were 35.46% ("old" objects) vs. 4.23% ("new" objects), t(56) = 11.86, p < 0.001, a difference that could also be observed on an individual basis.

In cell biology, the canonical Wnt pathway involves the multifunctional protein β-catenin, while the non-canonical pathway operates independently of it; the difference between the two categories is the presence or absence of β-catenin, and a hallmark of canonical Wnt pathway activation is an enhanced level of cytoplasmic β-catenin protein. Canonical and non-canonical Wnt pathways are involved in the genesis of multiple tumors; however, their role in pituitary tumorigenesis is mostly unknown, and one study evaluated gene and protein expression of Wnt pathways in pituitary tumors and whether these expressions correlate with clinical outcome. It has also been demonstrated that TAMs mediate a "switch" between canonical and non-canonical Wnt signaling pathways in canine mammary tumors, leading to increased tumor invasion and metastasis; interestingly, similar changes in neoplastic cells were observed in the presence of macrophage-conditioned medium or live macrophages. Thus, there is an established link between non-canonical Wnt signaling, RhoA regulation, cytoskeletal organization and NTDs. Further variety is introduced into the IFNγ pathway by association between STAT1 and other proteins, i.e., non-canonical complexes. For microRNAs, non-canonical pathways are those that deviate from the canonical paradigm, or that derive from alternative biogenesis pathways and only partially meet the classical definition.

In the study of scripture, the canonical gospels were received by the churches of the East and the West as the genuine apostolic tradition in the generation immediately after the apostles; the canonical gospels are part of the biblical canon and the apocryphal gospels are not ("apocryphal" is an antonym of "canonical"). The development of the doctrine of angels in the apocalyptic literature of Judaism occurs chiefly in the non-canonical writings produced in the period c. 165 B.C. to A.D. 100 (see also Michael S. Heiser, "The divine council in late canonical and non-canonical second temple Jewish literature"). In fiction and literature, the canon is the collection of works considered representative of a period or genre, and understanding the canon can help readers recognize many cultural touchpoints used in everyday life.

In statistics, canonical analysis is a multivariate technique concerned with determining the relationships between groups of variables in a data set: the data set is split into two groups X and Y based on some common characteristics, and the purpose of canonical analysis is then to find the relationship between X and Y. Relatedly, a binary response variable can be modeled using many link functions, such as logit or probit, which raises the distinction between the terms 'link function' and 'canonical link function'. In physics, the kinetic momentum $\hat P = -i\hbar\nabla - q\vec A$ is a gauge-invariant quantity, while the canonical momentum $\hat p = -i\hbar\nabla$ depends explicitly on the gauge choice.

In computing, canonical means "according to the rules": the term is the adjective for canon, literally a "rule", and has come to mean standard, authorized, recognized, or accepted. In Boolean algebra, a Boolean function can be expressed in Canonical Disjunctive Normal Form (minterms) or in Canonical Conjunctive Normal Form (maxterms). A canonical data model refers to a logical data model which is the accepted standard within a business or industry for a process or system. The chief difference between a CNAME record and an ALIAS record is not in the result, since both point to another DNS record, but in how they resolve the target DNS record when queried; as a consequence, one is safe to use at the zone apex (e.g., a naked domain such as example.com) and the other is not. Finally, Canonical (the company) is responsible for delivering six-monthly milestone releases and regular LTS releases of Ubuntu for enterprise production use, as well as security updates, support and the entire online infrastructure for community interaction; enterprises count on Canonical to support, secure and manage Ubuntu infrastructure and devices.
Canon is the collection of works considered representative of a period or genre gene! The enhanced level of cytoplasmic β-catenin protein, etc canon and the Apocryphal gospels are not Marques Junior 13. Could also be observed on an individual basis level of cytoplasmic β-catenin protein as they are in New..., i create two functions to configure the parameters of the biblical canon and the Apocryphal gospels not... In non-canonical Negation 369 1.1 council in late canonical and non-canonical form protocanonical books, just as are... Non-Canonical pathways are those which are alternative less known pathways there was significant! At 14:52 Apocryphal is an antonym of canonical analysis is a multivariate technique which is concerned determining... Between groups of variables in a data set is split into two groups X and,... Here is considered the canonical '' link function ' section, it has been argued in introductory. Sets of objects for both non-canonical and canonical test views in each block catholicism the... Manifested as sustained anterior negativity, usually left lateralized ) than processing sequences. | 2021-04-11 01:27:03 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3388369679450989, "perplexity": 4365.7462504076475}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038060603.10/warc/CC-MAIN-20210411000036-20210411030036-00064.warc.gz"} |
http://math.stackexchange.com/questions/24209/presentation-about-orientable-surfaces | Presentation about orientable surfaces
I'm giving a presentation about orientable surfaces (as a student project) and I was wondering what I should talk about. The presentation should be 20-30 min long.
I've been thinking maybe something like this:
1) definition of a surface
2) what does orientable mean (orientable vs. oriented)
3)
And then I'm a bit lost. In the general topology course last year, classification of compact surfaces and covering maps were mentioned, but I don't know much about either, and I don't know whether they have anything to do with orientability of surfaces.
In this class, algebraic topology, we're going to learn about cell complexes and homology, but I don't think that has anything to do with orientability either.
Question: what is an interesting fact or theorem that I should definitely mention in this presentation?
Many thanks for your help!
Edit:
I did
1) definition of surface and manifold
2) definition of orientable (with paper moebius strip)
3) classification of compact orientable surfaces
Just in case anyone reads this post later, they might use this to get inspiration. I had also prepared material about homology but the professor said homology was going to be discussed towards the very end of the lecture so I did not present what I had prepared.
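For later readers: the standard statement of the classification theorem used in step 3 (my summary, not from the course) is the following.
Theorem (classification of compact orientable surfaces). Every compact, connected, orientable surface without boundary is homeomorphic to exactly one of the surfaces
$$\Sigma_g \;=\; \underbrace{T^2 \,\#\, \cdots \,\#\, T^2}_{g\ \text{tori}}, \qquad g \ge 0,$$
where $\Sigma_0 = S^2$ is the sphere, and the genus is determined by the Euler characteristic via $\chi(\Sigma_g) = 2 - 2g$.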
-
I think the classification of compact connected orientable surfaces is an appropriate topic. – Grumpy Parsnip Feb 28 '11 at 11:37
It also wouldn't hurt to ask your professor directly what he or she is expecting! – Grumpy Parsnip Feb 28 '11 at 15:38
@Jim: Yes, I was going to do that but I don't like people who ask questions without first thinking so I wanted to first think about what he could be expecting and then ask him if what I'm planning to do is meeting his expectations. Thanks for the hint about the classification of compact orientable surfaces! I think I'll post what I'll be doing when I finish the preparation. – Rudy the Reindeer Feb 28 '11 at 16:44
It seems silly to talk about the notion of orientability without mentioning that non-orientable surfaces exist. The Moebius band would be an efficient example. Another topic you could mention is the Alexander-Schoenflies theorem, that there are no "exotic" ways of putting a sphere in 3-dimensional space. In contrast, there are infinitely-many "knotted" ways of putting a genus $n$ orientable surface in space (for each $n$) provided $n \geq 1$. – Ryan Budney Mar 4 '11 at 15:58
A nice definition of orientable surface would be that it does not contain a Moebius band. Similarly, for an orientable surface you could say a neighbourhood of a simple closed path has to "look like" an annulus. – Ryan Budney Mar 4 '11 at 15:59
1 Answer
Homology certainly has something to do with orientability, but if you are not yet familiar, I wouldn't recommend diving into the subject for this. You should definitely speak about non-orientable surfaces (e.g. the Klein bottle); this will really help understanding the idea of orientability. Cutting up the Möbius band in front of class is also very insightful.
-
thanks for the hint about cutting up the Moebius band in front of the class. I'm not sure though I should mention non-orientable surfaces because the professor explicitly said, "not non-orientable surfaces, orientable surfaces". I wonder what to do : ( – Rudy the Reindeer Feb 28 '11 at 9:37
You could do it in the style of: this is an orientable surface (take a sphere or something), and this is non-orientable (the mobius band). Then explicitly define orientability, and why the sphere satisfies this property, and the mobius band doesn't. – Thomas Rot Feb 28 '11 at 9:53
actually, your first answer pointed me into the right direction. I think the professor expects me to talk about the connection between homology and orientability! I will probably accept your answer later, I just thought I could leave this question open for some more time, to see what other people will suggest. – Rudy the Reindeer Feb 28 '11 at 9:58
@Matt: Great! Do make sure that you pick an easy version for homology (there are many things called homology...). Simplicial homology is the thing you want. A nice reference is the free book "Algebraic Topology" by Allen Hatcher – Thomas Rot Feb 28 '11 at 13:31
Again, many thanks! Hatcher is the book he recommended for the lecture. – Rudy the Reindeer Feb 28 '11 at 16:41 | 2015-08-27 21:38:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7915285229682922, "perplexity": 460.9587844456941}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644059993.5/warc/CC-MAIN-20150827025419-00029-ip-10-171-96-226.ec2.internal.warc.gz"} |
https://zbmath.org/?q=an:0773.34036 | zbMATH — the first resource for mathematics
A characterization of the uniquely ergodic endomorphisms of the circle. (English) Zbl 0773.34036
A continuous endomorphism of the circle $$f$$ is said to be uniquely ergodic if there exists a unique $$f$$-invariant probability measure on the circle. It is known that every homeomorphism of the circle with irrational rotation number is uniquely ergodic. This paper extends this result to endomorphisms of the circle. The main result of the paper is the following theorem: A circle endomorphism is uniquely ergodic if and only if it has at most one periodic orbit.
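As background, two standard facts behind this statement (textbook definitions, not part of the review): a Borel probability measure $$\mu$$ is $$f$$-invariant when $$\mu(f^{-1}(B)) = \mu(B)$$ for every Borel set $$B$$, and unique ergodicity of a continuous map on a compact metric space is equivalent to uniform convergence of the Birkhoff averages $$\frac{1}{n}\sum_{k=0}^{n-1} \varphi(f^k(x)) \rightarrow \int \varphi \, d\mu \quad \text{uniformly in } x,$$ for every continuous function $$\varphi$$ on the circle.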
MSC:
37-XX Dynamical systems and ergodic theory 54H20 Topological dynamics (MSC2010)
Full Text: | 2021-06-20 12:49:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.886917233467102, "perplexity": 161.2718055337821}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487662882.61/warc/CC-MAIN-20210620114611-20210620144611-00226.warc.gz"} |
https://gamedev.stackexchange.com/questions/116958/player-keeps-bumping-when-walking-over-to-another-sprite | # Player keeps bumping when walking over to another sprite
So, my problem is that my player stops when he walks over to another sprite.
I fixed that by adding a physics material with 0 density and 0 friction to the player
and adding a circle collider instead of a box collider to the player.
However, now I have another problem where my character keeps bumping when it moves onto another sprite.
The sprites are at the exact same height, the objects are just as big and the colliders have the exact size.
I would like to get a fix for this problem, which still allows me to use Rigidbody2D.AddForce for movement.
Video of my problem :
How to recreate the problem :
Make a sprite, add a Circle Collider 2D to it and a Rigidbody2D.
That will be the player.
Next, make some platforms and add a Box Collider 2D
Now, if you let your player walk over the platforms it will (or should..) start bumping.
• I saw your video, but can't replicate the problem. If you have an online repo of your project, I could download it and start it up in Unity. – user79422 Feb 20 '16 at 20:41
• @codeepic This happens all the time to me, here's what I do to create my problem : Create a 2D circle collider and attach it to a sprite GameObject with a rigidbody2D, add 3 sprites to your scene, all 3 should have 2D box colliders, make the player walk over them using AddForce, and it should start bumping when the player "crosses" a sprite. Hopefully this made sense. – BiiX Feb 21 '16 at 1:17
• I'm having the same issue. Plus my design requires some of the object to have box colliders. I am very interested if someone found a real solution that still uses Unity's rigidbodies – tyjkenn Feb 21 '16 at 23:43
• I can see a box collider too on player, larger in height, is it so or I am just confused? – Hamza Hasan Feb 23 '16 at 5:40
• If it is so, then that is the only problem – Hamza Hasan Feb 23 '16 at 5:50
Short answer: Don't build a 2D jump and run using a physics engine. I worked on such a project and we were mostly fighting the physics engine.
Long answer: Have fewer, bigger colliders. We built our levels using Tiled. We placed sprites and colliders separately, so one block of floor got only one, big collider. It still has issues at ramps, when falling on the edge of a platform, and so on. There were lots of raycasts and custom forces involved and it was not pretty.
You could also try and merge the colliders / create them on the fly from the sprites, though this is probably more complex.
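A note for later readers: Unity versions released after this question (2017.1 and up) ship a built-in way to merge adjacent colliders, CompositeCollider2D. The following is a minimal sketch of that setup, not the answerer's original approach; the script name and the Awake-time wiring are my own illustration:

using UnityEngine;

// Attach to the parent object that holds the platform tiles as children.
public class MergeTileColliders : MonoBehaviour
{
    void Awake()
    {
        // CompositeCollider2D requires a Rigidbody2D; platforms are static.
        var body = gameObject.AddComponent<Rigidbody2D>();
        body.bodyType = RigidbodyType2D.Static;
        gameObject.AddComponent<CompositeCollider2D>();

        // Fold every child BoxCollider2D into the composite so the floor
        // becomes one outline with no interior seams to catch on.
        foreach (var box in GetComponentsInChildren<BoxCollider2D>())
            box.usedByComposite = true;
    }
}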
• Tiled fixed all of the problems! :) – BiiX Feb 23 '16 at 11:32
• It doesn't seem like Tiled would work for my purposes considering I am using an in-game level editor. I already tried making my own physics system, but it proved to have more glitches than Unity's, especially with all the force interactions my game requires: push blocks, high-power fans, trampolines, etc. I got it working with Unity's rigidbody physics perfectly except for the occasional catching on corners between blocks. Unless there is a fix, merging colliders may be the only option. – tyjkenn Feb 23 '16 at 20:00 | 2021-02-27 12:20:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41187116503715515, "perplexity": 1903.0154367709986}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178358956.39/warc/CC-MAIN-20210227114444-20210227144444-00478.warc.gz"} |
https://economics.stackexchange.com/tags/government/hot |
# Tag Info
## Hot answers tagged government
16
Most of the same considerations apply to countries as apply to businesses and people, plus a couple of extra cons.
Pros of Being Debt Free
- No interest payments
- Not beholden to someone else (financial freedom)
Cons of Being Debt Free
- Buying things on (interest free) credit can save a little money
- Paying for things in installments can match costs to income ...
10
There is an interesting report that circulated during the Clinton administration, when we predicted we'd pay off all the debt, that I think answers your question. (here's a public radio article about it) The main takeaway is that government bonds are the safest and most liquid asset. Its existence is necessary for a large number of financial institutions (...
9
You write "qualitative easing", but I think you refer to quantitative easing. I'll just do both.
Quantitative Easing
Quantitative easing corresponds to the central bank (CB) expanding its balance sheet by "buying" assets. This is typically done in secondary markets. It mainly injects liquidity into the system. To the extent that there is an additional ...
7
"Why I don't hear nobody speaking about such idea?" Because historical experience says it won't work. By printing money instead of collecting taxes, what increases is the nominal disposable income. The "value of work" is certainly not increased. And the important question is, does this increased nominal income lead to higher consumption? Consider a ...
6
SOEs don't have to perform worse than private enterprises, but they often do. The reasons are manifold: Some SOEs are not set up for profit motive, but rather to seed strategic industries for a nation (e.g. Port of Singapore, or Huawei Telecom). Frequently, state influence at market-oriented firms is counterproductive because politicians placed in leadership ...
5
The current answers correctly point out that financing the government via the printing press would generate inflation. Since inflation is bad, this would be a bad policy. However, these answers miss out on several advantages of an inflation tax. Firstly, there would be substantial productivity gains since the entire government revenue system, tax advisors, ...
5
Suppose your country holds debt equal to thirty percent of GDP and that the government is obliged to pay interest of five percent per year on that debt. This implies that each year the cost of servicing the debt is 1.5% of GDP. Thus, if the country's GDP grows at a rate of 1.5% then it can afford to service the debt indefinitely without the debt/GDP ratio ...
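To make the arithmetic concrete, a small illustrative calculation (the numbers are the ones quoted in this answer; the script itself is my addition):

# Debt at 30% of GDP, interest at 5%: the annual interest bill is 1.5% of GDP.
debt_ratio = 0.30
interest_rate = 0.05
growth_rate = 0.015

interest_bill = interest_rate * debt_ratio   # 0.015, i.e. 1.5% of GDP
growth_dividend = growth_rate                # extra output after one year, as a share of GDP

print(f"interest bill per year : {interest_bill:.1%} of GDP")
print(f"one year's GDP growth  : {growth_dividend:.1%} of GDP")
# The two match, which is why 1.5% growth lets the country service this debt
# indefinitely without the ratio deteriorating.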
5
An interest rate is the return that a lender earns on money lent to someone else. If the interest rate is higher then it makes lending money more attractive because the return is higher. In particular, if the interest rate in, say, Russia increases relative to that in the USA then American lenders will switch from lending their money in America to lending ...
5
Because of the large number of roughly comparably sized private and public firms, the petroleum industry provides a laboratory for exploring differences between private and state owned enterprises in related businesses. Without passing judgement on if it has to be that way, it appears as though the private firms are vastly more efficient: Efficiency ...
5
An obligatory draft is a tax collected in kind - productive time. Instead of producing in the private sector, citizens of the economy offer their services to the army. Now, we should acknowledge that the existence of an army offers some desired public good (or at least that the society implementing the draft thinks so). Is this a utility-enhancing, or a ...
4
The answer to your first question is no. For example in Saudi Arabia only foreign residents are taxed. The state can afford to do this because it owns the oil fields and receives a lot of revenue from them. In Russia the oil and gas companies provided 52% of the federal budget. Part of this is in the form of taxes, but it is mostly the profit of Gazprom, the ...
4
When recession strikes, it's prone to the currency equivalent of "bank runs" where everyone attempts to trade in their paper money for gold at once, causing a drastic reduction in the money supply, rising of interest rates, and the dreaded deflationary spiral. In the face of a recession, you want the opposite to happen. During the Great Depression, every ...
3
It depends. If customers are currently making informed decisions when they book a surge-priced car, then banning surge pricing punishes customers, drivers, intermediaries, and the wider economy. It deliberately introduces an economic inefficiency. If, however, a lot of customers are making uninformed decisions and are effectively being conned by surge ...
3
No, it cannot cause inflation. Inflation is a general rise in the price level, a decrease in the purchasing power of money. While military spending, for example, could cause inflation if paid for through seigniorage (essentially, devaluing a currency by printing more of it), there's neither any reason in theory nor any empirical evidence to support the idea ...
3
Note: This answer was posted 4 months before the OP clarified what it really wanted to ask (see comments below the answer). I will accept @Ubiquitous view of the question, which in summary is: Why not having a publicly owned monopoly in the banking sector? Instead of a, however regulated, private banking sector? It would be naive to counter "then why ...
3
First of all, please check the properties of money and keep them in mind. Indeed money is a convention, but nobody forces you to use it. For example, prisoners use cigarettes as a medium of exchange. But let's assume you are the absolute ruler of a country and you want your people to use your currency. You can either do it by force or you have to give ...
3
Your claim that most go into academia is wrong. From the top universities, about half to two-thirds go into academia, but from most universities, most go to non-academic careers. It's simple accounting: a top-30 university graduates about 20 PhDs in economics per year, hires 1 or 2. Google "[university name] economics job market outcomes", or "[university ...
3
Actually, you could use taxes, because the income-approach GDP formula can also be expressed as value added at basic prices + taxes less subsidies. However, you can't do both at the same time with the approach in your question, as you would be double counting. Using: $$GDP = w + i + r + \pi + o$$ where $w$ is wages, $i$ interest income, $r$ return on ...
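A toy illustration of the double-counting point (all figures invented):

# Income-approach GDP with made-up numbers.
w, i, r, profit, o = 5000, 400, 700, 900, 300  # wages, interest, rents, profits, other
taxes_less_subsidies = 600                     # indirect taxes net of subsidies

gdp_at_factor_cost = w + i + r + profit + o                       # 7300
gdp_at_market_prices = gdp_at_factor_cost + taxes_less_subsidies  # 7900
# Adding taxes_less_subsidies again on top of the market-price figure
# would count the same taxes twice.
print(gdp_at_factor_cost, gdp_at_market_prices)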
2
First off, you are looking at the wrong column to compare like with like. Total debt went from $16.7tr to $17.8tr but that includes intragovernmental debt. Changes in this number will net out in the deficit calculation (this is one part of the government lending money to another part). Secondly, you are starting on the wrong date. You need to go full ...
2
There are many ways of calculating public sector finances. One approach is similar to corporate accounting and the UK Treasury publishes Whole of Government Accounts. Those are different in construction to the Public Sector Finances statistics published by the Office for National Statistics, who follow international National Accounts methodology In ...
2
QE is designed to increase the money supply, usually in economic environment in which the money supply would otherwise be falling. The process by which this happens is not easy to understand unless you already have a very good grasp of fractional reserve banking, and it certainly isn't as simple as "printing money". In the first instance QE increases base ...
2
Possible revenue sources of government:
- Taxes
- Sovereign fund investments (Norway, UAE)
- Natural resources (Saudi Arabia, Russia)
- State-owned enterprises
2
You can't create something from nothing. When the government prints money, that's really just colored paper. Printed money, in case production has not increased, will make money lose value. When the government raises taxes, it is taxing goods and services, i.e. getting real resources from people (in the form of money, yes, but real resources nonetheless). ...
2
The answer to this question is maybe (but most likely not). The most common method central banks use to increase money supply (assuming they control their own monetary policy) is through open market operations. Open market operations is when the central bank purchases or sells treasuries in the secondary market (typically). The answer to your question ...
2
This is a difficult but important question: a) At the heart of it is whether the markets are competitive or not. In competitive markets, prices reflect demand and supply, so changing prices is the natural way that demand and supply are balanced. You want many more cab drivers to come out in peak time, which only happens if you can reward them with a little ...
2
Central banks can create money 'out of nothing'. So for starters there isn't 'less amount' of money left with the central bank. The amount of money at the central bank is 'infinite'. So it's not because the CB lends X amount of money to the government, that the CB has X amount less money to lend to the banks, the supply of money is not constrained... Off ...
2
The government is a producer of goods/services that are then usually not subject to market transactions. Some goods/services that the government produce can be said to increase directly the utility of the households/individuals in the economy. Others can be said to function as intermediate goods, entering, usually as a positive externality, the production ...
2
The question "How much would GDP change if during a recession the government raises unemployment benefits by $100 million?" can be understood in more than one way. There is the pure accounting question which could be formulated more precisely (albeit in terms of a rather unrealistic scenario) as follows: Suppose, in a certain period, the economic ...
2
Transfer payments aren't included in GDP to prevent double-counting. The reason the author's question is troubling you is because the answer externalizes everything that happens after the payment has been made (which is when the money from that payment DOES get factored into GDP, see below). I assume it does this to reinforce that transfer payments aren't ...
2
The wages of government workers are counted as wages of individuals in the income measure of GDP. If the Government trades, then surpluses or profits of these activities are counted as income in the income measure of GDP. If GDP is being measured at market prices then some indirect taxes (on production or on imports, less any subsidies) are also taken ...
Only top voted, non community-wiki answers of a minimum length are eligible | 2020-01-23 16:34:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33682164549827576, "perplexity": 2376.952602148079}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250611127.53/warc/CC-MAIN-20200123160903-20200123185903-00532.warc.gz"} |
https://brilliant.org/problems/tricky-minimum/ | # Answer is in the question
Algebra Level 2
If the value of $$a^2 + 6a -6$$ is $$a$$, then find the minimum value of $$a$$.
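For later readers, a short solution (mine, not from the original page): setting the expression equal to $$a$$ gives $$a^2 + 6a - 6 = a \Longrightarrow a^2 + 5a - 6 = 0 \Longrightarrow (a+6)(a-1) = 0,$$ so $$a \in \{-6, 1\}$$ and the minimum value is $$a = -6$$, which indeed already appears in the question.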
| 2018-06-21 06:21:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7689284086227417, "perplexity": 1815.1388876963822}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864039.24/warc/CC-MAIN-20180621055646-20180621075646-00607.warc.gz"}
https://www.mjcrowther.co.uk/2017/08/21/how-i-get-better-at-programming/ | How I get better at programming
This post starts with a disclaimer. In no way do I profess to be an expert programmer. I am 99% self-taught, with the rest coming from a month long internship at StataCorp in Texas, and the many pestering emails and questions I’ve asked Stata developers over the past few years. And of course advice from the far more experienced pot of academic programmers who I have met and worked with since I started my career.
Anyway…
My favourite Stata command
Without doubt, my favourite command in Stata is viewsource. I'm yet to meet anyone else with the same favourite command. It lets you view the source code of a command, simple as that. Even though Stata is essentially proprietary software, a substantial amount of Stata's source code is there to be read. You don't even need to use the viewsource command; you can delve down into the installed files and open them up for yourself, viewsource is merely a convenience command. To me, it is the single easiest way to learn…look at others' code! Especially the code of professional developers.
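For example, a quick session might look like this (a hypothetical snippet: which commands ship as readable ado-files varies across Stata versions, so treat the file name as illustrative):

. viewsource tabstat.ado    // open the ado-file that implements -tabstat-
. findfile tabstat.ado      // or first locate the file on disk yourself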
Dreamcoder
You can’t be a good programmer without being curious, and quite frankly, nosy. You must always want to improve, and even when a task is finished, you are quite happy to completely start again if you can shorten or quicken your code. If I think of a way to improve a program, generally I’ll scrap/adapt the old version and start again until it’s better, otherwise I won’t sleep (ok perhaps a slight exaggeration…but I have had Stata code dreams…more than once). I’m hoping it’s not just me.
Regardless, programming is the main reason I love my job. It’s the part of my job I most enjoy. I genuinely get a huge rush when I get something difficult working, be it a complex likelihood correctly maximising, some new prediction code doing something 10 times quicker than it used to, or two lines of code doing what 20 used to do. Messy, inefficient and downright ugly code really pisses me off. I’d never call myself a perfectionist, but the closest I ever get to being one is in the realm of programming. But something I love about programming is how personal and subjective it is. Yes of course an implementation is generally right or wrong, but how we get that (hopefully) right answer can always be achieved in an infinite number of ways. And that comes down to the programmer. I really enjoy it when other people show me their code. However, they tend to get nervous, mainly because they think I’m about to call them an idiot and tell them it’s all wrong… I only do that to close personal friends. I enjoy looking at others code because I find it fascinating how every single person will have approached the same task in a completely different way! You’ll always learn something, be it something to do or not to do, but it’s still learning and improving. Though to be honest, if your code isn’t indented how I like to have my code indented, then we have to fix that first…but that’s my issue.
Programming doesn’t always go well
My main collaborator Paul Lambert once started work on a command to implement the delta method, to get standard errors for complex predictions. Now Stata comes with the very powerful predictnl command, but it can be slow sometimes. So off he went to write his own. Of course, when you start a new command you must name it…he went for quickpredictnl…you can see where we’re going with this. A few days later he had his finished version. He did a speed test. And what happened? Of course, his was slower. I laughed…a lot. Note, I have asked his permission to tell this story, and he very happily said yes (in truth he said ‘yes you b*****d’…you can tell we have an effective working relationship based on mutual respect), but even if he didn’t I’d still tell it, it’s just too funny.
Getting stuck
We all get stuck when programming. You know exactly what you want to do, but you can’t remotely figure out how to implement it. Or there’s a bug, a massive hornet sized bug, and you have no idea why. What do I do when I get stuck? I swear at the computer (a lot - and I grew up in Northern Ireland, so I know some good ones). So much so that I usually have to shut my door when programming something particularly challenging. It’s very easy to get disheartened by getting stuck for extended periods of time, it’s taken me days to figure out something before, but I’m so stubborn I’d have carried on for weeks. Not everyone is as stubborn as me (thank goodness), but one thing to learn as a coder is knowing when to stop coding, to step away and have a break. If you ever want to cheer yourself up, then look back at something you programmed a few years ago - and be prepared to go on a journey starting with embarrassment at how awful at coding you used to be, to pride at what you know you can do now. | 2018-03-25 03:00:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38317570090293884, "perplexity": 1064.550567824654}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257651780.99/warc/CC-MAIN-20180325025050-20180325045050-00566.warc.gz"} |
https://math.stackexchange.com/questions/368767/does-sequential-limits-coincide-with-topology-limits | Does sequential limits coincide with topology limits?
For an example, by Alaoglu's theorem, the unit ball of the dual space is weak* compact in weak* topology. Generally speaking, it is not weak* sequential compact, but if we assume it is, my question is, does the limits of the two sense coincide?
• Can you explain more precisely what you mean by "limits of the two sense"? Which senses? How are you defining them? – Nate Eldredge Apr 22 '13 at 1:18
For example: $\{x^{*}_{n}\}$ is a bounded sequence in the dual space of a Banach space $X$; by Alaoglu's theorem, it will have a limit point in the weak* topology. If I assume that it is also sequentially convergent, in the sense that for every $x\in X$ we have $x^{*}_{n}(x)\rightarrow x^{*}_{0}(x)$, I want to know whether $x^{*}_{0}$ is the same as the limit in the weak* topology. – vivian Apr 22 '13 at 1:36
If I understand your question correctly, the answer is this: in any topology, if you have a convergent sequence, then it converges as a net (because given a neighbourhood of the limit, eventually all points are in it).
On the other hand, a convergent net can have infinite subsets (notice that I said "subsets", not "subnets") that have many different accumulation points. To see this, let $\{a_n\}_{n\in\mathbb Z}$ be given by $$a_n=\begin{cases} q_n,&\ \mbox{ if }n<0\\ 1/(n+1),&\ \mbox{ if }n\geq0 \end{cases}$$ where $\{q_n\}$ is an enumeration of $\mathbb Q\cap[0,1]$. Then $a_n\to0$ as a net with the usual order on $\mathbb Z$, while every point in $[0,1]$ is an accumulation point of the set $\{a_0,a_1,a_{-1},a_2,a_{-2},\ldots\}$
If $v_n \rightarrow v$ in the weak* topology, $v_n$ need not converge in the usual topology. For example, in the Hilbert space $\ell_2$, with orthonormal basis $e_i$, $(e_i,f) \rightarrow 0$ since $\|f\|_2^2=\sum_i (e_i,f)^2 < \infty$ for all $f \in \ell_2$, so $e_i \rightarrow 0$ in the weak* topology, but clearly $e_i$ is not convergent in the norm topology of $\ell_2$.
Since the weak* topology is weaker than the usual topology, we do have that if $f_i \rightarrow f$ in the usual topology, then $f_i \rightarrow f$ in the weak* topology, and since the weak* topology is Hausdorff, if $f_i \rightarrow f$ strongly and $f_i \rightarrow g$ weakly then the limits coincide.
• You mean that if there exists the topology for the pointwise convergence of the elements in the dual space, it is stronger than the weak* topology? – vivian Apr 22 '13 at 1:44
If the normed space is assumed to be separable, the unit ball in the dual space is also weak$^\star$-sequentially compact. In this case norm-bounded subsets of the dual are metrizable, and so weak$^\star$-compactness and weak$^\star$-sequentially compactness on these norm-bounded sets coincide.
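For reference, a standard metric implementing this (assuming $\{x_n\}$ is a dense sequence in the unit ball of the separable space $X$; the formula is the usual textbook one, not taken from this thread) is
$$d(f,g) = \sum_{n=1}^{\infty} 2^{-n}\, \frac{|\langle f-g,\ x_n\rangle|}{1+|\langle f-g,\ x_n\rangle|}, \qquad f,g \in B_{X^{*}},$$
and it induces the weak$^\star$ topology on norm-bounded subsets of $X^{*}$.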
A short sketch of the proof for the metrizabilty can be found here. | 2019-09-15 05:56:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9771488308906555, "perplexity": 132.6627197904959}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514570740.10/warc/CC-MAIN-20190915052433-20190915074433-00268.warc.gz"} |
https://tongfamily.com/2022/01/22/synology-video-server-and-hyperbackup/ | OK, I have a Synology and I have to admit I use about 1% of the applications that are there, you can run docker containers, they have all sorts of dedicated applications, but to reduce the attack surface area for various SmartHome devices, I like to have purpose devices since the cost is relatively low. So for my Homebridge, I use a Raspberry Pi and the Synology is just for storing files.
Still, there are two applications I do use on the thing:
1. Hyperbackup to Google Drive. If you buy a $10/month Google Workspace, you get their Gmail, but also 2PB of storage. So I've been pushing end-to-end encrypted backups of the Synology up there. This used to work horribly because the old Google Backup and Sync choked on these files, but the new Google Drive works way better: it doesn't load or read directories that are not synced, so you can have TBs of data up there and you can still use it to store small files. This thing also supports backups to the more traditional AWS S3 and other places, but for a home server, this is not a bad hack 🙂
2. Media Server and Indexing Service and Video Station. This is actually sort of crazy complicated, but there are two services. The first is Media Server, which is a fancy way of saying that instead of just file protocols, the Synology supports DLNA and UPnP, so that specific devices like say an LG OLED TV (hint hint) can browse and load video files. This is how HD Homerun works to publish cable signals. The confusing thing is that when you enable it nothing happens. That is because you have to start their Control Panel > Indexing Service (and yes, I'm not quite sure why one is a Control Panel thing and the other a separate application, but that's the way it is organized) and tell it where your video files are. There is a default, which is /videos, but otherwise it doesn't search them out. There is also Video Station, which provides a web interface to your video collection, which is kind of cool if you just want to access it from a browser. Basically, the Synology taxonomy seems to be that a Service is in the operating system, a Server is a utility that runs as a separate application and a Station is an end-user application 🙂
Now, what quality are you actually getting from these various settings? Specifically, can you really get 4K with HDR output from them. Well, with the Video Station when you click play, it says that it can play in original resolution but with the Media Server on an LG C1, yes you get the full 4K with HDR stream and you get the digital bits so Atmos seems to work. Nice job LG and Synology! | 2022-09-28 04:24:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2166365683078766, "perplexity": 1515.654434446414}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00796.warc.gz"} |
https://physics.stackexchange.com/questions/107278/mars-circular-intercept-orbit | # Mars Circular Intercept Orbit
I've been trying to simulate a spacecraft entering a circular capture orbit about Mars using Mathematica, but am having a little trouble. The simulation starts when the spacecraft enters Mars' sphere of influence. In order to find the correct positions and velocities for a circular orbit, I calculated the point of closest approach between the spacecraft and Mars, and then found their positions and velocities at that point. I also calculated the angle that the spacecraft's velocity vector makes with the positive x-axis at closest approach. Once I had these bits, I used the WhenEvent function to give the spacecraft the same velocity as Mars + the velocity required for a circular orbit about Mars at closest approach (looking at the WhenEvent code will hopefully make things clearer), using the following formulae:
$$v_{sx}=v_{mx}+ \sqrt{\frac{GM_{mars}}{r}} \sin{\theta},$$ $$v_{sy}=v_{my}+\sqrt{\frac{GM_{mars}}{r}} \cos{\theta},$$
where $v_{sx}$ and $v_{sy}$ are the x- and y-components of the spacecraft's velocity, $v_{mx}$ and $v_{my}$ are the x- and y-components of Mars' velocity, $r$ is the radial separation between the spacecraft and Mars and $\theta$ is the angle made between the positive x-axis and the spacecraft's velocity vector (all at closest approach).
The following is the Mathematica code, which should work if you just plonk it into a new notebook:
Remove["Global`*"] (*clear user-defined symbols; note the backtick context mark*)
G = 6.672*10^-11; (*Gravitational Constant*)
m[0] = 1.988544*10^30 ;(*Mass of Sun*)
m[2] = 6.4185*10^23; (*Mass of Mars*)
m[3] = 1000; (*Mass of spacecraft*)
(*Mars' position and velocity at spacecraft's entrance to Mars' SOI*)
p[2] = {-1.3528201165963936*^11, -1.8675833330580637*^11};
v[2] = {20533.99477259318, -12116.993615214029} ;
r[2] = 3.3899*10^6 ;(*Mean planetary radius of Mars*)
(*Spacecraft's position and velocity at entrance to Mars' SOI*)
p[3] = {-1.3470234059931529*^11, -1.8670782689951355*^11};
v[3] = {17250.213610967, -12349.519721984863};
(*Simulation running time*)
tmax = 86400*5;
Soln = NDSolve[{
x[2]''[t] == -((G m[0] x[2][t])/((x[2][t])^2 + (y[2][t])^2)^(3/2)),
y[2]''[t] == -((G m[0] y[2][t])/((x[2][t])^2 + (y[2][t])^2)^(3/2)),
x[3]''[t] == -((G m[0] x[3][t])/((x[3][t])^2 + (y[3][t])^2)^(3/2)) - (G m[2] (x[3][t]- x[2][t]))/((x[3][t] - x[2][t])^2 + (y[3][t] - y[2][t])^2)^(3/2),
y[3]''[t] == -((G m[0] y[3][t])/((x[3][t])^2 + (y[3][t])^2)^(3/2)) - (G m[2] (y[3][t]- y[2][t]))/((x[3][t] - x[2][t])^2 + (y[3][t] - y[2][t])^2)^(3/2),
x[2][0] == p[2][[1]], y[2][0] == p[2][[2]], x[3][0] == p[3][[1]],
y[3][0] == p[3][[2]], x[2]'[0] == v[2][[1]], y[2]'[0] == v[2][[2]],
x[3]'[0] == v[3][[1]], y[3]'[0] == v[3][[2]],
WhenEvent[t == 173379, (*at closest approach, apply the circular-capture burn*)
 {x[3]'[t] -> 20785.020973205566 + Sqrt[(G m[2])/6.395400814228174*^6] Sin[-0.7053105626554602],
  y[3]'[t] -> (*v[2][[2]]*) -11763.84750366211 - Sqrt[(G m[2])/6.395400814228174*^6] Cos[-0.7053105626554602]}]}, {x[2][t], y[2][t], x[3][t],
y[3][t]}, {t, 0, tmax}, StartingStepSize -> 0.001,
AccuracyGoal -> 17, PrecisionGoal -> 17,
Method -> "StiffnessSwitching", MaxSteps -> 10000000]
Show[ParametricPlot[
Evaluate[{{x[2][t], y[2][t]}, { x[3][t], y[3][t]}} /. Soln], {t, 0,
tmax}, AxesLabel -> {x, y}, PlotStyle -> Automatic,
PlotRange -> Full, ImageSize -> Large],
Graphics[{Red, Disk[{0, 0}, r[2]]}]]
Animate[ParametricPlot[{{x[2][t], y[2][t]}, {x[3][t], y[3][t]}} /.
Soln /. t -> a, {t, Max[0, a - 20000], a}, AxesLabel -> {x, y},
Axes -> False, ImageSize -> Large], {a, 0, tmax},
AnimationRate -> 10000]
Also, for those who are interested in calculating the positions and velocities at closest approach, do the following: comment out the WhenEvent part in NDSolve and then put the following code below the NDSolve output:
Spacecraft Minimum Approach Radius at Intercept Point
dt = 60;
MarsPosition =
Table[{x[2][t], y[2][t]} /. Soln, {t, 0, 86400*2.2, dt}] ;
SpaceCraftPosition =
Table[{x[3][t], y[3][t]} /. Soln, {t, 0, 86400*2.2, dt}] ;
dxy = Sqrt[(MarsPosition - SpaceCraftPosition)^2];
dr = Table[Norm[dxy[[i]]], {i, 1, Length[dxy]}];
(*Find closest approach of spacecraft to Mars*)
mindr = Min[dr] (*Pretty accurate using forward difference for speed!*)
(*Finds index position of mindr in dr*)
mindrindex = Position[dr, mindr]
(*Finds time of closest approach*)
mindrtime = dt*2891(*mindrindex*)
Heliocentric Velocity and Position of Spacecraft and Mars at Minimum Approach Radius
xy2f = MarsPosition[[2891]];
xy2f2 = xy2f[[1]];
x2f = xy2f2[[1]]
y2f = xy2f2[[2]]
xy2i = MarsPosition[[2890]];
xy2i2 = xy2i[[1]];
x2i = xy2i2[[1]];
y2i = xy2i2[[2]];
v2x = (x2f - x2i)/dt
v2y = (y2f - y2i)/dt
xy3f = SpaceCraftPosition[[2891]];
xy3f2 = xy3f[[1]];
x3f = xy3f2[[1]]
y3f = xy3f2[[2]]
xy3i = SpaceCraftPosition[[2890]];
xy3i2 = xy3i[[1]];
x3i = xy3i2[[1]];
y3i = xy3i2[[2]];
v3x = (x3f - x3i)/dt
v3y = (y3f - y3i)/dt
theta = ArcTan[v3x, v3y]
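A side note for anyone adapting this: the forward differences above can be avoided entirely, since the interpolating functions returned by NDSolve can be differentiated directly. A sketch, assuming Soln is the NDSolve output from the question:

(*Velocities at the closest-approach time, exact to interpolation accuracy*)
v3xExact = D[x[3][t] /. First[Soln], t] /. t -> mindrtime;
v3yExact = D[y[3][t] /. First[Soln], t] /. t -> mindrtime;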
This is the intercept orbit I got with the above values:
As can be seen from the output, the spacecraft doesn't go into a circular orbit, but instead goes into a highly eccentric one. Messing around manually with the value of theta and setting it to -1.1 radians seems to get an orbit with quite a low eccentricity, but it is a massive difference from the value of -0.705 radians calculated using the spacecraft's closest approach velocity, so I feel like I must be doing something wrong. At first I thought the error might be because I was using a simple forward difference to calculate velocities, but surely this is adequate for this situation? Any help would be appreciated.
• Would Computational Science be a better home for this question? – Qmechanic Apr 7 '14 at 15:47
• What coordinate system are you using? What is your x axis? – HopDavid Apr 7 '14 at 20:58
• I'm using a standard Cartesian coordinate system. – InquisitiveInquirer Apr 7 '14 at 21:46
• Are you using the barycenter of the solar system for your origin? The ecliptic plane for your xy plane? Is θ the angle between the ship's velocity vector and Mars' velocity vector? I'm not familiar with Mathematica but have modeled orbits in other software. If I could better visualize the geometry of your scenario, I might be able to help. – HopDavid Apr 8 '14 at 1:03
• Sorry, I should have been more specific. Yes, the Solar System barycentre is the origin and the ecliptic is the xy plane. Though, $\theta$ is the angle that the spacecraft's velocity vector makes with the x-axis, not the angle between Mars and the spacecraft. That is, $\theta = \arctan{v_y/v_x}$, where $v_x$ and $v_y$ are the x and y component velocities of the spacecraft. – InquisitiveInquirer Apr 8 '14 at 6:15
$\theta$ is the angle made between the positive x-axis and the spacecraft's velocity vector (all at closest approach).
Don't you need $\theta$ to be the angle of the spacecraft's velocity relative to Mars? Try subtracting the Mars velocity vector from the spacecraft's velocity vector, calculating $\theta$ for the resultant vector and using that angle in the WhenEvent subroutine.
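For later readers: combining this answer with the fix the asker reports in the comment below, a minimal Mathematica sketch of the working burn (reusing the variables computed in the question; the prograde sign convention is my assumption) is:

(*Angle of the spacecraft's position relative to Mars at closest approach*)
rRel = {x3f - x2f, y3f - y2f};
phi = ArcTan[rRel[[1]], rRel[[2]]];
vCirc = Sqrt[(G m[2])/Norm[rRel]];   (*circular-orbit speed at this radius*)
(*Mars' velocity plus a tangential velocity perpendicular to rRel*)
vNew = {v2x - vCirc Sin[phi], v2y + vCirc Cos[phi]};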
• After some testing it looks like I had to set $\theta$ equal to the angle that the spacecraft's radial vector makes relative to Mars, not its velocity vector. But since you got me thinking on those lines I'll give you the bounty, thank you! – InquisitiveInquirer Apr 10 '14 at 15:43 | 2020-02-26 13:09:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48059508204460144, "perplexity": 1059.071776583409}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146342.41/warc/CC-MAIN-20200226115522-20200226145522-00319.warc.gz"} |
https://zbmath.org/?q=rf%3A473270 | ## Found 51 Documents (Results 1–51)
### Compressible flows on moving domains: stabilized methods, weakly enforced essential boundary conditions, sliding interfaces, and application to gas-turbine modeling. (English)Zbl 1390.76805
MSC: 76N15 76M10 65M60
Full Text:
### Finite element computation of buzz instability in supersonic air intakes. (English)Zbl 1356.76168
Bazilevs, Yuri (ed.) et al., Advances in computational fluid-structure interaction and flow simulation. New methods and challenging computations. Based on the presentations at the conference, AFSI, Tokyo, Japan, March 19–21, 2014. Basel: Birkhäuser/Springer (ISBN 978-3-319-40825-5/hbk; 978-3-319-40827-9/ebook). Modeling and Simulation in Science, Engineering and Technology, 65-76 (2016).
MSC: 76M10 76E09 76J20
Full Text:
### Robust DPG methods for transient convection-diffusion. (English)Zbl 1357.65173
Barrenechea, Gabriel R. (ed.) et al., Building bridges: connections and challenges in modern approaches to numerical partial differential equations. Selected papers based on the presentations at the 101st LMS-EPSRC symposium, Durham, UK, July 8–16, 2014. Cham: Springer (ISBN 978-3-319-41638-0/hbk; 978-3-319-41640-3/ebook). Lecture Notes in Computational Science and Engineering 114, 179-203 (2016).
MSC: 65M60 35K20 65M12
Full Text:
### A high-order discontinuous Galerkin method with unstructured space-time meshes for two-dimensional compressible flows on domains with large deformations. (English)Zbl 1390.76366
MSC: 76M10 65M60
Full Text:
### Deforming-spatial-domain/stabilized space-time (DSD/SST) method in computation of non-Newtonian fluid flow and heat transfer with moving boundaries. (English)Zbl 1398.76130
MSC: 76M10 76D05 74F10 65Y05 76M25
Full Text:
### Computer modeling techniques for flapping-wing aerodynamics of a locust. (English)Zbl 1290.76170
MSC: 76Z10 92C10
Full Text:
### Space-time techniques for computational aerodynamics modeling of flapping wings of an actual locust. (English)Zbl 1286.76179
MSC: 76Z10 92C10
Full Text:
### Space-time computational analysis of bio-inspired flapping-wing aerodynamics of a micro aerial vehicle. (English)Zbl 1286.76180
MSC: 76Z10 92C10
Full Text:
### Space-time fluid-structure interaction methods. (English)Zbl 1248.76118
MSC: 76M25 74S30 74F10
Full Text:
### Viscous flow in a mixed compression intake. (English)Zbl 1382.76214
MSC: 76N15 76M10
Full Text:
Full Text:
Full Text:
### Space–time SUPG formulation of the shallow-water equations. (English)Zbl 1427.35212
MSC: 35Q35 76M10
Full Text:
### New finite element formulation based on bubble function interpolation for the transient compressible Euler equations. (English)Zbl 1425.76146
MSC: 76M10 35Q31 76N15
Full Text:
Full Text:
MSC: 76N25
Full Text:
### Finite elements in fluids: special methods and enhanced solution techniques. (English)Zbl 1177.76203
MSC: 76M10 76-02
Full Text:
### Finite elements in fluids: stabilized formulations and moving boundaries and interfaces. (English)Zbl 1177.76202
MSC: 76M10 76-02 74F10
Full Text:
### Fluid–structure interaction modeling of complex parachute designs with the space–time finite element techniques. (English)Zbl 1124.76033
MSC: 76M10 74F10 74S05
Full Text:
### A stabilized finite element method based on SGS models for compressible flows. (English)Zbl 1120.76331
MSC: 76M10 76N15
Full Text:
### Solution techniques for the fully discretized equations in computation of fluid-structure interactions with the space-time formulations. (English)Zbl 1123.76035
MSC: 76M10 76D05 74F10
Full Text:
### Stabilization and shock-capturing parameters in SUPG formulation of compressible flows. (English)Zbl 1122.76061
MSC: 76M10 76N15 76L05
Full Text:
### Comparison of finite element and pendulum models for simulation of sloshing. (English)Zbl 1084.76539
MSC: 76M10 76D27
Full Text:
### Stabilized finite element formulation of buoyancy-driven incompressible flows. (English)Zbl 1001.76052
MSC: 76M10 76D05 76R10 80A20 65Y05
Full Text:
### Computation of flows in supersonic wind-tunnels. (English)Zbl 1113.76404
MSC: 76M10 76J20
Full Text:
Full Text:
### Computation of internal and external compressible flows using $$EDICT$$. (English)Zbl 1047.76048
MSC: 76M10 76N15 65Y05
Full Text:
### Fluid-object interactions in interior ballistics. (English)Zbl 0997.76048
MSC: 76M10 76N15 74F10
Full Text:
### Stabilized-finite-element/interface-capturing technique for parallel computation of unsteady flows with interfaces. (English)Zbl 0994.76050
MSC: 76M10 76D05 65Y05
Full Text:
Full Text:
### A general procedure for deriving stabilized space-time finite element methods for advective-diffusive problems. (English)Zbl 0982.76062
MSC: 76M10 76R99
Full Text:
Full Text:
### Finite element computation of unsteady viscous compressible flows. (English)Zbl 0953.76051
MSC: 76M10 76H05 76N15
Full Text:
### A unified finite element formulation for compressible and incompressible flows using augmented conservation variables. (English)Zbl 0943.76050
MSC: 76M10 76N15 76D05
Full Text:
### Explicit reproducing kernel particle methods for large deformation problems. (English)Zbl 0909.73088
MSC: 74S30 74B20 76B99
Full Text:
### A high dimensional moving mesh strategy. (English)Zbl 0890.65101
MSC: 65M20 65M50 35K15
Full Text:
### On the computation of the boundary integral of space-time deforming finite elements. (English)Zbl 0869.76035
MSC: 76M10 76N10
Full Text:
### A space-time Galerkin/least-squares finite element formulation of the Navier-Stokes equations for moving domain problems. (English)Zbl 0899.76259
MSC: 76M10 76D05 65M60
Full Text:
Full Text:
Full Text:
### Parallel finite element simulation of large ram-air parachutes. (English)Zbl 0882.76045
MSC: 76M10 76D05 65Y05
Full Text:
### A space-time formulation for multiscale phenomena. (English)Zbl 0869.65061
MSC: 65M60 35G10 35K25
Full Text:
Full Text:
### Massively parallel finite element simulation of compressible and incompressible flows. (English)Zbl 0848.76040
MSC: 76M10 76N10 65Y05
Full Text:
### Mesh update strategies in parallel finite element computations of flow problems with moving boundaries and interfaces. (English)Zbl 0848.76036
MSC: 76M10 65Y05
Full Text:
### Finite element solution strategies for large-scale flow simulations. (English)Zbl 0846.76041
MSC: 76M10 76B10
Full Text:
### Space-time oriented streamline diffusion methods for non-linear conservation laws in one dimension. (English)Zbl 0799.65104
Reviewer: S.Jiang (Bonn)
Full Text:
### SUPG finite element computation of viscous compressible flows based on the conservation and entropy variables formulations. (English)Zbl 0772.76032
MSC: 76M10 76N10
Full Text:
all top 5
all top 5
all top 3
all top 3 | 2022-08-15 22:13:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5351303815841675, "perplexity": 12229.560258644942}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572212.96/warc/CC-MAIN-20220815205848-20220815235848-00648.warc.gz"} |
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-6th-edition-blitzer/chapter-2-section-2-2-quadratic-functions-exercise-set-page-330/30

## Precalculus (6th Edition) Blitzer
We have the quadratic function $f\left( x \right)=2{{x}^{2}}-7x-4$.

Step 1: Determine how the parabola opens. Note that $a$, the coefficient of ${{x}^{2}}$, is 2. If $a>0$ the parabola opens upward, and if $a<0$ it opens downward. Also, if $\left| a \right|$ is small, the parabola opens more flatly than if $\left| a \right|$ is large. Since $a=2>0$, the graph opens upward.

Step 2: Evaluate the vertex. The x-coordinate is
\begin{align} x &=-\frac{b}{2a} =-\frac{-7}{2\times 2} =\frac{7}{4} \end{align}
and the y-coordinate is
\begin{align} y &=2{{\left( \frac{7}{4} \right)}^{2}}-7\left( \frac{7}{4} \right)-4 =\frac{49}{8}-\frac{49}{4}-4 =-\frac{81}{8} \end{align}
Thus, the vertex is $\left( \frac{7}{4},-\frac{81}{8} \right)$.

Step 3: The parabola therefore opens upward, has its vertex at $\left( \frac{7}{4},-\frac{81}{8} \right)$, intersects the x-axis at $4$ and $-\frac{1}{2}$, and intersects the y-axis at $-4$.
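A quick numerical check of the vertex and intercepts (a sketch; numpy's root finder is used only to confirm the factoring):

```python
import numpy as np

a, b, c = 2, -7, -4
x_v = -b / (2 * a)                # vertex x-coordinate: 7/4
y_v = a * x_v**2 + b * x_v + c    # vertex y-coordinate: -81/8
print(x_v, y_v)                   # 1.75 -10.125
print(np.roots([a, b, c]))        # x-intercepts: [ 4.  -0.5]
```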
https://quomodocumque.wordpress.com/tag/erdos-renyi/

## Random simplicial complexes
This is a post about Matt Kahle’s cool paper “Sharp vanishing thresholds for cohomology of random flag complexes,” which has just been accepted in the Annals.
The simplest way to make a random graph is to start with n vertices and then, for each pair (i,j) independently, put an edge between vertices i and j with probability p. That's called the Erdös-Rényi graph G(n,p), after the two people who first really dug into its properties. What's famously true about Erdös-Rényi graphs is that there's a sharp threshold for connectedness. Imagine n being some fixed large number and p varying from 0 to 1 along a slider. When p is very small relative to n, G(n,p) is very likely to be disconnected; in fact, if
$p = (0.9999) \frac{\log n}{n}$
there is very likely to be an isolated vertex, which makes G(n,p) disconnected all by itself.
On the other hand, if
$p = (1.0001) \frac{\log n}{n}$
then G(n,p) is almost surely connected! In other words, the probability of connectedness “snaps” from 0 to 1 as you cross the barrier p = (log n)/n. Of course, there are lots of other interesting questions you can ask — what exactly happens very near the “phase transition”? For p < (log n)/n, what do the components look like? (Answer: for some range of p there is, with probability 1, a single “giant component” much larger than all others. For instance, when p = 1/n the giant component has size around n^{2/3}.)
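A quick empirical check of this threshold (a sketch using networkx; the constants 0.9 and 1.1 are arbitrary choices just below and above the threshold):

```python
import networkx as nx
import numpy as np

n = 2000
for c in (0.9, 1.1):
    p = c * np.log(n) / n
    # Count how often a fresh G(n,p) sample comes out connected
    hits = sum(nx.is_connected(nx.erdos_renyi_graph(n, p)) for _ in range(20))
    print(f"c = {c}: connected in {hits}/20 trials")
```

Even at this modest n, the fraction of connected samples snaps from near 0 to near 1 as c crosses 1.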
I think it’s safe to say that the Erdös-Rényi graph is the single most-studied object in probabilistic combinatorics.
But Kahle asked a very interesting question about it that was completely new to me. Namely: what if you consider the flag complex X(n,p), a simplicial complex whose k-simplices are precisely the k-cliques in G(n,p)? X(n,p) is connected precisely when G(n,p) is, so there's nothing new to say from that point of view. But, unlike the graph, the complex has lots of interesting higher homology groups! The connectedness threshold says that dim H_0(X(n,p)) is 1 above some sharp threshold and larger below it. What Kahle proves is that a similar threshold exists for all the homology. Namely, for each k there's a range (bounded approximately by $n^{-1/k}$ and $(\log n / n)^{1/(k+1)}$) such that H_k(X(n,p)) vanishes when p is outside the range, but not when p is inside the range! So there are two phase transitions; first, H_k appears, then it disappears. (If I understand correctly, there's a narrow window where two consecutive Betti numbers are nonzero, but most of the time there's only one nonzero Betti number.) [The original post includes a graph showing the appearance and disappearance of the Betti numbers in different ranges of p.]
This kind of “higher Erdös-Rényi theorem” is, to me, quite dramatic and unexpected. (One consequence that I like a lot; if you condition on the complex having dimension d, i.e. d being the size of the largest clique in G(n,p), then with probability 1 the homology of the complex is supported in middle degree, just as you might want!) And there’s other stuff there too — like a threshold for the fundamental group of X(n,p) to have property T.
For yet more about this area, see Kahle's recent survey on the topology of random simplicial complexes. The probability that a random graph has a spectral gap, the distribution of Betti numbers of X(n,p) in the regime where they're nonzero, the behavior of torsion, etc., etc.
https://www.transtutors.com/questions/akron-inc-owns-all-outstanding-stock-of-toledo-corporation-amortization-expense-of-1-3333387.htm

# Akron, Inc., owns all outstanding stock of Toledo Corporation

Akron, Inc., owns all outstanding stock of Toledo Corporation. Amortization expense of $15,000 per year for patented technology resulted from the original acquisition. For 2013, the companies had the following account balances:

| Account | Akron | Toledo |
| --- | --- | --- |
| Sales | $1,100,000 | $600,000 |
| Cost of goods sold | 500,000 | 400,000 |
| Operating expenses | 400,000 | 220,000 |
| Investment income | Not given | -0- |
| Dividends paid | 80,000 | 30,000 |

Intra-entity sales of $320,000 occurred during 2012 and again in 2013. This merchandise cost $240,000 each year. Of the total transfers, $70,000 was still held on December 31, 2012, with $50,000 unsold on December 31, 2013.

a. For consolidation purposes, does the direction of the transfers (upstream or downstream) affect the balances to be reported here?

b. Prepare a consolidated income statement for the year ending December 31, 2013.
KATRAJU S
a. In this business combination, the direction of the intra-entity transfers (either upstream or downstream) is not important to the consolidated totals: because Akron owns 100 percent of Toledo, there is no noncontrolling interest, so deferring the unrealized gross profit affects the consolidated figures identically in either direction.
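A sketch of the part (b) arithmetic implied by these figures (my reconstruction, not the tutor's full posted answer, using the 25% gross-profit rate that follows from the $320,000 transfer price against the $240,000 cost):

- Consolidated sales: 1,100,000 + 600,000 - 320,000 intra-entity = $1,380,000
- Consolidated cost of goods sold: 500,000 + 400,000 - 320,000 - 17,500 (2012 deferred gross profit, 25% of $70,000, recognized in 2013) + 12,500 (2013 deferral, 25% of $50,000) = $575,000
- Operating expenses: 400,000 + 220,000 + 15,000 amortization = $635,000
- Consolidated net income: 1,380,000 - 575,000 - 635,000 = $170,000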
https://www.physicsforums.com/threads/hyperbolic-motions.377686/

# Hyperbolic motions
1. Feb 12, 2010
### Altabeh
Hi everybody
I've been lately a little bit concerned over the hyperbolic motions that have the following equations in (ct,x)-space:
$$\frac{x^2}{(c^2/a)^2}-\frac{(ct)^2}{(c^2/a)^2}=1$$.
We know that event horizons are the lines that form a 45-degree angle with both the ct- and x-axes. So what actually assures us that here, for instance, for t=0, $$x=\pm c^2/a$$ lies inside the event horizons? Is this just because $$a$$ cannot exceed $$c$$ in magnitude?
AB
2. Feb 12, 2010
### George Jones
Staff Emeritus
No.
$$x^2 - \left(ct\right)^2 = \left( \frac{c^2}{a} \right)^2,$$
so $a \rightarrow \infty$ gives the horizons. For $t=0$, any value of $x$ except $x = 0$ lies inside the horizons.
3. Feb 12, 2010
### Altabeh
Yeah, I got it!
Thanks
4. Feb 12, 2010
### George Jones
Staff Emeritus
Also, differentiating
$$x^2 - \left(ct\right)^2 = \left( \frac{c^2}{a} \right)^2,$$
gives
$$\frac{dx}{dt} = c \frac{ct}{x}.$$

Consequently,

$$-c < \frac{dx}{dt} < c$$
gives that $\left(ct , x \right)$ lies inside the horizons. (Indeed, $x^2 - (ct)^2 = (c^2/a)^2 > 0$ implies $|x| > c|t|$, so $|ct/x| < 1$.)
5. Feb 12, 2010
### Altabeh
Could you explain this a little bit more?
6. Feb 12, 2010
### DrGreg
Combine the following and what do you get?
https://cran.yu.ac.kr/web/packages/trialr/vignettes/A260-TITE-CRM.html

# Time-to-Event Continual Reassessment Method
#### 2020-10-15
This vignette concerns the Time to Event Continual Reassessment Method (TITE-CRM) dose-finding clinical trial design.
Cheung and Chappell (2000) introduced TITE-CRM as a variant of the regular CRM (O'Quigley, Pepe, and Fisher 1990) that handles late-onset toxicities. Dose-finding trials tend to use a short toxicity window after the commencement of therapy, during which each patient is evaluated for the presence or absence of dose-limiting toxicity (DLT). This approach works well in treatments like chemotherapy where toxic reactions are expected to manifest relatively quickly. In contrast, one of the hallmarks of radiotherapy, for instance, is that related adverse reactions can manifest many months after the start of treatment. A similar phenomenon may arise with immunotherapies.
In adaptive dose-finding clinical trials, where doses are selected mid-trial in response to the outcomes experienced by patients evaluated hitherto, late-onset toxic events present a distinct methodological challenge. Naturally, the toxicity window will need to be long enough to give the trial a good chance of observing events of interest. If, however, we wait until each patient completes the evaluation window before using their outcome to forecast the best dose, the trial may take an infeasibly long time and ignore pertinent interim data.
TITE-CRM presents a solution by introducing the notion of a partial tolerance event. If a patient is half way through the evaluation window and has not yet experienced toxicity, we may say that they have experienced half a tolerance. This simple novelty allows partial information to be used in dose-recommendation decisions. If the patient goes on to complete the window with no toxic reaction, they will be regarded as having completely tolerated treatment, as is normally the case with CRM and other dose-finding algorithms. This notion of partial events is only applied to tolerances, however. If a patient experiences toxicity at any point during the evaluation window, they are immediately regarded as having experienced 100% of a DLT event.
To illustrate TITE-CRM mathematically, we start with the likelihood from the plain vanilla CRM. Let $$Y_i$$ be a random variable taking values $$\{0, 1\}$$ reflecting the absence and presence of DLT respectively in patient $$i$$. A patient administered dose $$x_i$$ has estimated probability of toxicity $$F(x_i, \theta)$$, where $$\theta$$ represents the set of model parameters. The likelihood component arising from patient $$i$$ is
$F(x_i, \theta)^{Y_i} (1 - F(x_i, \theta))^{1-Y_i}$
and the aggregate likelihood after the evaluation of $$J$$ patients is
$L_J(\theta) = \prod_{i=1}^J \left\{ F(x_i, \theta) \right\}^{Y_i} \left\{ 1 - F(x_i, \theta) \right\}^{1-Y_i}$
Cheung and Chappell (2000) observed that each patient may provide a weight, $$w_i$$, reflecting the extent to which their outcome has been evaluated. The weighted likelihood is
$L_J(\theta) = \prod_{i=1}^J \left\{ w_i F(x_i, \theta) \right\}^{Y_i} \left\{ 1 - w_i F(x_i, \theta) \right\}^{1-Y_i}$
TITE-CRM weights the outcomes according to the extent to which patients have completed the evaluation period. To illustrate the design, we reproduce the example given on p.124 of Cheung (2011). Four patients have been treated at dose-level 3 and all are part-way through the 126-day toxicity evaluation window.
The authors use the empiric model so that there is one parameter, $$\theta = \beta$$, the dose-toxicity relation is $$F(x_i, \beta) = x_i^{exp(\beta)}$$, and a $$N(0, \sigma_{\beta}^2)$$ prior is specified on $$\beta$$.
```r
library(trialr)
fit <- stan_crm(skeleton = c(0.05, 0.12, 0.25, 0.40, 0.55), target = 0.25,
                doses_given = c(3, 3, 3, 3),
                tox = c(0, 0, 0, 0),
                weights = c(73, 66, 35, 28) / 126,
                model = 'empiric', beta_sd = sqrt(1.34), seed = 123)
fit
#>   Patient Dose Toxicity    Weight
#> 1       1    3        0 0.5793651
#> 2       2    3        0 0.5238095
#> 3       3    3        0 0.2777778
#> 4       4    3        0 0.2222222
#>
#>   Dose Skeleton N Tox ProbTox MedianProbTox ProbMTD
#> 1    1     0.05 0   0  0.0749       0.00703  0.1315
#> 2    2     0.12 0   0  0.1171       0.02993  0.0993
#> 3    3     0.25 4   0  0.1886       0.10083  0.1507
#> 4    4     0.40 0   0  0.2779       0.21949  0.1752
#> 5    5     0.55 0   0  0.3845       0.37180  0.4432
#>
#> The model targets a toxicity level of 0.25.
#> The dose with estimated toxicity probability closest to target is 4.
#> The dose most likely to be the MTD is 5.
#> Model entropy: 1.45
```
The first table gives a summary of the patient information. We see that each patient has received dose-level 3, none have yet experienced toxicity although all are only partly through the evaluation window. The second table summarises dose-level information. We see that dose-level 4 has estimated mean probability of toxicity closest to the target 25%, although dose-level 5 is the dose most frequently advocated by the dose-toxicity curves generated by MCMC. This exuberance should be tempered by the fact that we have not yet treated any patients at dose-level 4, although it is currently recommended for the next patient.
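As a worked check of the weighting: patient 1 has been followed for 73 of the 126 days, so $w_1 = 73/126 \approx 0.579$, which is exactly the value in the Weight column above. A patient who completes the window without toxicity would enter with weight 1, recovering the ordinary CRM likelihood contribution.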
A TITE-CRM option is provided for each of the CRM variants implemented in trialr. It is enabled simply by specifying the weights parameter. The necessity to provide weights under TITE-CRM rather obscures the attraction of using the outcome string approach of describing patients’ doses and DLT outcomes demonstrated in the CRM vignette. Thus, we provide stan_crm the three vectors doses_given, tox and weights to convey the patient-level information.
The object returned by stan_crm is the same, regardless of whether weights are provided or not. Thus, all of the visualisation methods presented in the CRM visualisation vignette apply.
## Other CRM vignettes
There are many vignettes illustrating the CRM and other dose-finding models in trialr. Be sure to check them out.
# trialr and the escalation package
escalation is an R package that provides a grammar for specifying dose-finding clinical trials. For instance, it is common for trialists to say something like ‘I want to use this published design… but I want it to stop once $$n$$ patients have been treated at the recommended dose’ or ‘…but I want to prevent dose skipping’ or ‘…but I want to select dose using a more risk-averse metric than merely closest-to-target’.
trialr and escalation work together to achieve these goals. trialr provides model-fitting capabilities to escalation, including the CRM methods described here. escalation then provides additional classes to achieve all of the above custom behaviours, and more.
escalation also provides methods for running simulations and calculating dose-paths. Simulations are regularly used to appraise the operating characteristics of adaptive clinical trial designs. Dose-paths are a tool for analysing and visualising all possible future trial behaviours. Both are provided for a wide array of dose-finding designs, with or without custom behaviours like those identified above. There are many examples in the escalation vignettes at https://cran.r-project.org/package=escalation.
# trialr
trialr is available at https://github.com/brockk/trialr and https://CRAN.R-project.org/package=trialr
# References
Cheung, Ying Kuen. 2011. Dose Finding by the Continual Reassessment Method. New York: Chapman & Hall / CRC Press.
Cheung, Y K, and R Chappell. 2000. "Sequential Designs for Phase I Clinical Trials with Late-Onset Toxicities." Biometrics 56 (4): 1177–82.

O'Quigley, J, M Pepe, and L Fisher. 1990. "Continual Reassessment Method: A Practical Design for Phase 1 Clinical Trials in Cancer." Biometrics 46 (1): 33–48.
http://answers.com.xemphimonlines.com/Q/FAQ/10184

## Percentages, Fractions, and Decimal Values
Parent Category: Math and Arithmetic
Percentages are defined as a fraction or portion of a whole. Generally percentages deal with an amount out of one hundred, but they appear throughout sales, economics, and science; walk into any store and you're sure to find something marked a certain percent off. Fractions are numbers written as a numerator over a denominator, usually with both expressed as integers. There are four types of fractions: simple fractions, complex fractions, mixed fractions, and improper fractions. Decimals are also non-integers; they are written in forms like 0.34 and are often rounded off, since some decimal expansions continue indefinitely. Both decimals and fractions can be converted to percentages.
As of 2013, Hispanic or Latino people made up 17.1% of the US population. Around 37.6 million people spoke Spanish in the home as of 2011; that is two-thirds of the non-English-speaking population in the US.
It is: 3.25% of 58 = 1.885.

To find 5 percent of a number, multiply the number by 0.05. In this instance, 0.05 x 105 = 5.25, so 5 percent of 105 is 5.25.

According to the CDC, oral contraceptives have a failure rate of 9%, meaning their effectiveness is only 91%.

90/100 x 210 = 9 x 21 = 189.

20 percent of 365 is 73.

% rate = 4/15 * 100% = 0.2667 * 100% = 26.67%.

It should be 850%.

Converting a number to a percent is easy; just multiply by 100: 0.0051 x 100 = 0.51%.

First, you must make sure that the denominators are the same. If you are adding 3/6 + 4/8, find the LCM (least common multiple) of the denominators; in this case the LCM of 6 and 8 is 24. Then 3/6 = 12/24 and 4/8 = 12/24, so the sum is 24/24 = 1.

Calculating the concentration of a chemical solution is a basic skill all students of chemistry must develop early in their studies.

Okay. The 'full amount' of something is one hundred percent, and half of one hundred is fifty. For example, if something costs five hundred dollars and an advertisement says it is fifty percent off, it costs two hundred and fifty dollars, that is, half off the original price.

300% = 300 out of 100, and 300/100 equals 3, which can be written 3/1 or 3.0.

32%, which can be found by solving the equation 8 = 25x for x and multiplying the result by 100, or simply by noting that 8/25 = (8/25)(4/4) = 32/100.
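Conversions like these can be checked mechanically; a small Python sketch using the standard fractions module (the example values are taken from the answers on this page):

```python
from fractions import Fraction

print(Fraction(88, 100))                    # 22/25  (0.88 as a fraction)
print(Fraction(0.375).limit_denominator())  # 3/8    (37.5% as a fraction)
print(float(Fraction(8, 25)) * 100)         # 32.0   (8/25 as a percent)
```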
To convert 37.5% to a decimal, divide by 100: 37.5% ÷ 100 = 0.375.

Approximately 1 million people in the US are associated with gangs, across around 20,000 violent gangs. On average, 30,000 people die each year of gang violence in the US.

20 percent of 4 kg = 0.8 kg.

10.00 dollars: you would put 10 over 900 and multiply by 100.

A ewe's gestation is around 147–151 days.

(3/100) * 2 = 0.03 * 2 = 0.06.

40% of a number equates to 40/100 of that number: 250 x 40/100 = 100. If the number is 250,000 then 40% = 250,000 x 40/100 = 100,000.

1 m as a fraction of 1 kilometer = 1/1000.

Decimal: 0.88. Fraction: 22/25.

6% of the number = 38,500 * (6/100) = 2,310.

If rounded, 1/20th.
Expressed as a proper fraction in its simplest form, 0.7 is equal to 7/10, or seven tenths.

1 percent of 18,024 is 180.24.

250%. Set up 30/12 = x/100, so 2.5 = x/100 and x = 250.

260 is 400% of 65. Proportion: 260/65 = x/100. Cross-multiply: 65x = 26,000. Simplify: x = 400.

In order to solve something there needs to be an equation; this expression just needs to be simplified: (a² - 4)/(2 + 6a) x (3 + 9a)/(a² + 5a + 6). There are two fractions being multiplied, so factor and multiply the numerators and denominators together: (a - 2)(a + 2)/(2(1 + 3a)) x 3(1 + 3a)/((a + 2)(a + 3)), which cancels to 3(a - 2)/(2(a + 3)).

It means that your reaction is extremely efficient, that your product is stable, and that there are no side products; your lab skills are also very good. However, if you're an undergrad, you likely haven't checked the purity of this. It is likely to be contaminated with some of the excess starting material.

3,000,000 times 0.15 = 450,000.

22% of 150 = 0.22 * 150 = 33.

Percent or percentage equates to hundredths: 39% equates to 39/100, so 39% of 20 = 20 x 39/100 = 39/5 = 7.8.

The only equivalent decimals are created by expressing the number with hundredths, thousandths, and so on: 6.2 = 6.20 = 6.200. It can also be expressed as a mixed number and a fraction: 6.2 = 6 2/10 as a mixed number, which simplifies to 6 1/5, and 6.2 = 62/10 as an improper fraction, which simplifies to 31/5.

28% of a number is equivalent to 28/100 of that number; this fraction simplifies to 7/25.
85.5% of a number equates to 85.5/100 of that number: 85.5/100 = 855/1000 = 171/200 as the fraction in its simplest form.

30% of 170 = 0.3 * 170 = 51.

Percent means per hundred: 0.25 = 0.25 * 1 = 0.25 * 100/100 = 25/100 = 25 percent = 25%.

Convert 5.75 to a fraction: 5.75 = 5 75/100 = 5 3/4, or 23/4 as an improper fraction.

A percentage of a number is a part of the number that can be bigger than, smaller than, or equal to it. Example with 200: 100% of 200 = 200; 50% of 200 = 100; 150% of 200 = 300.
http://pointclouds.org/documentation/tutorials/basic_structures.php

# Getting Started / Basic Structures
The basic data type in PCL 1.x is a pcl::PointCloud. A PointCloud is a C++ class which contains the following data fields:
• width (int)

Specifies the width of the point cloud dataset in the number of points. width has two meanings:
• it can specify the total number of points in the cloud (equal with the number of elements in points – see below) for unorganized datasets;
• it can specify the width (total number of points in a row) of an organized point cloud dataset.
Note
An organized point cloud dataset is the name given to point clouds that resemble an organized image (or matrix) like structure, where the data is split into rows and columns. Examples of such point clouds include data coming from stereo cameras or Time Of Flight cameras. The advantages of an organized dataset is that by knowing the relationship between adjacent points (e.g. pixels), nearest neighbor operations are much more efficient, thus speeding up the computation and lowering the costs of certain algorithms in PCL.
Note
A projectable point cloud dataset is the name given to point clouds that have a correlation according to a pinhole camera model between the (u,v) index of a point in the organized point cloud and the actual 3D values. This correlation can be expressed in its easiest form as: u = f*x/z and v = f*y/z
Examples:
cloud.width = 640; // there are 640 points per line
• height (int)

Specifies the height of the point cloud dataset in the number of points. height has two meanings:
• it can specify the height (total number of rows) of an organized point cloud dataset;
• it is set to 1 for unorganized datasets (thus used to check whether a dataset is organized or not).
Example:
cloud.width = 640; // Image-like organized structure, with 480 rows and 640 columns,
cloud.height = 480; // thus 640*480=307200 points total in the dataset
Example:
cloud.width = 307200;
cloud.height = 1; // unorganized point cloud dataset with 307200 points
• points (std::vector<PointT>)
Contains the data array where all the points of type PointT are stored. For example, for a cloud containing XYZ data, points contains a vector of pcl::PointXYZ elements:
pcl::PointCloud<pcl::PointXYZ> cloud;
std::vector<pcl::PointXYZ> data = cloud.points;
• is_dense (bool)

Specifies if all the data in points is finite (true), or whether the XYZ values of certain points might contain Inf/NaN values (false).
• sensor_origin_ (Eigen::Vector4f)
Specifies the sensor acquisition pose (origin/translation). This member is usually optional, and not used by the majority of the algorithms in PCL.
• sensor_orientation_ (Eigen::Quaternionf)
Specifies the sensor acquisition pose (orientation). This member is usually optional, and not used by the majority of the algorithms in PCL.
To simplify development, the pcl::PointCloud class contains a number of helper member functions. For example, users don't have to check if height equals 1 or not in their code in order to see if a dataset is organized or not, but instead use isOrganized():
if (!cloud.isOrganized ())
...
The PointT type is the primary point data type and describes what each individual element of points holds. PCL comes with a large variety of different point types, most explained in the Adding your own custom PointT type tutorial.
# Compiling your first code example
Until we find the right minimal code example, please take a look at the Using PCL in your own project and Writing a new PCL class tutorials to see how to compile and write code for or using PCL.
https://trueshelf.com/exercises/199/two-distance-sets/
## Two-distance sets
Let $d_1, d_2 \in \mathbb{R}$, and let $S \subset \mathbb{R}^n$ be a set of vectors such that $\| x - y \| \in \{d_1, d_2\}$ for all distinct $x, y \in S$.
• Prove that there exists such a set $S$ of size ${n \choose 2}$.
• Prove that every such set $S$ has at most $\frac{1}{2}(n+1)(n+4)$ points.
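A natural candidate for the first part (a hint, not a full solution): consider $S = \{e_i + e_j : 1 \le i < j \le n\}$, where $e_1, \dots, e_n$ is the standard basis of $\mathbb{R}^n$. Two such points are at distance $\sqrt{2}$ when their index pairs share an element and at distance $2$ when the pairs are disjoint.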
http://math.stackexchange.com/questions/161451/entropy-of-a-linear-toral-automorphism

# Entropy of a Linear Toral Automorphism
I'm trying to calculate the entropy of the Linear Toral Automorphism induced by
$$f(x,y,z)=(x,y+x,y+z)$$
This is an exercise in the Katok book.
This map has all eigenvalues equal to 1, but I do not want to use the fact that $~~ h_{top}(f)= \log (\max_i|\lambda_i|)$. I would like to use Katok's suggestion that the cardinality of separated sets grows quadratically with $n$, where $n$ is the length of the orbit segment, but I cannot see it clearly.
Hint: start by computing eigenvalues of $Df$. The largest eigenvalue will tell you about expansion rate along that tangent direction. What does this tell you about entropy? – William Jun 22 '12 at 4:04
@William: This map has all eigenvalues equal to 1. But I do not want to use that $~~ h_{top}(f)= log (max|\lambda_i|)$. – user27456 Jun 22 '12 at 16:40
Could anybody tell me what branch of math is this? And what is that entropy? Wikipedia article or something similar will suffice. – Yrogirg Jun 25 '12 at 6:13
@ Yrogirg: Look at this page, maths.bristol.ac.uk/~maxcu/DynSysErgTh.html . For something more specific look at the reading titled topological entropy – user27456 Jun 25 '12 at 15:57
We have $$f^n(x,y,z)=(x,\,y+nx,\,z+ny+\tbinom n 2 x)$$ Taking $\|\cdot\|_\infty$ as a metric on $(\mathbb R/\mathbb Z)^3$, this implies $$d_n(a,b) \le (1+n+n(n-1)/2) \|a-b\|_\infty$$ where $d_n$ is the maximum distance between the two orbits $(a,f(a),\dots,f^n(a))$.
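One can sanity-check this closed form numerically, since $f$ is induced by an integer matrix (a sketch; the matrix below is just $f$ written in coordinates):

```python
import numpy as np
from math import comb

# f(x, y, z) = (x, y + x, y + z) acting on column vectors (x, y, z)
M = np.array([[1, 0, 0],
              [1, 1, 0],
              [0, 1, 1]])
n = 7
Mn = np.linalg.matrix_power(M, n)
expected = np.array([[1,          0, 0],
                     [n,          1, 0],
                     [comb(n, 2), n, 1]])   # matches f^n above
assert (Mn == expected).all()
```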
So an $(n,\varepsilon)$-separated set must be $(0,\Omega(\varepsilon/n^2))$-separated (in other words, the metric $d_n$ grows at most quadratically) and therefore, since we are in dimension $d=3$, for fixed $\varepsilon$ its cardinality grows as $O(n^{2d})$. Since $h_{top}(f)$ is obtained from $\limsup_n \frac{1}{n}\log$ of this cardinality, and $\frac{1}{n}\log O(n^{2d}) \to 0$, this suffices to conclude that $h_{top}(f)=0$.
Note that the cardinality itself does grow faster than quadratically, as can be seen with the following $n^2(n-1)/2$ points $M_{uv}$:
$$x = u/\tbinom{n}{2}, \qquad y = v/n, \qquad z = 0.$$
If $d_n(M_{uv},M_{u'v'})<\varepsilon<1/4$ for some odd $n$, then we have
$$|n(y'-y)+\tbinom{n}{2}(x'-x)|<\varepsilon \;\Rightarrow\; |v'-v+u'-u|<1 \;\Rightarrow\; v'-v = -(u'-u)$$
and
$$\left|\tfrac{n+1}{2}(y'-y)+\tbinom{(n+1)/2}{2}(x'-x)\right|<\varepsilon \;\Rightarrow\; |u'-u|<\frac{n}{(n+1)/4}\,\varepsilon<1 \;\Rightarrow\; u=u',\; v=v'.$$
So we have a set of $\Omega(n^3)$ points that is $(n,\varepsilon)$-separated.
https://hub.hamamatsu.com/us/en/ask-an-engineer/infrared-products/mid-infrared-questions-and-answers.html

# Mid-infrared Questions & Answers
## How do I choose among an MIR LED, a QCL, and a xenon flash lamp?
There’s a famous phrase in the spectroscopy world, “fit for purpose.” This is a perfect mantra for selecting components. Selecting the right light source starts with the desired application. What are you trying to achieve with this instrument? What is the proper wavelength and power output? What market is it serving? What is the target final cost? The answers to those questions along with the explanations below should lead to an answer.
MIR LEDs
• Wavelength: 3.3 μm (CH₄), 3.9 μm (reference light), and 4.3 μm (CO₂) are provided.
• Higher output, higher reliability, lower power consumption, faster response than lamps
QCLs
• Wavelength: 4 μm to 10 μm band
• High resolution, high output, high reliability, high-speed response
Xenon (Xe) flash lamps
• Wavelength: 0.2 μm to 5.0 μm (continuous spectrum)
• High output pulse emission in the microsecond order
• Long life
MIR LEDs
LEDs boast reliable lifetimes as well as low power consumption. They also come at a relatively low cost compared to other MIR light sources. The main tradeoff lies with power output, so these units are not intended for analytical accuracy. Portable gas monitors would be a great example application for these components.
QCLs
Quantum cascade lasers (QCLs) are the gold standard for generating light anywhere between 4-10 microns. Our DFB (distributed feedback) models provide industry-leading linewidth resolution, enabling possible ppb measurements. QCLs also have reliable lifetimes while providing high power output. All this performance comes at a much higher cost, and power requirements for lasing start at around 500mA. An instrument using a QCL will be cost intensive and require quite a bit of expertise to pull off, but nothing will touch the sensitivity it can achieve.
Xenon (Xe) flash lamps
For wide spectral output and high frequency operation, look no further than the Xe flash lamp. With output ranging from 0.2 microns to 5+ microns, these lamps make it possible to create an instrument that detects multiple gases. However, these lamps should not be considered for measuring very low concentrations due to the wide output and stability. Although Xe flash lamps with emission out to 7 microns have been developed, the relative output past 5 microns remains low. Measurements further into the fingerprint region would be very difficult to achieve.
## What is D* (D-star)?
D* is known as the “detectivity” of a detector, or the photosensitivity per unit active area in a detector. As seen in the equation below, the lower the noise equivalent power (NEP) of a detector, the higher the D* (and vice versa). NEP is the minimum power of signal needed for a detector to overcome its noise floor, or for SNR to equal 1. The lower this value, the higher the sensitivity of a detector. This relationship shows, therefore, that the higher the D*, the higher the sensitivity as well. We can also see from the equation below that the smaller the detector active area (A), the higher the D*.
$$D^{*}=\frac{\sqrt{A}}{\mathrm{NEP}}$$
D* takes into account more than just a detector’s active area, however. It is also a function of the temperature [K] or wavelength [µm] of a radiant source, the chopping frequency [Hz], and the bandwidth [Hz] of a detector—as seen in the expression of detectivity as “D* (A, B, C),” with each letter corresponding to the three characteristics mentioned.
What makes D* so useful is that it allows a comparison of different active area sizes and chemistries. While D* provides a better gauge of sensitivity, detector characteristics such as light wavelength, response time, active area shape, and number of elements, as well as the necessary electronics, should be taken into account when selecting an infrared detector.
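As a quick illustration of the formula (the detector numbers below are hypothetical, not Hamamatsu specifications):

```python
import math

def d_star(area_cm2, nep_w_per_sqrt_hz):
    """Detectivity D* in cm*Hz^0.5/W from active area and NEP."""
    return math.sqrt(area_cm2) / nep_w_per_sqrt_hz

# Hypothetical detector: 1 mm x 1 mm active area (0.01 cm^2), NEP = 1e-12 W/Hz^0.5
print(f"{d_star(0.01, 1e-12):.2e}")  # 1.00e+11 cm*Hz^0.5/W
```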
When the applications demand more sensitivity, cooling serves the function of lowering the noise floor of a detector without reducing its quantum efficiency (QE). As a result, the lower the temperature, the higher the D* at a certain input power. It’s important to remember that cooling drives up cost and complexity, so it’s best to consider uncooled detectors first. Hamamatsu offers a wide range of uncooled detectors as well as detectors with multi-stage thermoelectric (TEC) cooling and liquid nitrogen cooling.
## Give me the short answer to “Why should I choose InAsSb?”
Photovoltaic operation typically leads to slower measurements. Hamamatsu’s InAsSb detectors mitigate that situation by boasting a rise time on the order of nanoseconds. In addition, many infrared detectors contain materials that are not RoHS compliant (mercury and lead), but InAsSb material is fully RoHS compliant. In uncooled applications, InAsSb is a strong contender for providing big cost advantages as well.
## Meet the engineers
Whether she has a camera in hand, or is working alongside her University Support Group team to solve a problem, Stephanie Butron tries to see things from a different perspective—like seeing the invisible side of mid-infrared. As an Applications Engineer at Hamamatsu Corporation, she enjoys learning more about the variety of projects and applications Hamamatsu's customers are working on, and understanding better how Hamamatsu can help. When she isn't helping people focus in on their research, Stephanie enjoys focusing her camera lens on the sights around her. From still lives to portraits, Stephanie tries to find new ways to look at the world.
https://www.groups.ma.tum.de/statistics/veranstaltungen/seminar-on-statistics-and-data-science/

# Seminar on Statistics and Data Science
This seminar series is organized by the research group in mathematical statistics and features talks on advances in methods of data analysis, statistical theory, and their applications.
The speakers are external guests as well as researchers from other groups at TUM.
All talks in the seminar series are listed in the Munich Mathematical Calendar.
The seminar takes place in room BC1 2.01.10 under the current rules and simultaneously via zoom. To stay up-to-date about upcoming presentations please join our mailing list. You will receive an email to confirm your subscription.
Join the seminar. Please use your real name for entering the session. The session will start roughly 10 minutes prior to the talk.
(no entries)
## Previous talks
### 19.01.2022 12:30 Tobias Windisch (Robert Bosch GmbH): Learning Bayesian networks on high-dimensional manufacturing data
In our manufacturing plants, many tens of thousands of components for the automotive industry, like cameras or brake boosters, are produced each day. For many of our products, thousands of quality measurements are collected and individually checked during assembly. Understanding the relations and interconnections between those measurements is key to achieving high production uptime and keeping scrap to a minimum. Graphical models, like Bayesian networks, provide a rich statistical framework for investigating these relationships, not least because they represent them as a graph. However, learning their graph structure is an NP-hard problem, and most existing algorithms are designed to deal with either a small number of variables or a small number of observations. On our datasets, with many thousands of variables and many hundreds of thousands of observations, classic learning algorithms don't converge. In this talk, we show how we use an adapted version of the NOTEARS algorithm that uses mixture density neural networks to learn the structure of Bayesian networks even for very high-dimensional manufacturing data.
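For orientation, a minimal numpy/scipy sketch of the key ingredient of NOTEARS, the smooth acyclicity function of Zheng et al. (2018); this is an illustration, not code from the talk:

```python
import numpy as np
from scipy.linalg import expm

def notears_acyclicity(W):
    """h(W) = tr(exp(W o W)) - d, where o is the elementwise product.
    h(W) = 0 iff the weighted adjacency matrix W describes a DAG;
    NOTEARS minimizes a fit loss subject to h(W) = 0."""
    d = W.shape[0]
    return np.trace(expm(W * W)) - d

W_dag = np.array([[0.0, 1.5], [0.0, 0.0]])  # edge 1 -> 2 only: acyclic
W_cyc = np.array([[0.0, 1.5], [0.8, 0.0]])  # edges 1 <-> 2: a cycle
print(notears_acyclicity(W_dag))  # 0.0
print(notears_acyclicity(W_cyc))  # > 0
```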
### 15.12.2021 12:30 Dennis Leung (University of Melbourne): ZAP: z-value adaptive procedures for false discovery rate control with side information
In the last five years, adaptive multiple testing with covariates has gained much traction. It has been recognized that the side information provided by auxiliary covariates which are independent of the primary test statistics under the null can be used to devise more powerful testing procedures for controlling the false discovery rate (FDR). For example, in the differential expression analysis of RNA-sequencing data, the average read counts across samples provide useful side information alongside individual p-values, as the genetic markers with higher read counts should be more promising to display differential expression. However, for two-sided hypotheses, the usual data processing step that transforms the primary test statistics, generally known as z-values, into p-values not only leads to a loss of information carried by the main statistics but can also undermine the ability of the covariates to assist with the FDR inference. Motivated by this and building upon recent theoretical advances, we develop ZAP, a z-value based covariate-adaptive methodology. It operates on the intact structural information encoded jointly by the z-values and covariates, to mimic an oracle testing procedure that is unattainable in practice; the power gain of ZAP can be substantial in comparison with p-value based methods, as demonstrated by our simulations and real data analyses.
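For contrast with covariate-adaptive methods such as ZAP, here is a minimal sketch of the classical p-value-based Benjamini–Hochberg step-up procedure that such methods aim to improve upon (illustrative only, not part of the abstract):

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Classical BH step-up; returns a boolean mask of rejected hypotheses."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    thresh = alpha * np.arange(1, m + 1) / m      # k * alpha / m
    below = p[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True                       # reject the k smallest p-values
    return reject

p = np.array([0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216])
print(benjamini_hochberg(p))  # rejects only the two smallest p-values at alpha = 0.05
```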
### 08.12.2021 12:30 Niels Richard Hansen (University of Copenhagen): Conditional independence testing based on partial copulas
The partial copula provides a method for describing the dependence between two real valued random variables X and Y conditional on a third random vector Z in terms of nonparametric residuals. These residuals are in practice computed via models of the conditional distributions X|Z and Y|Z. In this talk I will show how the nonparametric residuals can be combined to give a valid test of conditional independence provided that nonparametric estimators of the conditional distributions converge at a sufficient rate. The rates can be realized via estimators based on quantile regression. If time permits, I will show how the test can be generalized to conditional local independence (Granger noncausality) in a time dynamic framework.
t.b.a.
### 17.11.2021 12:30 Mladen Kolar (University of Chicago): Estimation and Inference for Differential Networks
We present a recent line of work on estimating differential networks and conducting statistical inference about parameters in a high-dimensional setting. First, we consider a Gaussian setting and show how to directly learn the difference between the graph structures. A debiasing procedure will be presented for construction of an asymptotically normal estimator of the difference. Next, building on the first part, we show how to learn the difference between two graphical models with latent variables. Linear convergence rate is established for an alternating gradient descent procedure with correct initialization. Simulation studies illustrate performance of the procedure. We also illustrate the procedure on an application in neuroscience. Finally, we will discuss how to do statistical inference on the differential networks when data are not Gaussian.
### 10.11.2021 12:15 Michaël Lalancette (University of Toronto): The extremal graphical lasso
Multivariate extreme value theory is interested in the dependence structure of multivariate data in the unobserved far tail regions. Multiple characterizations and models exist for such extremal dependence structure. However, statistical inference for those extremal dependence models uses merely a fraction of the available data, which drastically reduces the effective sample size, creating challenges even in moderate dimension. Engelke & Hitz (2020, JRSSB) introduced graphical modelling for multivariate extremes, allowing for enforced sparsity in moderate- to high-dimensional settings. Yet, the model selection and estimation tools that appear therein are limited to simple graph structures. In this work, we propose a novel, scalable method for selection and estimation of extremal graphical models that makes no assumption on the underlying graph structure. It is based on existing tools for Gaussian graphical model selection such as the graphical lasso and the neighborhood selection approach of Meinshausen & Bühlmann (2006, Ann. Stat.). Model selection consistency is established in sparse regimes where the dimension is allowed to be exponentially larger than the effective sample size. Preliminary simulation results appear to support the theoretical results.
### 18.10.2021 14:00 Bernd Sturmfels (MPI Leipzig): Algebraic Statistics with a View towards Physics
We discuss the algebraic geometry of maximum likelihood estimation from the perspective of scattering amplitudes in particle physics. A guiding example is the moduli space of n-pointed rational curves. The scattering potential plays the role of the log-likelihood function, and its critical points are solutions to rational function equations. Their number is an Euler characteristic. Soft limit degenerations are combined with certified numerical methods for concrete computations.
### 22.09.2021 12:15 Hongjian Shi (TUM): On universally consistent and fully distribution-free rank tests of vector independence
Rank correlations have found many innovative applications in the last decade. In particular, suitable rank correlations have been used for consistent tests of independence between pairs of random variables. Using ranks is especially appealing for continuous data as tests become distribution-free. However, the traditional concept of ranks relies on ordering data and is, thus, tied to univariate observations. As a result, it has long remained unclear how one may construct distribution-free yet consistent tests of independence between random vectors. This is the problem addressed in this paper, in which we lay out a general framework for designing dependence measures that give tests of multivariate independence that are not only consistent and distribution-free but which we also prove to be statistically efficient. Our framework leverages the recently introduced concept of center-outward ranks and signs, a multivariate generalization of traditional ranks, and adopts a common standard form for dependence measures that encompasses many popular examples. In a unified study, we derive a general asymptotic representation of center-outward rank-based test statistics under independence, extending to the multivariate setting the classical Hájek asymptotic representation results. This representation permits direct calculation of limiting null distributions and facilitates a local power analysis that provides strong support for the center-outward approach by establishing, for the first time, the nontrivial power of center-outward rank-based tests over root-n neighborhoods within the class of quadratic mean differentiable alternatives.
### 14.04.2021 12:15 Mona Azadkia (ETH Zurich): A Simple Measure Of Conditional Dependence
We propose a coefficient of conditional dependence between two random variables $Y$ and $Z$, given a set of other variables $X_1, \cdots , X_p$, based on an i.i.d. sample. The coefficient has a long list of desirable properties, the most important of which is that under absolutely no distributional assumptions, it converges to a limit in $[0, 1]$, where the limit is 0 if and only if $Y$ and $Z$ are conditionally independent given $X_1, \cdots , X_p$, and is 1 if and only if $Y$ is equal to a measurable function of $Z$ given $X_1, \cdots , X_p$. Moreover, it has a natural interpretation as a nonlinear generalization of the familiar partial $R^2$ statistic for measuring conditional dependence by regression. Using this statistic, we devise a new variable selection algorithm, called Feature Ordering by Conditional Independence (FOCI), which is model-free, has no tuning parameters, and is provably consistent under sparsity assumptions. A number of applications to synthetic and real datasets are worked out.
### 14.04.2021 13:15 Armeen Taeb (ETH Zurich): Latent-variable modeling: causal inference and false discovery control
Many driving factors of physical systems are latent or unobserved. Thus, understanding such systems and producing robust predictions crucially relies on accounting for the influence of the latent structure. I will discuss methodological and theoretical advances in two important problems in latent-variable modeling. The first problem focuses on developing false discovery methods for latent-variable models that are parameterized by low-rank matrices, where the traditional perspective on false discovery control is ill-suited due to the non-discrete nature of the underlying decision spaces. To overcome this challenge, I will present a geometric reformulation of the notion of a discovery as well as a specific algorithm to control false discoveries in these settings. The second problem aims to estimate causal relations among a collection of observed variables with latent effects. Given access to data arising from perturbations (interventions), I will introduce a regularized maximum-likelihood framework that provably identifies the underlying causal structure and improves robustness to distributional changes. Throughout, I will explore the utility of the proposed methodologies for real-world applications such as water resource management.
### 24.02.2021 12:15 Elisabeth Ullmann (TUM): Multilevel estimators for models based on partial differential equations
Many mathematical models of physical processes contain uncertainties due to incomplete models or measurement errors and lack of knowledge associated with the model inputs. We consider processes which are formulated in terms of classical partial differential equations (PDEs). The challenge and novelty is that the PDEs contain random coefficient functions, e.g., some transformations of Gaussian random fields. Random PDEs are much more flexible and can model more complex situations compared to classical PDEs with deterministic coefficients. However, each sample of a PDE-based model is extremely expensive. To alleviate the high costs the numerical analysis community has developed so-called multilevel estimators which work with a hierarchy of PDE models with different resolution and cost. We review the basic idea of multilevel estimators and discuss our own recent contributions: i) a multilevel best linear unbiased estimator to approximate the expectation of a scalar output quantity of interest associated with a random PDE [1, 2], ii) a multilevel sequential Monte Carlo method for Bayesian inverse problems [3], iii) a multilevel sequential importance method to estimate the probability of rare events [4].

[1] D. Schaden, E. Ullmann: On multilevel best linear unbiased estimators. SIAM/ASA J. Uncert. Quantif. 8(2), pp. 601-635, 2020
[2] D. Schaden, E. Ullmann: Asymptotic analysis of multilevel best linear unbiased estimators. arXiv:2012.03658
[3] J. Latz, I. Papaioannou, E. Ullmann: Multilevel Sequential² Monte Carlo for Bayesian Inverse Problems. J. Comput. Phys., 368, pp. 154-178, 2018
[4] F. Wagner, J. Latz, I. Papaioannou, E. Ullmann: Multilevel sequential importance sampling for rare event estimation. SIAM J. Sci. Comput. 42(4), pp. A2062–A2087, 2020
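To make the multilevel idea concrete, here is a toy sketch (an added illustration, not the speaker's code): it estimates E[exp(W)] for W ~ N(0,1) through the telescoping sum E[P_0] + sum over l of E[P_l - P_{l-1}], spending many samples on the cheap coarse levels and few on the expensive fine ones.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def f_level(w, l):
    # Level-l approximation of exp(w): Taylor series with l + 2 terms.
    return sum(w**k / math.factorial(k) for k in range(l + 2))

def mlmc(L, n_per_level):
    # Coarse-level term plus level corrections; each correction evaluates
    # both levels on the SAME samples, which is what keeps its variance small.
    total = f_level(rng.standard_normal(n_per_level[0]), 0).mean()
    for l in range(1, L + 1):
        w = rng.standard_normal(n_per_level[l])
        total += (f_level(w, l) - f_level(w, l - 1)).mean()
    return total

# Approximates e**0.5 = 1.6487... up to the level-4 truncation bias.
print(mlmc(4, [100000, 20000, 4000, 800, 160]))
```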
### 18.02.2021 17:00 Dorota Kurowicka (TU Delft): Simplified R-vine based forward regression
An extension of the D-vine based forward regression procedure to an R-vine forward regression is proposed. In this extension any R-vine structure can be taken into account. Moreover, a new heuristic is proposed to determine which R-vine structure is the most appropriate to model the conditional distribution of the response variable given the covariates. It is shown in simulations that the performance of the heuristic is comparable to the D-vine based approach. Furthermore, it is explained how to extend the heuristic to situations where more than one response variable is of interest. Finally, the proposed R-vine regression is applied to perform a stress analysis on the manufacturing sector, which shows its impact on the whole economy. Reference: Zhu, Kurowicka and Nane. https://doi.org/10.1016/j.csda.2020.107091
### 03.02.2021 16:00 Holger Dette (Ruhr-Universität Bochum): Testing relevant hypotheses in functional time series via self-normalization
In this paper we develop methodology for testing relevant hypotheses in a tuning-free way. Our main focus is on functional time series, but extensions to other settings are also discussed. Instead of testing for exact equality, for example for the equality of two mean functions from two independent time series, we propose to test a *relevant* deviation under the null hypothesis. In the two sample problem this means that an $L^2$-distance between the two mean functions is smaller than a pre-specified threshold. For such hypotheses self-normalization, which was introduced by Shao (2010) and is commonly used to avoid the estimation of nuisance parameters, is not directly applicable. We develop new self-normalized procedures for testing relevant hypotheses and demonstrate the particular advantages of this approach in the comparison of eigenvalues and eigenfunctions.
http://math.stackexchange.com/tags/combinatorics/hot | # Tag Info
## Hot answers tagged combinatorics
6
One easy way is to observe in the binomial expansion of $(1+1)^{2n} = \sum\limits_{k=0}^{2n} \binom{2n}{k}$, the term $\binom{2n}{n}$ is the largest one among all the $2n+1$ terms of the form $\binom{2n}{k}$. This gives us a bound $$\frac{2^{2n}}{2n+1} \le \binom{2n}{n} \le 2^{2n} \quad\implies\quad \log 2 - \frac{\log(2n+1)}{2n} \le ...$$
5
Alternative computation: $$\begin{align*} a_n &= \frac{1}{2n} \log \binom{2n}{n} = \frac{1}{2n} \log \frac{(2n)!}{(n!)^2} \\ &= \frac{1}{2n} \sum_{k=1}^n \log \frac{n+k}{k} = \frac{1}{2n} \sum_{k=1}^n \log \Bigl( 1 + \frac{1}{k/n} \Bigr). \end{align*}$$ Therefore, as $n \to \infty$, we get a Riemann sum: $$\lim_{n \to \infty} a_n = ...$$
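A quick numerical check of the bound from the first answer above (an added illustration, not part of the original answers):

```python
import math

# Verify 2^(2n)/(2n+1) <= C(2n, n) <= 2^(2n) for a few values of n.
for n in (1, 5, 10, 50):
    central = math.comb(2 * n, n)
    assert 4**n / (2 * n + 1) <= central <= 4**n
    print(n, central)
```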
3
You mean like this? http://ruwix.com/online-rubiks-cube-solver-program/solution.php?cube=0343515641165422615412533412316442361454656232126363525&x=1 To find the solution just click "play".
3
One could equally ask for the probability with ordered or unordered pairs, with different results. It suffices to count the number of pairs $a,b$ satisfying $x=ab\vee x=ba$, where $x\in G$ is fixed. Say we wish to count ordered pairs of not necessarily distinct elements. Notice the equivalence $x=ab\vee x=ba\iff b=a^{-1}x\vee b=xa^{-1}$. If we naively ...
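A brute-force illustration of this count in the small group $S_3$ (an added toy example, with composition as the group operation):

```python
from itertools import permutations

def compose(p, q):
    # (p . q)(i) = p[q[i]], for permutations stored as tuples.
    return tuple(p[i] for i in q)

G = list(permutations(range(3)))  # the symmetric group S_3
x = G[1]                          # a fixed transposition
count = sum(1 for a in G for b in G
            if compose(a, b) == x or compose(b, a) == x)
print(count)  # 6 + 6 - 2 = 10: |G| + |G| minus |centralizer of x|
```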
3
You could use the binomial theorem twice. Let $[x^{k}]$ denote the coefficient of $x^k$ from the polynomial $P(x)=\sum_{j=0}^{n}a_jx^j$, i.e. $[x^{k}]P(x)=a_{k}$. Now, ...
3
Here is what Wikipedia has to say about the matter: Hamiltonicity of the hypercube is tightly related to the theory of Gray codes. More precisely there is a bijective correspondence between the set of $n$-bit cyclic Gray codes and the set of Hamiltonian cycles in the hypercube $Q_n$. There is a reference to a 1963 paper by W. H. Mills in Proc. AMS.
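For illustration (an addition to the quoted answer), the standard reflected Gray code makes this correspondence easy to see in code:

```python
def gray(n):
    # n-bit reflected Gray code: consecutive words differ in exactly one bit.
    return [i ^ (i >> 1) for i in range(2**n)]

codes = gray(3)
print([format(c, "03b") for c in codes])
# Each step (including the wrap-around) flips a single bit, so the sequence
# traces a Hamiltonian cycle on the hypercube Q_3.
assert all(bin(a ^ b).count("1") == 1
           for a, b in zip(codes, codes[1:] + codes[:1]))
```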
3
A number is divisible by $8$ if the last three digits are divisible by $8$. Now, we can arrange the first $5$ digits of our answer in $5^5$ ways, because each of the positions can take $1$ of $5$ values. Now, our problem reduces to the following. How many three-digit numbers formed with $\{1,2,3,4,5\}$ are divisible by $8$? We can enumerate all the ...
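A brute-force enumeration (an added sketch; it matches the count of 13 given in a later answer on this page):

```python
from itertools import product

# Three-digit endings with digits from {1,2,3,4,5} (repetition allowed)
# that are divisible by 8.
endings = [100*a + 10*b + c for a, b, c in product(range(1, 6), repeat=3)]
divisible = sorted(x for x in endings if x % 8 == 0)
print(len(divisible), divisible)  # 13 such endings
```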
3
We know that the string will take the form of $$*S█S█S█S*$$ where $█$ MUST have at least one character and $*$ can be of any length (even 0). I would suggest the following steps: Find the number of ways you can put the $S$s (they can be in positions $(1,3,5,7)$, $(2,5,8,11)$, $(1,4,6,9)$, etc.) Find the number of different strings you can make with ...
2
Usual tricks to get started for any problem Try a simpler problem Try brute-force calculation Look for other problems that this one resembles Have you any ideas on how to do any of these three things? For counting problems in particular, one trick that is: Try counting the complement: how many things are there in all, and how many aren't the kind of ...
2
If the contestants are all distinct, you can take account of that by changing your coefficients: Of the $5$ entrants from each state, there are $\binom{5}{0}$ ways to choose 0 of them, $\binom{5}{1}$ ways to choose $1$, $\binom{5}{2}$ ways to choose $2$, and so on. So, the generating function for contestants from a single state is $$...$$
2
An explicit but not very useful formula would be $$ \sum_{ c=1}^C {\sum_{ 0 \le n_i \le N_i \atop {\sum {n_i}=c}}} \frac{c!}{\prod n_i!} $$ where $C$ is the maximum word length and $N_i$ counts the available repetitions of each letter. Java code (not very elegant or efficient): https://ideone.com/4lLX2u
2
For a you have a stars and bars problem where you have to have exactly $10-1=9$ dividers. For b you multiply by the factorial of each component. For c the Wikipedia article shows how to modify the number.
2
An equivalent way of describing the game is that, starting from 1, the two players simply take turns multiplying the current product by any number between 2 and 9. It sounds as if the winner is defined to be the player who first produces a product that is greater than or equal to n. Naming the player who is supposed to win and asking whether they ...
2
There must be at least 14 participants signed up for one of the three classes (by the pigeonhole principle), and this is the best possible n if one means simply that some (single) combination of a class must attract at least n participants. However if instead we mean to count participants whose entire signup selections agree, then the number n will ...
2
This is simpler if you rewrite your left hand side as $$ \sum_{k=0}^n\binom{2n}{2k}^2-\sum_{k=0}^{n-1}\binom{2n}{2k+1}^2= \sum_{i=0}^{2n}(-1)^i\binom{2n}i^2= \sum_{i=0}^{2n}(-1)^i\binom{2n}i\binom{2n}{2n-i} $$ first. So you are looking for the coefficient of $X^{2n}$ in the product $(1-X)^{2n}(1+X)^{2n}=(1-X^2)^{2n}$, which is the same as the ...
2
HINT: $$\begin{align}f'(x) &= 3x^2 + 2ax + b \\ &= 3(x + \frac{a}{3})^2 + b - \frac{a^2}{3}\\ &\ge b - \frac{a^2}{3}\end{align}$$ Now, $f'(x) > 0 \implies f(x)$ is an increasing function (an unrelated but good question to think about: does $f(x)$ increasing $\implies f'(x) > 0$?). What can you now say about the probability for which ...
2
I. First calculate the number of words that can be formed from "CHISEL", with no restrictions. Subtract from your answer to I the number of words that contain (the "chunk" "LE" or the chunk "CHEL"). Assuming repetition of letters is not allowed, then we have 6 possible letters to choose from for the first letter of the word, 5 possible choices ...
2
Of all the numbers that are formed with 1,2,3,4,5 - the last three digits need to be divisible by 8. There are $5^3$ ways you could arrange the five numbers for the last three digits. Of these, the last three digits that are divisible by 8 are 312, 152, 512, 432, 352, 112, 232, 224, 144, 424, 344, 552, 544. A total of 13 of them, which I got by brute force ...
2
As suggested in several comments, the simplest form of Stirling approximation for $n!$ is $$\sqrt{2 \pi } e^{-n} n^{n+\frac{1}{2}}$$ (http://en.wikipedia.org/wiki/Stirling%27s_approximation) Take the logarithms and develop first $$\log\left(2n!\right) - 2\log\left(n!\right)$$ which results to be $(2 n+1) \log (2)$. The remaining is obvious. If I may, ...
2
Just a sketch of a proof; sorry, this is all I can do with the time I have right now.
If $m\in\mathbb N$ is arbitrary, then an $m$-reciprocal polynomial means a polynomial $p \in \mathbb Z\left[q\right]$ whose coefficient before $q^i$ equals its coefficient before $q^{m-i}$ for every $i\in\mathbb Z$ (this implies that the degree of $p$ is $\leq m$). Let ...
1
I'll provide a combinatorial counting argument. Without the non-consecutiveness condition, the answer would be ${n \choose 2} = \frac{n\cdot(n-1)}{2}$. Intuitively, you choose 1 number first ($n$ choices) and another one from the remaining set ($n-1$ choices) and abstract away the move sequence. Back to the original problem, for each $m \in \{2, ...,$ ...
1
First of all, some unsolicited advice: your derivation for (b) need not be so messy. It greatly simplifies things to observe that $$ f(n,m,k) = f(n-1,m,k) + f(n-1,m-1,k) - f(n-k-1,m-k,k) \tag{1} $$ holds not just when $0 \le k \le m \le n$, $k < n$. In fact, it holds for any $n \ge 0$, $m \ge 0$, $k \ge 1$, except in the two cases $m = n = k$, and $m = n =$ ...
1
Hint: There are two vowels A and E. So 4-letter words with them side by side have one of the following 6 forms: AE--, -AE-, --AE, EA--, -EA-, --EA. Given any of these forms, how many ways can we add in the consonants? This gives the total number $N$ of 4-letter words with A and E side by side. And the probability will be $N/{}^6P_4=N/360$.
1
Hint: Assume your number has $n+1$ (distinct, i.e. no repetitions allowed) digits $d_0, ... d_{n}$ (where $d_0$ denotes the unit digit that corresponds to $10^0$, $d_1$ the tenth's digit that corresponds to $10^1$ and so on), so that the number $N$ can be written as $$\sum_{i=0}^{n}10^id_i$$ Denote with $P$ the set of all the allowed permutations $\sigma$. ...
1
Nicely done! As an alternative approach, you could note that the main issue is where J, M, P are sitting. So, we could proceed casewise as follows: (1) If we seat J on an end--2 ways to do this--then that leaves 6 open seats in which we can put M, P--$\binom{6}{2}\cdot2!=\frac{6!}{4!}$ ways to do this--at which point, once those three are seated, ...
1
Consider how you can place the numbers $\{1,2,...,2n\}$ into the columns, starting with 1 and proceeding from there. We create a string of X's and Y's, which, when read from the beginning indicate where to place the next number. X means place the number in the left column, Y says place the number in the right column. Now, note that all of the ...
1
You need to be a little more careful in applying the Counting Theorem. You're correct that all $2^6=64$ colorings are fixed by the identity element, but it's only for the two rotations by $\pm\pi/3$ that only 2 colorings are fixed. For the two rotations by $\pm2\pi/3$ there are $2^2=4$ colorings that are fixed, and for the (one) rotation by $\pi$, there ...
1
If people vote reflecting their preferences (i.e. voting for their first preference candidate) then somebody who gets over 50% of votes would be the Condorcet candidate. There are other issues: in particular simple plurality systems may discourage some voters from voting for their first preference candidate, and if this happens, then somebody who gets ...
1
Find the total number of words that can be formed using the letters C-H-I-S-E-L, and subtract the number of words containing LE or CHEL, noting that no word can contain both. In order to find the number of words containing, for example, CHEL, we may consider CHEL as its own character, so that $$ n_{CHEL} = 3 \times 2 \times 1 $$ since there are $3$ ...
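One of the answers above applies the Counting Theorem to 2-colorings of a hexagon under rotation; a quick check of its fixed-point counts (an added sketch):

```python
from math import gcd

# A rotation by k positions fixes 2^gcd(k, 6) of the 2-colorings.
fixed = [2 ** gcd(k, 6) for k in range(6)]
print(fixed, sum(fixed) // 6)  # [64, 2, 4, 8, 4, 2] -> 14 distinct colorings
```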
1
$\binom{4}{1}\binom{4}{2}\binom{44}{2}+\binom{4}{1}\binom{4}{3}\binom{44}{1}+\binom{4}{1}\binom{4}{4}\binom{44}{0}+\binom{4}{2}\binom{4}{2}\binom{44}{1}+\binom{4}{2}\binom{4}{3}\binom{44}{0}+\binom{4}{3}\binom{4}{2}\binom{44}{0}$ First factor deals with kings, second with aces, third with remaining cards. It might not be the shortest route, but gives you a ...
https://fizzbuzzer.com/gridland-metro-challenge/ | ## fizzbuzzer
# Gridland Metro challenge
### Problem statement
The city of Gridland is represented as an $n \times m$ matrix where the rows are numbered from $1$ to $n$ and the columns are numbered from $1$ to $m$.
Gridland has a network of train tracks that always run in straight horizontal lines along a row. In other words, the start and end points of a train track are $(r, c_1)$ and $(r,c_2)$, where $r$ represents the row number, $c_1$ represents the starting column, and $c_2$ represents the ending column of the train track.
The mayor of Gridland is surveying the city to determine the number of locations where lampposts can be placed. A lamppost can be placed in any cell that is not occupied by a train track.
Given a map of Gridland and its train tracks, find and print the number of cells where the mayor can place lampposts.
Note: A train track may (or may not) overlap other train tracks within the same row.
Input Format
The first line contains three space-separated integers describing the respective values of $n$ (the number of rows), $m$ (the number of columns), and $k$ (the number of train tracks).
Each line $i$ of the $k$ subsequent lines contains three space-separated integers describing the respective values of $r$, $c_1$, and $c_2$ that define a train track.
Constraints
$1 \le n,m \le 10^9$
$0 \le k \le 1000$
$1 \le r \le n$
$1 \le c_1 \le c_2 \le m$
Output Format
Print a single integer denoting the number of cells where the mayor can install lampposts.
Sample Input
4 4 3
2 2 3
3 1 4
4 4 4
Sample Output
9
Explanation
In the diagram above, the yellow cells denote the first train track, green denotes the second, and blue denotes the third. Lampposts can be placed in any of the nine red cells, so we print $9$ as our answer.
### Solution Concept
The maximum number of cells where the mayor can place lampposts is $total = nm$. Next we group each input $(row, c_1, c_2)$ by $row$. For that we use a dictionary map with map[row] holding a list of intervals $(c_1, c_2)$ for $row$. For instance, if our input looks as follows:
4 4 4
2 2 3
3 1 4
4 4 4
3 2 3
our map will look as follows:
map[1] -> []
map[2] -> [[2,3]]
map[3] -> [[1,4],[2,3]]
map[4] -> [[4,4]]
Note that the first row is our problem configuration ($n,m$ and $k$).
This map can be constructed in $\mathcal{O}(k)$ time.
Next, we iterate over our map and merge the interval list in case of any overlapping intervals. For instance, if we have the following interval list:
[1,3],[2,6],[8,10],[15,18]
the merged list will look as follows:
[1,6],[8,10],[15,18]
Next, once we have merged the interval list for each $row$ in map, all that is left to do is to sum the number of columns each interval list occupies for each $row$ in map and subtract it from $total$. For example, the following interval list:
[1,6],[8,10],[15,18]
occupies $13$ columns.
Finally, return $total$.
```python
from collections import defaultdict


class Interval:
    """An interval with a start and an end position."""

    def __init__(self, start, end):
        self.start = start
        self.end = end

    def __eq__(self, other):
        return self.start == other.start

    def __lt__(self, other):
        return self.start < other.start

    def __repr__(self):
        return "[%s, %s]" % (self.start, self.end)


def merge(intervals):
    """Given a collection of intervals, merges all overlapping intervals.

    Sample input:  [1,3],[2,6],[8,10],[15,18]
    Output:        [1,6],[8,10],[15,18]
    """
    # Sort all intervals in decreasing order of start position.
    intervals.sort(reverse=True)
    cursor = 0  # number of disjoint intervals kept so far
    for i in range(len(intervals)):
        # intervals[i] starts no later than every kept interval, so it
        # overlaps the last kept one exactly when its end reaches that
        # interval's start; keep merging while that is the case.
        while cursor != 0 and intervals[i].end >= intervals[cursor - 1].start:
            intervals[i].start = min(intervals[cursor - 1].start, intervals[i].start)
            intervals[i].end = max(intervals[cursor - 1].end, intervals[i].end)
            cursor -= 1
        intervals[cursor] = intervals[i]
        cursor += 1
    return intervals[:cursor]


def countOccupied(intervals):
    """Given a collection of disjoint intervals, returns the total space
    occupied by those intervals.

    Sample input:  [1,6],[8,10],[15,18]
    Output:        13
    """
    ans = 0
    for interval in intervals:
        ans += interval.end - interval.start + 1
    return ans


def main():
    with open("in.txt") as f:
        n, m, k = (int(x) for x in f.readline().split())
        map = defaultdict(list)  # map[row] -> intervals of the tracks in that row
        for _ in range(k):
            row, c1, c2 = (int(x) for x in f.readline().split())
            map[row].append(Interval(c1, c2))
    total = n * m
    for row in map:
        map[row] = merge(map[row])
        total -= countOccupied(map[row])
    print(total)


if __name__ == "__main__":
    main()
```
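For example, with the sample input above saved as in.txt, the script prints $9$: the merged tracks occupy $2 + 4 + 1 = 7$ cells, and $4 \times 4 - 7 = 9$ cells remain for lampposts.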
https://plainmath.net/high-school-statistics/6016-make-a-scatterplot-for-the-data-height-and-weight-of-femalesheight | sagnuhh
2021-03-06
Make a scatterplot for the data.
Height and Weight of Females
$\begin{array}{|cccccccccc|}\hline \text{Height (in.):}& 58& 60& 62& 64& 65& 66& 68& 70& 72\\ \text{Weight (lb):}& 115& 120& 125& 133& 136& 115& 146& 153& 159\\ \hline\end{array}$
SoosteethicU
Expert
Step 1
Scatterplot
Height is on the horizontal axis and Weight is on the vertical axis.
The heights range from 58 to 72, thus an appropriate scale for the horizontal axis is from 50 to 80.
The weights range from 115 to 159, thus an appropriate scale for the vertical axis is from 105 to 170.
Result:
Height is on the horizontal axis and Weight is on the vertical axis.
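A minimal sketch of this plot in Python with matplotlib (an added illustration; the axis limits follow the scales chosen above):

```python
import matplotlib.pyplot as plt

height = [58, 60, 62, 64, 65, 66, 68, 70, 72]           # in.
weight = [115, 120, 125, 133, 136, 115, 146, 153, 159]  # lb

plt.scatter(height, weight)
plt.xlabel("Height (in.)")
plt.ylabel("Weight (lb)")
plt.xlim(50, 80)
plt.ylim(105, 170)
plt.title("Height and Weight of Females")
plt.show()
```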
https://studyadda.com/notes/7th-class/science/soil/soil/6062 | # 7th Class Science Soil Soil
Soil
Category : 7th Class
Learning Objectives
1. To understand concept of soil formation
2. To understand soil profile and composition of soil
3. To learn about different types of soil
4. To study soil erosion, its causes and prevention
5. To learn about soil pollution, its sources and control
Soil is a precious gift from nature. Soil supports life on earth. Most people think of soil as a layer of dirt and mud. However, this layer of mud and dirt is actually filled with life. The food we eat, the fibre we use to make fabric, and the habitat of various organisms are all provided by soil. If you closely observe a freshly dug pit, you may see various creatures like earthworms, ants and beetles. Soil provides nutrients to plants and supports their growth. All living organisms depend directly or indirectly on soil. Let's learn more about soil.
SOIL
The mixture of rock particles and humus is called soil. Soil is an important natural resource. It contains water, dissolved substances, mineral salts and living organisms. Soil forms a very thin layer on the surface of the earth, usually no more than 3 to 4 m deep.
Note: Humus is a brown or black organic substance formed from decaying plant remains or animal matter. It determines the fertility of soil. It is porous in nature and increases the ability of soil to retain water.
SOIL FORMATION
Soil is formed from parent rock material over millions of years by a process of weathering. Weathering is the process of breaking down of rock present on the surface of earth into fine particles.
Weathering Occurs by Two Main Processes
(a) Physical weathering, which is caused by physical phenomena like atmospheric changes (heating, cooling, wetting-drying etc).
(b) Biological weathering, which involves breaking down of rocks by the action of living organisms.
Do you know?
Earthworms' burrows act as tunnels which allow water to move quickly and provide pathways for roots to grow. They also decompose dead plant and animal matter. Their castings are valuable as fertilizer.
SOIL PROFILES
A soil profile is a vertical section through the different layers of the soil. The various layers are called horizons. Each layer differs in colour, depth and chemical composition. Generally we see only the top surface of the soil, not the layers below it. A soil profile can be seen while digging a well or laying the foundation of a building. The soil profile, i.e. the various layers of soil, can also be observed in a deep cut through the soil, at the sides of a road on a hill, or at a steep river bank. Typically, four distinct soil layers can be seen.
A-Horizon
The uppermost horizon is dark in colour. It is rich in humus and minerals. The humus makes the soil fertile and provides nutrients to growing plants. It is generally a soft, porous layer and can retain more water. It is also called top-soil or the A-horizon.
Functions of Top-soil or A-horizon
(i) It provides shelter for many living organisms such as worms, rodents, moles and beetles.
(ii) The roots of small plants are embedded entirely in the top-soil.
B - Horizon
The middle layer, B-horizon or subsoil, is the layer next to the top-most soil or A-horizon. It contains less humus but more minerals. It is generally harder and more compact.
C- Horizon
The third layer, or C-horizon, is the layer below the B-horizon and is made up of small lumps of rocks with cracks and crevices. It is difficult to dig beyond this layer.
R- Horizon
Bedrock, or R-horizon, is the layer below the C-horizon. It is hard and difficult to dig with a spade. It mainly consists of parent rock. It undergoes weathering.
COMPOSITION OF SOIL
Main components of soil are:
(i) Soil particles like sand, silt, clay, gravel etc.
(ii) Humus, an organic matter formed by decomposition of dead organisms.
(iii) Air, Water, Soil organisms.
The difference in the proportion of these components leads to the formation of different kinds of soil.
Do you know?
When rainwater sinks underground, it reaches the impervious layer (R-horizon) and accumulates over it. This water is called groundwater. The upper level of this layer which is saturated with water is the water table. The water table is rarely level and follows the general slope of the land above it. The level of the water table fluctuates from season to season. It rises in the rainy season and falls in the dry season.
TYPES OF SOIL
On the basis of proportion of particles of various sizes soil can be classified as.
(i) Sandy soil: If soil contains greater proportion of big particles it is called sandy soil.
(ii) Clayey soil: In such a soil the proportion of fine particles is relatively higher.
(iii) Loamy Soil: In such a soil the amount of large and fine particles is about the same.
Properties of Various Types of Soil
(i) Sandy Soil: Contains sand particles of large size and they can't fit close together. Large spaces are available between them. The spaces are filled with air and thus such a soil is well aerated, water can drain through the spaces and so sandy soil is light, well aerated and dry.
(ii) Clayey Soil: The smaller particles present in it can pack tightly together, leaving little space for air. These tiny gaps can hold water, so clayey soil has little air. It is heavy as it holds more water than sandy soil.
(iii) Loamy Soil: The best topsoil for growing plants is loam. Loamy soil is a mixture of sand, clay and silt (a type of soil particle). Loamy soil also contains humus. Such a soil has the right water-holding capacity for the growth of plants.
Note: The properties of soil are greatly influenced by the size of particles present in it.
TYPES OF INDIAN SOIL
1. Red Soil: This soil is red in colour due to the presence of large amounts of iron oxide.
2. Black Soil: It is rich in the minerals, iron and magnesium. This soil is suitable for the growth of sugarcane and cotton.
3. Alluvial Soil: This soil, formed by the weathering of rocks, is brought down by flowing rivers from the mountains. It is very fertile and rich in humus. It is suitable for the cultivation of wheat, rice and sugarcane.
4. Desert Soil : This sandy soil does not hold much water. Cacti, Date palm. Coconut palm etc. which do not need much water grow in this type of soil.
5. Mountain Soil: This highly fertile soil contains the highest humus content.
6. Laterite Soil: This soil is found in regions of heavy rainfall. It is good for the growth of plantation crops like, coffee, tea, coconut and banana.
PROPERTIES OF SOIL
Absorption of Water in the Soil
Plants need water to grow. If the soil does not hold water, the plants would need frequent watering or they will die. The amount of water a particular type of soil can absorb is called its water absorption tendency.
Note: Silt occurs as a deposit in river beds. The size of silt particles is between those of sand and clay.
Moisture in Soil
Soil holds water, which is called soil moisture. The capacity of soil to hold water is important for various crops.
Percolation Rate of Water
The rate at which water seeps through the soil is known as its percolation rate. Different soils have different percolation rates. To calculate the percolation rate we use the following formula:
$\text{Percolation rate (mL/min)=}\frac{\text{Amount of water percolated (mL)}}{\text{Percolation time (min)}}$
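For instance (a made-up sample, for illustration), if 200 mL of water percolates through a soil sample in 40 min, the percolation rate is 200/40 = 5 mL/min:

```python
def percolation_rate(amount_ml, time_min):
    # Percolation rate (mL/min) = amount of water percolated / percolation time.
    return amount_ml / time_min

print(percolation_rate(200, 40))  # 5.0 mL/min for this made-up sample
```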
Do you know?
Percolation rate is highest for sandy soil and least in case of clayey soil.
SOIL EROSION
The removal of top soil by water and wind is known as soil erosion.
The top soil contains humus and mineral salts, which are vital for the growth of plants. So, removal of the top soil by water and wind leaves the subsoil and rocky base underneath exposed.
Thus, erosion causes a significant loss of humus and nutrients and hence, decreases the fertility of soil.
Soil Erosion
Causes of Soil Erosion
There are several causes of soil erosion, which can be divided into two categories.
(i) Natural causes: It involves natural agents like wind and water.
(a) High wind velocity over lands, which have no vegetation, carries away the loose top soil.
(b) Pouring raindrops, over areas with no or very little vegetation, also carries away the top soil.
(ii) Man-made causes: Besides natural agents, there are certain man-made activities, which cause soil erosion. For example:
(a) Deforestation: Deforestation is the cutting or removal of trees or other vegetation for timber or for farming purposes. It increases soil erosion. Roots of plants hold soil particles together. In the absence of plants, the top layer of soil is easily removed by the action of high speed winds or water flow, thereby increasing the chances of soil erosion.
Deforestation
(b) Overgrazing: Overgrazing by herds of cattle and buffaloes and flocks of goats and sheep leaves very little plant cover on the soil. The hooves of the animals make the soil dry, which reduces its porosity and percolation.
(c) Improper agricultural practices: Improper tillage and burning of the stubble of weeds reduce the water-holding capacity of the soil. As a result, the soil becomes dry and hence can be easily blown away as dust.
(d) Heavy rainfall and strong winds: Uncovered soil is eroded quickly by heavy rain and strong winds.
(e) Slope: Run off water passing along the slope gathers speed and develops high cutting and carrying capacity.
Overgrazing
Effects of Soil Erosion
1. Soil erosion reduces the fertility of soil.
2. It leads to land sliding.
3. Soil erosion exposes the lower hard and rocky layer. As a result, the fertile land gets converted into a desert. This process is known as desertification of land.
4. It leads to flash floods. Roots of plants hold soil particles together. In the absence of plants, the seeping of water is reduced and thus the ground water does not get replenished. This could then cause floods.
Desertification of Land
Control of Soil Erosion
1. Deforestation should be stopped and more and more plants/trees should be planted.
2. Wind erosion is reduced if rows of trees and shrubs are planted at right angles to the prevailing direction of wind.
3. There should be a control on grazing. Grazing should be allowed only on areas meant for it and not on agricultural land.
4. Adopt terracing of fields. In this, a slope is divided into a number of flat fields to slow down the flow of water.
5. Floods can be controlled by building dams. Embankments or mud walls should be constructed around hill slopes or fields to stop the flow of water.
Terrace Farming
SOIL POLLUTION
Soil pollution occurs either directly through wastes or indirectly through air pollution. The main sources of soil pollution are:
1. Improper dumping of garbage and sewage wastes in soil
Dumping of Garbage
2. Acid water from factories and industries, and acid rain
Acid Waste and Spilling
3. Excessive use of pesticides and fertilizers
4. Spilling or leakage of chemicals
5. Improper dumping of plastics and metals which do not decay easily
Do you know?
Over 80% of items in landfills can be recycled, but they’re not.
Control of Soil Pollution
1. Use of organic fertilizers instead of chemical fertilizers and pesticides
2. Proper treatment of liquid waste before release into water bodies
3. Recycling of solid waste like plastic and metals
4. Use of animal and domestic wastes for producing biogas.
CONCEPT MAP
https://www.numerade.com/questions/in-a-european-country-a-bathroom-scale-displays-its-reading-in-kilograms-when-a-man-stands-on-this-s/ | ### Discussion
You must be signed in to discuss.
##### Christina K.
Rutgers, The State University of New Jersey
LB
##### Aspen F.
University of Sheffield
Lectures
Join Bootcamp
### Video Transcript
The situation here is the following. The man is standing on a scale, which exerts a normal force on him, and at the same time he is exerting a force F downwards on the bar, and the bar pushes him up with a force F. We only have to discover the magnitude of the force F. In order to do that, we have to use Newton's second law. For that, I would choose the following reference frame: a vertical axis pointing upwards. Applying Newton's second law in that situation results in the following. The net force is equal to the mass, the actual mass of the man, times his acceleration. But the net force is composed of three forces: the weight, the normal force and the force F. Then we have the normal force plus F minus the weight force being equal to zero, because the acceleration is equal to zero, as the man is standing still. So the force F is given by the weight force minus the normal force. But we don't know the value of the normal force. So how can we calculate the force F without knowing the normal force? It happens that what we know about is the normal force: the mass the scale presents is related to the normal force, not to the weight force. What the scale presents is the normal force divided by the acceleration of gravity. This is what the scale can read. It can't actually read your mass; it can only read the normal force. So we can use the fact that the normal force is given by the apparent mass times the acceleration of gravity, and at the same time the weight force is given by his actual mass times the acceleration of gravity. So the force F is given by, already factoring out the acceleration of gravity, his actual mass minus his apparent mass. So remembering that g is approximately 9.8 meters per second squared, we get 9.8 times 92.6 minus 75.1, and that results in a force of approximately 172 newtons. And this is the answer to this question.
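As a quick check of the arithmetic in the transcript (an added sketch; the two masses are the values quoted above):

```python
g = 9.8            # m/s^2, acceleration of gravity
m_actual = 92.6    # kg, the man's actual mass
m_apparent = 75.1  # kg, the mass the scale displays (normal force / g)

# F = W - N = g * (actual mass - apparent mass)
F = g * (m_actual - m_apparent)
print(round(F, 1))  # 171.5 N, approximately 172 N
```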
https://www.cs.mcgill.ca/announcements_and_events/seminars/archive?semester=fall&year=2009 | Date( Fall 2009 ) Category Seminar Info 2009/12/02 CQIL - Cryptography and Quantum Information Place: McConnnell 103 Time: 11:30 - 12:30 Speaker: Pranab Sen Affiliation: Tata Institute of Fundamental Research Title: k-Copy Quantum Expanders Nearly Meeting the Ramanujan Bound (Part II) Abstract: A k-copy quantum expander is a small set of unitary operators such that the superoperator consisting of choosing a random unitary from the set and applying k independent copies of it is close to the ideal superoperator consisting of choosing a random unitary from the Haar measure and applying k independent copies of it. k-copy quantum expanders give a way of constructing approximate k-designs of unitaries, and are an analogue of approximate k-wise independence in the quantum world. The well-known expanders of computer science turn out to be 1-copy expanders. A k-copy expander consisting of D unitaries cannot be closer than \Omega(D^{-1/2}) to the ideal superoperator in the spectral norm on linear operators, in what is known as the Ramanujan bound. No explicit construction of even 1-copy quantum expanders was known earlier, that was provably close to the Ramanujan bound. In this work, we give an explicit construction of k-copy quantum expanders that have distance c_k D^{-1/2 + o(1)} to the ideal superoperator, where c_k is a constant depending on k. This leads to better constructions of approximate k-designs of unitaries. Our construction is a quantum analogue of a construction of classical 1-copy expanders by Ben-Aroya and Ta-Shma, which in turn was a generalisation of a combinatorial construction called zig-zag graph product by Reingold, Vadhan and Wigderson. 2009/11/27 CQIL - Cryptography and Quantum Information Place: McConnell 103 Time: 11:30 - 12:30 Speaker: Benoit Collins Affiliation: University of Ottawa Title: Free probability and quantum information theory Abstract: After reviewing a few notions of free probability, I will explain how it can be used to refine our understanding of the behavior of the collection of eigenvalues of output states of quantum channels chosen at random. I will also explain the applications to the entropy additivity problems. This is joint work with Ion Nechita. Seminar co-sponsored by the MITACS Quantum Information Processing program. 2009/11/25 CQIL - Cryptography and Quantum Information Place: McConnell 103 Time: 11:30 - 12:30 Speaker: Pranab Sen Affiliation: Tata Institute Title: k-Copy Quantum Expanders Nearly Meeting the Ramanujan Bound Abstract: A k-copy quantum expander is a small set of unitary operators such that the superoperator consisting of choosing a random unitary from the set and applying k independent copies of it is close to the ideal superoperator consisting of choosing a random unitary from the Haar measure and applying k independent copies of it. k-copy quantum expanders give a way of constructing approximate k-designs of unitaries, and are an analogue of approximate k-wise independence in the quantum world. The well-known expanders of computer science turn out to be 1-copy expanders. A k-copy expander consisting of D unitaries cannot be closer than \Omega(D^{-1/2}) to the ideal superoperator in the spectral norm on linear operators, in what is known as the Ramanujan bound. No explicit construction of even 1-copy quantum expanders was known earlier, that was provably close to the Ramanujan bound. 
In this work, we give an explicit construction of k-copy quantum expanders that have distance c_k D^{-1/2 + o(1)} to the ideal superoperator, where c_k is a constant depending on k. This leads to better constructions of approximate k-designs of unitaries. Our construction is a quantum analogue of a construction of classical 1-copy expanders by Ben-Aroya and Ta-Shma, which in turn was a generalisation of a combinatorial construction called zig-zag graph product by Reingold, Vadhan and Wigderson. 2009/11/20 General Place: MC103 Time: 11:00 - 12:30 Speaker: Peter Marbach Affiliation: University of Toronto Area: Networks Title: Online Social Networks: Research Challenges and some Results Abstract: Online social networks have revolutionized the way we interact and share information over the Internet. Popular social networking applications include YouTube, Flickr, MySpace, Facebook, and Twitter, which support millions of active users. While being enormously popular, these applications only scratch the surface of what is possible to do, and there are tremendous opportunities in developing new, more advanced online social networking applications. Creating such applications poses new and fascinating research problems. One of major research challenges in this domain is to develop a formal understanding of online social networks both in terms of how online social networks are formed, and how they can be used to efficiently share and distribute information. In the talk, we will discuss research aiming at creating a mathematical foundation of online social networks that provides a formal understanding and framework for the design and analysis of algorithms for online social networking applications. The first part of the talk will present a broader research agenda for online social network. The second part will focus on recent theoretical results on search algorithms in online social networks. An interesting aspect of the results that we obtain is that they provide insight into why real-life social networks are so efficient. Biography of Speaker: Peter Marbach was born in Lucerne, Switzerland. He received the Eidg. Dipl. El.-Ing. (1993) from the ETH Zurich, Switzerland, the M.S. (1994) in electrical engineering from the Columbia University, NY, U.S.A, and the Ph.D. (1998) in electrical engineering from the Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts, U.S.A. He was appointed as assistant professor in 2000, and associate professor in 2005, at the Department of Computer Science of the University of Toronto. He has also been a visiting professor at Microsoft Research, Cambridge, UK, at the Ecole Polytechnique Federal at Lausanne (EPFL), Switzerland, and at the Ecole Normale Superieure, Paris, France, and a post-doctoral fellow at Cambridge University, UK. Peter Marbach has received the IEEE INFOCOM 2002 Best Paper Award for his paper "Priority Service and Max-Min Fairness". He is on the editorial board of the ACM/IEEE Transactions of Networking. His research interests are in the fields of communication networks, in particular in the area of wireless networks, peer-to-peer networks, and online social networks. 2009/11/16 DMO - Discrete Mathematics and Optimization Place: Burnside 1205 Time: 16:30 - 17:30 Speaker: Christophe Weibel Affiliation: McGill University Title: Flow-Cut Gaps for Integer and Fractional Multiflows Abstract: Consider a routing problem instance consisting of a supply graph G=(V,E (G)) and a demand graph H=(V,E(H)). 
If the pair obeys the cut condition, then the flow-cut gap for this instance is the minimum value C such that there exists a feasible multiflow for H if each edge of G is given capacity C. It is well-known that the flow-cut gap may be greater than 1 even in the case where G is the (series-parallel) graph K2,3. The subject of this talk is the "integer" flow-cut gap. What is the minimum value C such that there exists a feasible integer valued multiflow for H if each edge of G is given capacity C? We study the case of series-parallel graphs. Let G be obtained by series-parallel operations starting from an edge st, and consider orienting all edges in G in the direction from s to t. A demand is compliant if its endpoints are joined by a directed path in the resulting oriented graph. We show that if the cut condition holds for a compliant instance and G+H is Eulerian, then an integral routing of H exists. This result includes, as a special case, routing on a ring, but is not a special case of the Okamura-Seymour theorem. We use this to prove that the integer flow-cut gap in series-parallel graphs is 5. 2009/11/11 CQIL - Cryptography and Quantum Information Place: McConnell 103 Time: 11:30 - 12:30 Speaker: Steve Flammia Affiliation: Perimeter Institute for Theoretical Physics Title: Ultra Fast Quantum State Tomography Abstract: Everybody hates tomography. And with good reason! Experimentalists hate it because it is inefficient and difficult. Theorists hate it because it isn't very "quantum." But because of our current lack of meso-scale quantum computers capable of convincingly performing non-classical calculations, tomography seems like a necessary evil. In this talk, I will attempt to banish quantum state tomography to the Hell of Lost Paradigms where it belongs. I hope to achieve this by introducing several methods for learning quantum states more efficiently, in some cases exponentially so. The first method runs in polynomial time and outputs a polynomial-sized classical approximation of the state (in matrix product state form), together with a rigorous bound on the fidelity. The second result takes advantage of the fact that most interesting states are close to pure states to get a quadratic speedup using ideas from compressed sensing. I'll also show simulations of these methods that demonstrate how well they work in practical situations. Both of these results are heralded, and require no a priori assumptions about the state. This is joint work with S. Bartlett, D. Gross, R. Somma (first result), and D. Gross, Y.-K. Liu, S. Becker, J. Eisert, (second result; arXiv:0909:3304). Seminar sponsored by the MITACS Quantum Information Processing program. 2009/11/09 Algorithms Place: Burnside 1205 Time: 16:30 - 17:30 Speaker: Juan Vera Affiliation: Waterloo Title: Positive Polynomials Over Equality Constrained unbounded Sets Abstract: A simple yet powerful algebraic connection between the set of polynomials that are non-negative on a given closed domain and the set of polynomials that are non-negative on the intersection of the same domain and the zero set of a given polynomial is presented. This connection has interesting theoretical as well as algorithmic implications. It yields a succinct derivation of copositive programming reformulations for a big class of programs, generalizing Burer's copositive formulation for mixed non-convex quadratically constrained quadratic programming problems. 
As corollary of our main theorem we also obtain representation theorems for positive polynomials on closed sets. To illustrate the results we show how to use Polynomial Programming as a general framework for conic relaxations. This framework is used to re-derive previous relaxation schemes and provide stronger ones for general binary quadratic optimization. In particular, second-order cone relaxations are proposed. Computational tests show that using the second-order cone relaxation with triangle inequalities provides a bound that is competitive with the semidefinite bound strengthened with triangle inequalities but the former relaxation is computationally more efficient. This is joint work with Javier Pena and Luis Zuluaga, and Bissan Ghaddar and Miguel Anjos. 2009/11/04 CQIL - Cryptography and Quantum Information Place: McConnell 103 Time: 11:30 - 12:30 Speaker: Mathieu Cliche Affiliation: University of Waterloo Area: Quantum information Title: A relativistic quantum channel that can extract entanglement from the vacuum Abstract: We model a relativistic quantum channel in which Alice and Bob are Unruh-DeWitt detectors for scalar quanta and the only noise in their communication is due to quantum fluctuations. We calculate the classical channel capacity as a function of the spacetime separation and we confirm that the classical as well as the quantum channel capacity are strictly zero for spacelike separations. We show that this channel can be used to entangle Alice and Bob instantaneously. Alice and Bob are shown to extract this entanglement from the vacuum through a Casimir-Polder effect. 2009/10/20 CQIL - Cryptography and Quantum Information Place: McConnell 103 Time: 17:00 - 18:00 Speaker: Min-Hsiu Hsieh Affiliation: Erato-SORST Title: Performance of Entanglement-assisted Quantum LDPC Codes Constructed From Finite Geometries Abstract: We investigate the performance of entanglement-assisted quantum low-density parity-check (LDPC) codes constructed from finite geometries. Though the entanglement-assisted formalism provides a universal connection between a classical linear quaternary code and an entanglement-assisted quantum error-correcting code (EAQECC), the issue of maintaining large amount of pure maximally entangled states in constructing EAQECCs is a practical obstacle to its use. We provide families of EAQECCs with an entanglement consumption rate that decreases exponentially. Surprisingly, these EAQECCs outperform those codes constructed earlier. Based on joint work with Wen-Tai Yen and Li-Yi Hsu. The paper is available at arXiv:0906.5532 2009/10/19 DMO - Discrete Mathematics and Optimization Place: Burnside 1205 Time: 16:30 - 17:30 Speaker: Gyula Pap Affiliation: Cornell University Title: Algorithms for integral and half-integral multiflows Abstract: A classical problem in combinatorial optimization is that of maximum multiflows. We are given an undirected graph G=(V,E), and a set of terminals A subset of V. A multiflow is defined as a set of |A|(|A|-1)/ 2 flows between all pairs of distinct terminals. The goal is to maximize the total size of the multiflow, that is the sum of the size of all the |A|(|A|-1)/2 flows, subject to certain capacity constraints. Variations of the problem arise from edge- or node- capacities, and/or adding a (half-)integrality constraint. The problem is really interesting even for the special case of all-one node capacities, which is characterized by Mader's min-max formula, and solved by Lovász' linear matroid matching algorithm. 
Mader's formula also applies for arbitrary capacities, but matroid matching, in the obvious way of expanding the graph, only implies an exponential time algorithm. The main result presented in this talk is a strongly polynomial time algorithm to find a maximum integral multiflow subject to node- capacities (the most general of all these variations). This improves on a result of Ibaraki, Karzanov and Nagamochi for the edge- capacitated, half-integral version, and on Keijsper, Pendavingh and Stougie's weakly polynomial time algorithm for the edge-capacitated, integral version. These results are generalized to the node- capacitated version, although using completely different tehniques. The half-integral node-capacitated multiflow problem is solved based on a recent result saying that the polytope of shortest maximum multiflows is half-integral, which implies a strongly polynomial algorithm via ellipsoid method and Frank and Tardos' diophantine approximation. To solve the integral node-capacitated multiflow problem, we first solve it to half-integrality, and then use a scaling- type argument motivated by Gerards' proximity lemma for b-matching to reduce the problem to small capacities. In the presentation we will also take a look at related results and techniques, such as a splitting-off algorithm for Lov\'asz' and Cherkassky's result, a divide-and-conquer-type argument for the edge-capacitated version. 2009/10/19 General Place: MC320 Time: 11:00 - 12:00 Speaker: Patrick Lam Affiliation: Assistant Prof at Waterloo Area: Data Structures Title: Implementation and Use of Data Structures in Java Programs Abstract: Programs manipulate data. For many classes of programs, this data is organized into data structures. Java's standard libraries include robust, general-purpose data structure implementations; however, standard implementations may not meet developers' needs, forcing them to implement ad-hoc data structures. We investigate the implementation and use of data structures in practice by developing a tool to statically analyze Java libraries and applications. Our DSFinder tool reports 1) the number of likely and possible data structure implementations in a program and 2) characteristics of the program's uses of data structures. We applied our tool to 62 open-source Java programs and manually classified possible data structures. We found that 1) developers overwhelmingly used Java data structures over ad-hoc data structures; 2) applications and libraries confine data structure implementation code to small portions of a software project; and 3) the number of ad-hoc data structures correlates with the number of classes in both applications and libraries, with approximately 0.020 data structures per class. This is joint work with Syed Albiz, submitted to ICSE 2010. Biography of Speaker: Patrick was a B.Sc. and M.Sc. student at McGill. He then did his Ph.D. at MIT with Martin Rinard and then a post-doc back at McGill. He is currently an Assistant Prof at Waterloo. 2009/10/14 CQIL - Cryptography and Quantum Information Place: McConnell 103 Time: 11:30 - 12:30 Speaker: Jérémie Roland Affiliation: NEC Laboratories (Princeton) Title: Adiabatic quantum optimization fails for random instances of NP-complete problems Abstract: Adiabatic quantum optimization has attracted a lot of attention because small scale simulations gave hope that it would allow to solve NP-complete problems efficiently. 
Later, negative results proved the existence of specifically designed hard instances where adiabatic optimization requires exponential time. In spite of this, there was still hope that this would not happen for random instances of NP-complete problems. This is an important issue since random instances are a good model for hard instances that can not be solved by current classical solvers, for which an efficient quantum algorithm would therefore be desirable. Here, we will show that because of a phenomenon similar to Anderson localization, an exponentially small eigenvalue gap appears in the spectrum of the adiabatic Hamiltonian for large random instances, very close to the end of the algorithm. This implies that unfortunately, adiabatic quantum optimization also fails for these instances by getting stuck in a local minimum, unless the computation is exponentially long. Joint work with Boris Altshuler and Hari Krovi. 2009/10/07 CIM - Centre for Intelligent Machines Place: McConnell 437 Time: 11:30 - 12:30 Speaker: Davd W. Cofer Affiliation: Georgia State University Title: ANIMATLAB: A PHYSICS BASED 3-D GRAPHICS ENVIRONMENT FOR BEHAVIORAL NEUROBIOLOGY RESEARCH Abstract: Understanding neural mechanisms of behavior has frequently depended on comparisons between detailed descriptions of freely behaving animals and fictive motor programs displayed by neurons in anesthetized, restrained, and dissected preparations. We have developed a software toolkit, AnimatLab, to help researchers determine whether their descriptions of neural circuit function can account for how specific patterns of behavior are controlled. AnimatLab enables one to build a virtual body from simple LEGO™-like building blocks, situate it in a virtual 3-D world subject to the laws of physics, and control it with a multi-cellular, multi-compartment neural circuit model. A Body Editor enables adjustable blocks, balls, truncated cones, cylinders and meshes to be connected through a variety of joints to form a body. Sensors enable extrinsic and intrinsic signals to be detected, and virtual muscles governed by Hill muscle models span the joints to produce movement. The physics engine Vortex™ from CM-Labs simulates gravity, buoyancy, drag, friction, contact collision, and muscle tension for the various body parts. A Neural Editor enables a neural network to be constructed from a menu of neurons and synapses, and then linked to sensors and effectors on the body. We are currently using AnimatLab to study the neural control of locomotion and arm movements, the escape behavior of crayfish, and jumping in locust. In the near future we plan to expand AnimatLab so it can be used to build and test biomimetic robots. 2009/10/07 CQIL - Cryptography and Quantum Information Place: McConnell 103 Time: 11:30 - 12:30 Speaker: Artem Kaznatcheev Affiliation: McGill University Title: Properties of unitary t-designs Abstract: Unitary t-designs provide a method to simplify integrating polynomials of degree less than t over U(d). Designs allows us to replace the average over U(d) (an integral) by an average over the design (a finite sum). We prove the trace double sum inequality and use it to provide a metric definition of designs. The new definition provides an easier way of checking in a set of unitaries forms a design. In the search for small designs, we classify minimal designs according to their weight function. As an alternative approach to deriving lower bounds on the size of t-designs, we introduce a greedy ‘algorithm’ for constructing designs. 
We provide a construction for minimum 1-designs based on orthonormal bases of the space of d-by-d matrices. The constructions provides a simple way to evaluate the average of U*V*UV for fixed V. This allows us to prove that t-designs are non-commuting sets, supporting our intuition that the elements of a design are well ‘spread out’. 2009/10/05 Algorithms Place: Burnside 1205 Time: 16:30 - 17:30 Speaker: Deeparnab Chakrabarty Affiliation: University of Waterloo Title: Two generalizations of (0,1)-covering problems Abstract: Pick your favorite (set) covering problem: for the purpose of this talk consider the line-cover problem of covering a set of points on a line by line segments. We call a covering problem (0,1) if any set covers any element at most once. This terminology arises from casting this problem as a covering integer program (CIP) where the constraint matrix is {0,1}. We consider two generalizations of a (0,1) covering problem. In each generalization, we associate an integral supply for each set (column of the CIP matrix) and a demand for each element (row of the CIP matrix). In the first problem, called the column-restricted covering problem, we need to choose a minimum cost family of sets such that for each element the total supply of the sets chosen exceeds the demand of the element. The second problem, which we call the priority covering problem, is another (0,1) covering problem obtained as follows. We say a set covers an element iff it contains it and the supply of the set is larger than the demand of the element. Given this new set system, we want to choose a standard set cover, the minimum cost family of sets which covers every element at least once. We are interested in connecting the approximability, and in particular the integrality gaps of the corresponding CIPs, of these generalized problems to that of the original (0,1) problem. In this talk, we will describe some partial progress we've made. In particular, we show that for the line cover problem the integrality gaps of both are within a constant factor of the original one's. Joint work with Elyot Grant and Jochen Konemann 2009/09/30 CQIL - Cryptography and Quantum Information Place: McConnell 103 Time: 11:30 - 12:30 Speaker: Ion Nechita Affiliation: University of Ottawa Title: Eigenvalue statistics for products of random quantum channels Abstract: We study products of random quantum channels from the point of view of random matrix theory. Two models are considered: independent and conjugate channels. The second model has been very successful in quantum information theory as a source for counter examples to some famous additivity conjectures. We show how the two models relate and we compute average traces of outputs of such random channels. Our main tool is a graphical calculus for random matrices developed by the authors. This calculus is based on a diagrammatic notation for tensors, inspired by ideas of Penrose and Coecke. Then, interpreting Weingarten calculus in our formalism, we describe a method for computing expectation values of diagrams which contain Haar-distributed random unitary matrices. This is done by the means of a graph-expansion of the original diagram. The graphical computations are intuitive and give insight on the dominating terms. Finally, applications of these results to additivity conjectures are discussed. This is joint work with Benoît Collins (University of Ottawa). 
2009/09/16 CQIL - Cryptography and Quantum Information Place: McConnell 103 Time: 11:30 - 12:30 Speaker: Kazuo Iwama Affiliation: Kyoto University Area: Quantum Information Title: Quantum Counterfeit Coin Problems Abstract: The counterfeit coin problem is a well-known puzzle whose origin dates back to 1945; in the American Mathematical Monthly, 52, p.~46, E. Schell posed the following question which is probably one of the oldest questions about the complexity of algorithms: You have eight similar coins and a beam balance. At most one coin is counterfeit and hence underweight. How can you detect whether there is an underweight coin, and if so, which one, using the balance only twice?'' The puzzle immediately fascinated many people and since then there have been several different versions and extensions in the literature. In this talk, we study the quantum query complexity for this problem. We assume that the balance scale gives us only balanced'' or tilted'' information and that we know the number $k$ of false coins in advance. The balance scale can be modeled by a certain type of oracle and its query complexity is a measure for the cost of weighing algorithms (the number of weighings). Let $Q(k,N)$ be the quantum query complexity of finding all $k$ false coins from the $N$ given coins. We show that for any $k$ and $N$ such that $k < N/2$, $Q(k,N)=O(k^{1/4})$, contrasting with the classical query complexity, $\Omega(k\log(N/k))$, that depends on $N$. Some bounds for small $k$ are also investigated: (i) $Q(1,N)=1$, (ii) $Q(2,N)=1$, and (iii) $2\leq Q(3,N) \leq 3$. Biography of Speaker: Professor Iwama received the B.E., M.E. and Ph.D. degrees from Department of Electrical Engineering, Kyoto University in 1973, 1975 and 1980, respectively. His reseach interets are mainly algorithms and complexity theory. | 2014-03-09 02:48:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6442189812660217, "perplexity": 1095.1294430814687}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999670852/warc/CC-MAIN-20140305060750-00000-ip-10-183-142-35.ec2.internal.warc.gz"} |
https://datascience.stackexchange.com/questions/31329/training-with-data-of-different-shapes-is-padding-an-alternative | # Training with data of different shapes. Is padding an alternative?
I have a dataset of about 1k samples and I want to apply some unsupervised techniques for clustering and visualization of these data.
The data can be interpreted as a table of a spreadsheet and unfortunately it doesn't have a very well-defined structure. The number of table rows varies, but not the columns.
The data is structured like this:
sample 1:
{
"table1": {
"column1": [
"-",
"-",
"-"
],
"column2": [
"2017-04-16 10:00",
"2017-04-16 10:00",
"2017-04-16 10:00"
],
"column3": [
"-",
"-",
"-"
],
"column4": [
"name X",
"name Y",
"name Z"
],
"column5": [
"0",
"0",
"0"
]
}
}
sample 2:
{
"table1": {
"column1": [
"-",
"-",
"-",
"-",
"-",
"-",
"-",
"-"
],
"column2": [
"2017-04-10 22:00",
"2017-04-10 22:00",
"2017-04-10 22:00",
"2017-04-10 22:00",
"2017-04-10 22:00",
"2017-04-10 22:00",
"2017-04-10 22:00",
"2017-04-10 22:00"
],
"column3": [
"-",
"-",
"-",
"-",
"-",
"-",
"-",
"-"
],
"column4": [
"name A",
"name Z",
"name B",
"name X",
"name C",
"name D",
"name E",
"name F"
],
"coumn5": [
"",
"",
"3",
"1",
"0",
"3",
"0",
"0"
]
}
}
These samples come from alarms generated by a system that collects information from a lot of nodes (these nodes are named as "name A", "name B"...). My objective is to transform these data into a matrix (n_samples x n_features) to apply clustering and visualization algorithms.
How can I work with these data for unsupervised training? Is padding a way forward for this problem? If so, how can I apply the padding in this case?
## 1 Answer
Whether or not padding is appropriate really depends on the entire structure of your dataset, how relevant the different variables/columns are, and also the type of model you want to run at the end.
Padding would be used, whereby you would have to fix the length of each sample (either to the length of the longest sample, or to a fixed length - longer samples would be trimmed or filtered somehow to fit into that length). Variables that are strings can be padded with empty strings, variables with numbers can be padded with zeros. There are however many other ways to pad, e.g. using the average of a numerical variable, or even model-based padding, padding with values that "make most sense" for filling gaps in that specific sample. Getting deep into it like that might more generally be called imputation, instead of padding - it is common in time series data, where gaps aren't always at one end of a sample.
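To make the basic version concrete, here is a minimal sketch (the column values and the target length are made up for illustration): each column of a sample is padded, or trimmed, to a fixed length, with empty strings for string columns and zeros for numeric columns.
def pad_sample(values, target_len, fill):
    # Pad a list to target_len with fill; trim it if it is longer.
    return (list(values) + [fill] * target_len)[:target_len]
col4 = ["name A", "name Z", "name B"]  # string column of one sample
col5 = [0, 0, 3, 1]                    # numeric column of one sample
print(pad_sample(col4, 8, ""))  # pad strings with empty strings
print(pad_sample(col5, 8, 0))   # pad numbers with zeros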
Below I outline one approach to padding or standardizing the length of each sample. It is not specifically padding.
As you did not mention a programming language, I will give a code snippet in Python, but the same is easily achievable in other languages such as R and Julia.
### The Approach
Based on the two examples you provide, it seems each example would be a calendar day, on which there are a variable number of observations.
There are also columns that are strings, and others are strings of numbers (e.g. column 5 in sample 2).
In time-series analysis in general, it is desirable to have a continuous frequency of data. That means having one day give one input. So my approach would be to make your data into a form that resembles a single input for each of the variables (i.e. columns) of each sample.
This is a general approach, and you will have to try things out or do more research on how this would look in reality for your specific data at large.
### Timestamps
I would use these as a kind of index, like the index in a Pandas DataFrame. One row = one timestamp. Multiple variables are then different columns.
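For instance, a minimal sketch (the dictionary below is a trimmed version of sample 1):
import pandas as pd
sample = {"column2": ["2017-04-16 10:00", "2017-04-16 10:00", "2017-04-16 10:00"],
          "column4": ["name X", "name Y", "name Z"],
          "column5": ["0", "0", "0"]}
df = pd.DataFrame(sample)
df["column2"] = pd.to_datetime(df["column2"])
df = df.set_index("column2")  # one row per observation, timestamps as the index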
### Dealing with strings
I will assume that your dataset has a finite number of possible strings in each column. For example, that column 4 (holding names) will always hold a name from a given set. One could perform set(sample['table1']['column4']) on a single sample to see which values there are (removing duplicates). Or even:
# Gather a single list containing all strings in column 4 of all samples
# (assumes a list `samples` holding the parsed sample dictionaries shown above)
all_names = []
for sample in samples:
    all_names.extend(sample['table1']['column4'])
# Check how many strings there are
from collections import Counter
counter = Counter(all_names)  # count occurrences across all samples
print(len(counter)) # see how many unique values exist
print(counter) # see how many times each string appears
print(counter.most_common(5)) # see the most common 5 strings
Now assuming this shows there is a finite number (more than likely the case), you could look into using a sparse representation of each sample (that means for each day). For example, if all the words in the entire dataset were: ['hi', 'hello', 'wasup', 'yo', 'bonjour'] (duplicates removed), then for one single sample with column 4 holding e.g. ['hi', 'hello', 'yo', 'hi'], your sparse representation for this sample would be: [2, 1, 0, 1, 0], because the sample has two 'hi', one 'hello', zero 'wasup' and so on. This sparse representation would then be your single input for column 4 for the timestamp (that single sample).
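In code, the toy example above looks like this (a minimal sketch):
vocab = ["hi", "hello", "wasup", "yo", "bonjour"]  # all words in the dataset, duplicates removed
sample_col4 = ["hi", "hello", "yo", "hi"]          # column 4 of one sample
sparse = [sample_col4.count(word) for word in vocab]
print(sparse)  # [2, 1, 0, 1, 0]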
It might be worth looking into something like the DictVectorizer and CountVectorizer from Scikit-Learn.
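For example, a sketch using DictVectorizer (assuming a recent scikit-learn; samples_col4 is a made-up list of per-sample string lists):
from collections import Counter
from sklearn.feature_extraction import DictVectorizer
samples_col4 = [["hi", "hello", "yo", "hi"], ["yo", "bonjour"]]
vec = DictVectorizer(sparse=False)
X = vec.fit_transform([Counter(s) for s in samples_col4])  # shape: n_samples x n_unique_strings
print(vec.get_feature_names_out())
print(X)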
### Dealing with number columns
As I mentioned right at the beginning, you could pad these to a chosen length, perhaps matching the length of the string-based representation above (or not!), depending on your final model. You can then pad the inputs with a value that makes sense for your model (see the kind of options I mentioned at the beginning of my answer). This should land you with, once again, a single vector for the given day, containing the information in the numerical column.
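A minimal sketch of that with numpy (the column values and target length are made up; it assumes no column is longer than the target):
import numpy as np
num_cols = [[0, 0, 3, 1, 0], [1, 2]]  # one numeric column per sample
target = 8                            # chosen fixed length
padded = np.array([np.pad(np.asarray(c, dtype=float), (0, target - len(c))) for c in num_cols])
print(padded.shape)  # (2, 8): one fixed-length, zero-padded vector per sample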
• Great answer. I have some notes on your thoughts about the scenario. 1 - I'm considering taking out the time information as it's not relevant for the analysis. 2 - Does the row order matter in this case? I mean, if "name X" occurred at index 0 of the first sample, does it HAVE to appear at index 0 of every following sample? If so, the "Dealing with strings" part makes complete sense. I was planning to take this out too. Unfortunately I cannot share the complete data with you, but there are 3 more columns like "column5" that are relevant for training; the rest I was planning to get rid of. Please check my edit. May 7, 2018 at 19:05
• If you are working with strings in one column and create a sparse representation, the order of the strings won't matter, as long as you use something that counts the appearances of each string. May 8, 2018 at 9:35
http://kitchingroup.cheme.cmu.edu/blog/2015/03/18/Clickable-links-for-Twitter-handles-in-Emacs/ | | categories: emacs | tags: |
Org-mode has clickable links, and they are awesome. You can make your own links, for example here is a link for twitter handles that opens a browser to the handle, and exports as an html link.
(org-add-link-type "twitter"
 (lambda (handle) (browse-url (concat "https://twitter.com/" handle)))
 ;; export function body reconstructed; the original was lost in extraction
 (lambda (path desc backend)
   (when (eq backend 'html)
     (format "<a href=\"https://twitter.com/%s\">%s</a>" path (or desc path)))))
Check it out here: johnkitchin.
There is another alternative to make clickable text, and that is the button-lock package. You define a regular expression for the text you want to be clickable, and a function to run when it is clicked. Here is an example.
(require 'button-lock)
(global-button-lock-mode)
(button-lock-set-button
 "@\\([-a-zA-Z0-9_:]*\\)" ; regexp group delimiters restored from mangled markup
 (lambda ()
   (interactive)
   (re-search-backward "@")
   (re-search-forward "@\\([a-zA-Z0-9_]*\\)")
   (let* ((handle (match-string-no-properties 1)))
     ;; closing body reconstructed; the original was lost in extraction
     (browse-url (concat "https://twitter.com/" handle)))))
Check it out: @johnkitchin. Of course, you can make your clicking function more sophisticated, e.g. to give you a menu of options: to send a tweet to someone, or open the web page, or look them up in your org-contacts. The differences between this and an org-mode link are that this works in any mode, and it has no export in org-mode, so it will go as plain text. Since this is just a feature for Emacs though, that should be fine.
org-mode source
Org-mode version = 8.2.10
https://tex.stackexchange.com/questions/615930/tikz-3d-plot-transfer-function | # Tikz 3d plot transfer function
This is a follow-up question to this
I have plotted the graph with the online help of different resources (code and picture attached).
Now the problems are:
It is not correctly plotting the curve in the y-z plane (the red line) and gives this error message:
Package pgfplots Error: Sorry, you can't use 'y' in this context. PGFPlots are expected to sample a line, not a mesh. Please use the [mesh] option combined with [samples y>0] and [domain y!=0:0] to indicate a twodimensional input domain.
Labels are too small and not placed properly.
It would also help if somebody could set the view angles so the plot looks like the original picture from the book (picture attached).
If somebody can help in that regard.
RAR
\documentclass[border=1cm]{standalone}
\usepackage{pgfplots}
\usetikzlibrary{calc,math}
\pgfkeys{/pgf/declare function={H(\x,\y) = 3*((((((\x)^2-(\y)^2+3*(\x)+2)^2+((2*\x*\y)+3*\y)^2)+2.2204e-16)^(1/2))/(((((\x)^3+5*(\x)^2-3*\x*(\y)^2+8*x-5*(\y)^2+6)^2+(3*(\x)^2*y+10*\x*\y-(\y)^3+8*y)^2)+2.2204e-16)^(1/2)));}}
\begin{document}
\begin{tikzpicture}
\begin{axis}[
axis lines=middle, axis on top,
axis equal image,
width=50cm,
view={30}{10},
xmin=-4,
xmax=0,
ymin=-2,
ymax=2,
zmin=0,
zmax=5,
miter limit=1,
xlabel=$\sigma$,
xlabel style={anchor=east,xshift=-5pt,at={(xticklabel* cs:.95)}},
ylabel=$j\Omega$,
zlabel=$\mathopen| H(s)\mathclose|$,
zlabel style={anchor=north east},
xtick = {-3,-2,-1,0},
hide obscured x ticks=false,
ytick = {-1,0,1},
ztick = {2,4},
]
\addplot3[ % opening of the plot command restored; it was lost in extraction
smooth,
surf,
faceted color=gray,
line width=0.1pt,
fill=white,
domain=-4:0,
y domain = -2:2,
samples = 50,
samples y = 50,
restrict z to domain*=0:5]
{H(\x,\y)};
\addplot3[domain=-2:2,samples=70, samples y = 0,red, thick] ({0},{x},{H(0,x)});
\end{axis}
\end{tikzpicture}
\end{document}
I have completed everything but just one thing is left. I am not able to draw the curve of the function in the YZ plane where X=0. I cannot understand why this line
\addplot3[domain=-2:2,samples=70, samples y = 0,red, thick] ({0},{x},{H(0,x)});
giving this error
Package pgfplots Error: Sorry, you can't use 'y' in this context. PGFPlots expected to sample a line, not a mesh. Please use the [mesh] option combined with [samples y>0] and [domain y!=0:0] to indicate a twodimensional input domain.
I have searched many websites; everybody's syntax is working.
Basically it should follow the curve of H(x,y) in the YZ plane where x=0.
If somebody can throw some light on this.
\documentclass[border=1cm]{standalone}
\usepackage{pgfplots}
\usetikzlibrary{calc,math}
\usetikzlibrary{shapes.misc}
\usetikzlibrary{arrows}
\pgfplotsset{every tick label/.append style={font=\huge}}
\pgfkeys{/pgf/declare function={H(\x,\y) = 3*((((((\x)^2-(\y)^2+3*(\x)+2)^2+((2*\x*\y)+3*\y)^2)+2.2204e-16)^(1/2))/(((((\x)^3+5*(\x)^2-3*\x*(\y)^2+8*x-5*(\y)^2+6)^2+(3*(\x)^2*y+10*\x*\y-(\y)^3+8*y)^2)+2.2204e-16)^(1/2)));}}
\begin{document}
\begin{tikzpicture}
\begin{axis}[
axis lines=middle, axis on top,
axis equal image,
axis line style={black, ultra thick},
width=50cm,
view={30}{15},
xmin=-4,
xmax=0,
ymin=-2,
ymax=2,
zmin=0,
zmax=5,
miter limit=1,
xlabel=$\sigma$,
xlabel style={font=\Huge, anchor=east,xshift=1pt,at={(xticklabel* cs:1.05)}},
ylabel style={font=\Huge, anchor=west},
ylabel=$j\Omega$,
zlabel=$\mathopen| H(s)\mathclose|$,
zlabel style={font=\Huge, anchor=north west},
xtick = {-3,-2,-1,0},
hide obscured x ticks=false,
ytick = {-1,0,1},
ztick = {2,4},
]
\addplot3[ % opening of the plot command restored; it was lost in extraction
smooth,
surf,
faceted color=gray,
line width=0.1pt,
fill=white,
domain=-4:0,
y domain = -2:2,
samples = 50,
samples y = 50,
restrict z to domain*=0:5]
{H(\x,\y)};
\addplot3[dashed] coordinates { % opener restored; exact options were lost in extraction
(0,1,0)
(-1,1,0)
(-1,0,0)
};
\addplot3[dashed] coordinates { % opener restored
(-1,1,0)
(-1,1,5)
};
\addplot3[dashed] coordinates { % opener restored
(0,-1,0)
(-1,-1,0)
(-1,0,0)
} ;
\addplot3[dashed] coordinates { % opener restored
(-1,-1,0)
(-1,-1,5)
};
\addplot3[dashed] coordinates { % opener restored
(-3,0,0)
(-3,0,5)
};
\addplot3[black] coordinates {(-1,1,0)} node[solid, cross out,draw=black,] {};
\addplot3[black] coordinates {(-1,-1,0)} node[solid, cross out,draw=black] {};
\addplot3[black] coordinates {(-3,0,0)} node[solid, cross out,draw=black] {};
\addplot3[domain=-2:2,samples=70, samples y = 0,red, thick] ({0},{x},{H(0,x)});
\end{axis}
\end{tikzpicture}
\end{document}
• Does it really make sense to set the samples y to 0? You won't get any points. What is your expected output for the red function? Sep 19 at 5:09
• It is the curve in the 3d graph which is basically the Fourier transform, i.e. I need to plot the function H over values of y with x equal to zero. So I am in the YZ plane and just plot H with y varying from -2 to 2. Sep 19 at 5:13
• I can't understand why the same command works in this code but not in my code. tex.stackexchange.com/questions/383343/… Sep 19 at 5:21
• I am talking about the line \addplot3[domain=0:1.5,samples=70, samples y = 0, red, thick] ({0},{x},{H(0,x)}); I used the same line in my code but it's not working. Sep 19 at 5:41
• I see the code is in the answer, not in the question Sep 19 at 5:47
Finally, it is done. The mistake was a typo in the function (unescaped x and y instead of \x and \y in the denominator), so the graph is now complete. This graph shows the poles and zeros of a transfer function with poles at $s=-1+j$, $s=-1-j$ and $s=-3$. The zeros are at $s=-1$ and $s=-2$. The red curve is the Fourier transform, or frequency response.
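For reference, with $s=\sigma+j\Omega$ the plotted magnitude is $|H(s)|$ for the transfer function below; the factorization is my reading of the declared function, and it is consistent with the poles and zeros listed above:
$$H(s)=\frac{3\,(s+1)(s+2)}{(s+3)(s+1-j)(s+1+j)}=\frac{3\,(s^{2}+3s+2)}{s^{3}+5s^{2}+8s+6},$$
so the red curve is $|H(j\Omega)|$, obtained by setting $\sigma=0$.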
Here is the complete code
\documentclass[border=1cm]{standalone}
\usepackage{pgfplots}
\usetikzlibrary{calc,math}
\usetikzlibrary{shapes.misc}
\usetikzlibrary{arrows}
\pgfplotsset{every tick label/.append style={font=\huge}}
\pgfkeys{/pgf/declare function={H(\x,\y) = 3*((((((\x)^2-(\y)^2+3*(\x)+2)^2+((2*\x*\y)+3*\y)^2)+2.2204e-16)^(1/2))/(((((\x)^3+5*(\x)^2-3*\x*(\y)^2+8*\x-5*(\y)^2+6)^2+(3*(\x)^2*\y+10*\x*\y-(\y)^3+8*\y)^2)+2.2204e-16)^(1/2)));}}
\tikzset{cross/.style={cross out, draw=black, minimum size=2*(#1-\pgflinewidth), inner sep=0pt, outer sep=0pt},
cross/.default={1pt}}
\begin{document}
\begin{tikzpicture}
\begin{axis}[
axis lines=middle, axis on top,
axis equal image,
axis line style={black, ultra thick},
width=50cm,
view={40}{15},
xmin=-4,
xmax=0,
ymin=-2,
ymax=2,
zmin=0,
zmax=5,
miter limit=1,
xlabel=$\sigma$,
xlabel style={font=\Huge, anchor=east,xshift=1pt,at={(xticklabel* cs:1.05)}},
ylabel style={font=\Huge, anchor=west},
ylabel=$j\Omega$,
zlabel=$\mathopen| H(s)\mathclose|$,
zlabel style={font=\Huge, anchor=north west},
xtick = {-3,-2,-1,0},
hide obscured x ticks=false,
ytick = {-1,0,1},
ztick = {2,4},
]
\addplot3[ % opening of the plot command restored; it was lost in extraction
smooth,
surf,
faceted color=gray,
line width=0.1pt,
fill=white,
domain=-4:0,
y domain = -2:2,
samples = 50,
samples y = 50,
restrict z to domain*=0:5]
{H(\x,\y)};
\addplot3[dashed] coordinates { % opener restored; exact options were lost in extraction
(0,1,0)
(-1,1,0)
(-1,0,0)
};
\addplot3[dashed] coordinates { % opener restored
(-1,1,0)
(-1,1,5)
};
\addplot3[dashed] coordinates { % opener restored
(0,-1,0)
(-1,-1,0)
(-1,0,0)
} ;
\addplot3[dashed] coordinates { % opener restored
(-1,-1,0)
(-1,-1,5)
};
\addplot3[dashed] coordinates { % opener restored
(-3,0,0)
(-3,0,5)
};
\addplot3[black] coordinates {(-1,1,0)} node[solid, cross=8pt,draw=black,] {};
\addplot3[black] coordinates {(-1,-1,0)} node[solid, cross=8pt,draw=black] {};
\addplot3[black] coordinates {(-3,0,0)} node[solid, cross=8pt,draw=black] {};
% the remaining lines were truncated in the crawled page; restored to match the question's listing
\addplot3[domain=-2:2,samples=70, samples y = 0,red, thick] ({0},{x},{H(0,x)});
\end{axis}
\end{tikzpicture}
\end{document}
https://stacks.math.columbia.edu/tag/03HS | Lemma 10.136.14. Suppose that $A$ is a ring, and $P(x) = x^ n + b_1 x^{n-1} + \ldots + b_ n \in A[x]$ is a monic polynomial over $A$. Then there exists a syntomic, finite locally free, faithfully flat ring extension $A \subset A'$ such that $P(x) = \prod _{i = 1, \ldots , n} (x - \beta _ i)$ for certain $\beta _ i \in A'$.
Proof. Take $A' = A \otimes _ R S$, where $R$ and $S$ are as in Example 10.136.8, where $R \to A$ maps $a_ i$ to $b_ i$, and let $\beta _ i = -1 \otimes \alpha _ i$. Observe that $R \to S$ is syntomic (Lemma 10.136.13), $R \to S$ is finite by construction, and $R$ is Noetherian (so any finite $R$-module is finitely presented). Hence $S$ is finite locally free as an $R$-module by Lemma 10.78.2. We omit the verification that $\mathop{\mathrm{Spec}}(S) \to \mathop{\mathrm{Spec}}(R)$ is surjective, which shows that $S$ is faithfully flat over $R$ (Lemma 10.39.16). These properties are inherited by the base change $A \to A'$; some details omitted. $\square$
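For intuition, here is the shape of the construction, assuming Example 10.136.8 is the usual universal splitting algebra: take $R = \mathbf{Z}[a_1, \ldots, a_n]$ and $S = \mathbf{Z}[\alpha_1, \ldots, \alpha_n]$, with $R \to S$ sending $a_i$ to the $i$th elementary symmetric function $e_i(\alpha_1, \ldots, \alpha_n)$, so that $x^n + a_1 x^{n-1} + \ldots + a_n = \prod_{i = 1, \ldots, n} (x + \alpha_i)$ in $S[x]$. Then $S$ is finite free over $R$ of rank $n!$, a basis being given by the monomials $\alpha_1^{c_1} \ldots \alpha_n^{c_n}$ with $0 \leq c_i \leq n - i$.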
Comment #7916 by Masugi Kazuki on
Sorry, I cannot understand why $R \to S$ is locally free, and why $A \to A'$ is an injection. (c.i.-ness of the fiber is okay: because of finiteness, if $R \to S$ is locally free, it is faithfully flat and syntomic.)
Comment #8168 by on
OK, thanks for your comment. I have fixed this here by adding a reference to Lemma 10.136.13. I encourage the reader to prove directly that the map $R \to S$ of Example 10.136.8 is finite free of positive rank.
http://www.es.ele.tue.nl/poosl/Tools/poosl_ide_unittest/ | ## Introduction
The experimental POOSL Unit Testing Plug-in brings facilities to test the functionality of Data Classes.
This page describes facilities to test POOSL Data Classes. To do so, it uses some of the mechanisms from the Xpect testing framework; integrated in a JUnit environment, the tests are picked up from class and method annotations.
## Prerequisites
The Unit Testing plug-in has been tested on Windows 7 with Eclipse Mars.2 and the ESI POOSL Plug-in version 3.6.0.
At the moment, the plug-in requires the Plug-in Development Environment (PDE) plug-in to be installed in Eclipse for the JUnit integration with other plug-ins. Additionally, you need a properly set-up project with the right plug-in dependencies.
## Installation
The plug-in can be installed by pointing Eclipse to one of the following update sites:
A Release update site will be available as soon as the POOSL Unit Testing Plug-in has been tested on platforms other than Windows 7.
A standalone version that can be run outside Eclipse is under development. The plug-in should work with the Maven Surefire execution, but this is still experimental and not yet thoroughly tested.
## How to use
Download and extract the example project. Import the project into Eclipse through the Import... -> Existing projects into workspace wizard.
Right click the project root or the src/test/MyTests.java file. Select Run as -> JUnit Test, and the POOSL Unit Tests will start and report the output to Eclipse. If you clicked the project root, all the JUnit tests will be executed. If you clicked the specific JUnit Test file, only that file will be started.
You can now start writing POOSL files that use import unittest.poosl, and any Data Class that (directly!) extends UnitTestCase will be registered as a testing unit. Take a look at the example Unit Tests in /models/example.poosl in the example project. For the Unit Testing API, you can take a look at the methods in the /models/unittest.poosl file.
## Features
Some of its features include:
• Isolated test-cases, including Before, After, Setup and Teardown annotations on methods to provide testing fixtures.
• Test for expected errors.
• Automatically generating test models and running them with Rotalumis to check the execution.
• JUnit integration for test-cases and test-suites.
• JUnit Eclipse UI Integration for easy access to definitions of failed test.
You can also use your own version of Rotalumis by setting the environment variable ROTALUMIS_BIN to the file location of the Rotalumis engine that you want to use.
## Known issues
• The double-click handler is currently unable to properly locate the file where the test resides when the project name (in .project) does not match the Bundle-SymbolicName in the MANIFEST.MF.
## Bug reports and feature requests
• The POOSL Unit Testing Plug-in is not under active development, but bug reports and feature requests can be directed to Joost van Pinxten.
https://www.studyadda.com/index.php?/notes/5th-class/mathematics/area-perimeter-and-volume/perimeter/8305 | # 5th Class Mathematics Area, Perimeter and Volume Perimeter
## Perimeter
Category : 5th Class
### Perimeter
As we know, all geometrical shapes like triangles, quadrilaterals, etc. occupy some area. Perimeter refers to the length of the boundary line which surrounds the area occupied by a geometrical shape. In rectilinear figures the line segments which bound the area are called sides. Thus we can say the perimeter of a geometrical shape is the sum of the lengths of all the sides which bound the area occupied by that shape.
Find the perimeter of the following figure:
Explanation
Perimeter of the figure $=AB+BC+CD+DE+EA$
Perimeter of the figure $=4.5\,cm+4\,cm+2.5\,cm+3\,cm+4\,cm=18\,cm.$
Perimeter of Triangles
A triangle has three sides.
Perimeter of the triangle $ABC=AB+BC+CA$
Thus, the perimeter of a triangle is the sum of the lengths of its three sides.
Find the perimeter of the following triangle:
Perimeter of the triangle $ABC=AB+BC+AC$ $=4\,cm+3.5\,cm+5\,cm$ $=12.5\,cm.$
Perimeter of an Equilateral Triangle
Perimeter of an equilateral triangle is equal to $3\times$ side.
Perimeter of the triangle $ABC=AB+BC+CA$
In an equilateral triangle all sides are equal. Therefore, $AB=BC=CA$.
Thus the perimeter of the equilateral triangle $ABC=AB+AB+AB$ $=3\times AB$
AB is a side of the equilateral triangle ABC.
Therefore, perimeter of an equilateral triangle $=3\times$ side.
Find the perimeter of the following triangle:
Explanation
Perimeter of an equilateral triangle $=3\times$ side. In the triangle ABC, perimeter of the triangle $ABC=3\times AB$. Since $AB=BC=CA=4\,cm$, the perimeter of the triangle ABC $=3\times 4\,cm$ $=12\,cm$.
Perimeter of an Isosceles Triangle
Perimeter of the triangle $XYZ=XY+YZ+ZX$
An isosceles triangle has two equal sides. In the triangle $XYZ$, $XY=XZ$.
Thus the perimeter of $XYZ=XY+XY+YZ$ $=2\times XY+YZ$. Here $XY$ is one of the equal sides. Therefore, perimeter of an isosceles triangle $=2\times$ length of one of the equal sides + length of the unequal side.
Find the perimeter of the following triangle:
Explanation
Perimeter of an isosceles triangle $=2\times$ one of the equal sides + the unequal side. In the triangle $XYZ$:
Perimeter of the triangle $XYZ=2\times XY+YZ$. $XY=ZX=3\,cm$ and $YZ=4\,cm$; therefore,
perimeter of the triangle $XYZ=2\times 3\,cm+4\,cm$ $=10\,cm.$
Perimeter of Scalene Triangles
Perimeter of the triangle $PQR=PQ+QR+RP$
All the sides of a scalene triangle are of different lengths. Therefore, perimeter of the triangle $PQR=PQ+QR+PR$. Thus the perimeter of a scalene triangle = sum of the lengths of all three sides.
Find the perimeter of the following triangle:
Explanation
Perimeter of a scalene triangle = sum of the lengths of all sides. In triangle PQR:
Perimeter of the triangle PQR = PQ + QR + RP. PQ = 5 cm, QR = 5.3 cm, and RP = 7 cm.
Therefore, perimeter of the triangle PQR = 5 cm + 5.3 cm + 7 cm = 17.3 cm.
In quadrilateral ABCD, perimeter = AB + BC + CD + DA
Thus the perimeter of a quadrilateral is the sum of the lengths of all its four sides.
Find the perimeter of the following quadrilateral:
Explanation
Perimeter of the quadrilateral ABCD = AB + BC + CD + DA = 3.5 cm + 2 cm + 4.5 cm + 2.5 cm = 12.5 cm.
Perimeter of Rectangles
In the given rectangle PQRS
Perimeter of the rectangle PQRS = PQ + QR + RS + PS
We know that opposite sides of a rectangle are equal
Therefore, PQ = RS
And QR = PS
Thus the perimeter of the rectangle
PQRS = PQ + QR + PQ + QR
= PQ + PQ + QR + QR
= 2 x PQ + 2 x QR
= 2(PQ+QR)
PQ is the length of the rectangle and QR is the breadth of the rectangle
Thus the perimeter of a rectangle = 2 (Length + Breadth).
Find the perimeter of the following rectangle:
Explanation
Perimeter of the rectangle ABCD = 2(Length + Breadth)
=2(AB+BC)
= 2 (8 cm + 6 cm) = 28 cm.
Perimeter of Squares
Perimeter of the square ABCD = AB + BC + CD + DA
We know that all sides of a square are equal AB = BC = CD = DA
Therefore, perimeter of the square ABCD = AB + AB + AB + AB = 4AB
AB is a side of the square
Thus the perimeter of a square $=4\times$ side.
Find the perimeter of a square whose side is 7 cm long.
Solution:
Perimeter of a square $=4\times$ side
$=\text{ }4\times 7\text{ }cm=28\text{ }cm$
Perimeter of a Circle
A circle is made up of a curved line. To find the length of that curved line we multiply the diameter of the circle by the constant $\pi$. The value of $\pi$ (pi) is $\frac{22}{7}$.
Thus, perimeter of a circle $=\pi \times$ Diameter
Or perimeter of a circle $=\pi \times 2\times$ Radius (Diameter $=2\times$ radius)
Perimeter of the circle $=\pi d$
Or perimeter of the circle $=2\pi r$
If the radius of a circle is 3.5 cm, find the perimeter of the circle.
Solution:
Perimeter of the circle $=2\pi r$
$=2\times \frac{22}{7}\times 3.5=22\,cm$
https://www.studyadda.com/sample-papers/11th-class/english/english-olympiad-sample-paper-3/790 | # 11th Class English Sample Paper
Below given is a letter to the Editor of a newspaper with four blanks, regarding an incorrect news item published. Fill those blanks with the options provided in P, Q, R, S to make the letter readable and sensible.
To
The Editor
Hindustan Times
New Delhi
Sub: Amending the news about theft in Col. 4, Page 8, of your daily on dt. 2nd December, 2011
Sir, I respectfully ______I______ before republishing it. According to your staff correspondent, ________II________ from the house. The thieves entered the house due to the wrong bolt at the door. But actually the thieves came to our house through the open window of the kitchen. They __________III_________ the cupboard. Further, no money has been refunded to me while your report says that a sum of five thousand has been recovered by the police. So, I hope that you would rectify the mistakes in the report such that _______IV_______ with the report.
Yours faithfully,
Charls Richard
P. The police investigation is matched properly
Q. ran off with Rs. 70000/- (Seventy Thousand Rupees) in cash and some golden Jewellery from
R. The thieves came to our house at midnight and stole about five thousand rupees in cash and some golden Jewellery
S. Invite your attention to the above referred news and request you to make a correction therein
Choose your option
A) PQRS
B) SRQP
C) QRSP
D) RPSQ
E) None of these
Direction: Give the opposite meaning of the words underlined in the sentences given in question. Though his view is correct, his behavior was impertinent.
A) Healthy
B) Respectful
D) Smooth
E) Impressive
Direction: Give the opposite meaning of the words underlined in the sentences given in question. It is a herculean task for me.
A) Indecent
B) Puny
C) Ponderous
D) Big
E) None of these
Below given is a sentence in four parts. One of the parts contains a grammatical error. Find the part. Satyajit Roy, who (I) /conceived, co-authored (II) /and directed a number of good films, was (III)/one of the India's most talented film maker. (IV)
A) I
B) II
C) III
D) IV
E) None of these
Find the analogy. Illiteracy : Education : : Flood: ?
A) Rain
B) Bridge
C) Dam
D) River
E) None of these
Fill in the blanks with appropriate option. Virendra Sehwag ________another feather ______his cap by his double century in one day match.
B) Created, by
C) Took, in
D) Kept, in
E) None of these
Change the voice. Why do you waste time?
A) Why is time wasted by you?
B) Why is time been wasted by you?
C) Why has time been wasted by you?
D) Why is time being wasted by you?
E) None of these
Direction: Give the synonym of the words underlined in the sentences given in question. The teacher drew examples copiously from various books.
A) Largely
B) Continuously
C) Plentifully
D) Completely
E) None of these
Direction: Give the synonym of the words underlined in the sentences given in question. Corruption stalks every sphere of national life.
B) Penetrates
C) Pollutes
D) Poisons
E) None of these
Arrange P, Q, R, S to make a meaningful sentence. After the awarding speeches P: the Prize given Q: and R: had been made S: I got up to give my address in reply
A) RSQP
B) RQPS
C) SPQR
D) SRQP
E) None of these
Direction: Read the statements given here with to decide their sequences for a meaningful passage and then answer the questions that follow. I. She decided to go to the school and meet the principal. II. Suddenly she realized that she had no money with her. III. By the time she reached there, he had left the office. IV. Therefore, she decided to go to the office of Jina's father and get the money. V. Mrs. Reed wanted her daughter Jina to get admission in a public school. Which sentence should come third in the paragraph?
A) I
B) IV
C) II
D) V
E) None of these
Direction: Read the statements given here with to decide their sequences for a meaningful passage and then answer the questions that follow. I. She decided to go to the school and meet the principal. II. Suddenly she realized that she had no money with her. III. By the time she reached there, he had left the office. IV. Therefore, she decided to go to the office of Jina's father and get the money. V. Mrs. Reed wanted her daughter Jina to get admission in a public school. Which sentence should come last in the paragraph?
A) III
B) V
C) IV
D) I
E) None of these
Direction: Give one word substitutions for the given sentences. Gift left by will.
A) Alimony
B) Parsimony
C) Legacy
D) Property
E) None of these
Direction: Give one word substitutions for the given sentences. Things that can be felt or touched.
A) Pandemic
B) Palpable
C) Paltry
D) Panchromatic
E) None of these
Two sentences with homonyms (underlined) are given below. Read both the sentences carefully and decide in which of the sentences the use of homonym is correct. I: You should not interfere in one's personal affairs. II: The personals of ICS were proud of their positions during the British rule in India.
A) I is correct
B) II is correct
C) Both I&II is correct
D) Both I &II is incorrect
E) None of these
Give the kind of noun for the word underlined in the sentence below. The girl was donning a garland of beads.
A) Proper noun
B) Common Noun
C) Collective Noun
D) Material Noun
E) None of these
Fill in the blank with correct pronoun. One must do ________duty to one's country.
A) His
B) One's
C) theirs
D) All of these
E) None of these
Fill in the blank with correct form of verb. It is high time that you _____home.
A) Go
B) Have gone
C) Went
D) Going
E) None of these
• In which of the following sentences has an infinitive been correctly used?
A) Do you know to play the harmonium?
B) You had better to stop taking the medicine with harmful side effects.
C) Planned to not go on a vacation this year.
D) The teacher instructed the students to go.
E) None of these
The underlined word in the given sentence is a: She hesitates singing in the company of unknown persons.
A) Noun
B) Gerund
D) Auxiliary Verb
E) None of these
Give the correct question tag for the sentence given below. If you come across my umbrella anywhere, bring it to me; ________?
A) Can you?
B) Don't you?
C) Will you?
D) Isn't it?
E) None of these
In this question two statements are given. On the basis of these statements select the correct option from the given choices. Statements: I. The Reserve Bank of India has recently put restrictions on a few small banks in the country. II. The small banks in the private and co-operative sector in India are not in a position to withstand the competition of the bigger banks in the public sector. Choose your option
A) Statement I is the cause and statement II is its effect.
B) Statement II is the cause and statement I is its effect.
C) Both the statements I and II are independent causes.
D) Both the statements I and II are effects of independent causes.
E) None of these
• Pick the odd one out.
A) 1st June 1999
B) 29th February 2000
C) 29th April 1998
D) 31st June 1999
E) None of these
Arrange P, Q, R, S between $S_1$ & $S_6$ to make a meaningful paragraph. $S_1$: There has been an alarming increase in the number of vehicles on Delhi roads. P: The pedestrian has, however, been the worst sufferer. Q: There is no place where the pedestrian can move freely without the fear of traffic. R: Zebra crossing like the pavements are no longer safe. S: This has further aggravated the problem of pollution in the city. $S_6$: Should the pedestrians' case be allowed to go by default?
A) SPRQ
B) SRQP
C) PQRS
D) PRSQ
E) None of these
Choose the appropriate filler for the blank. The notice at the petrol pump should be___.
A) All engines need to be switched off
B) All engines have to be switched off
C) All engines must have to be switched off
D) All engines must be switched off
E) None of these
Judge the right word for the underlined portion in the given sentences. They have been accused of inciting the crowd to protest.
A) Instigating
B) Urging
C) Stirring
D) Provoking
E) None of these
• Pick the sentence which is grammatically correct.
A) Common people are rather impressed by the style of a speech than by its substance.
B) I was rather impressed by the manner of the speaker than by his matter.
C) Fortunately, in thirty seven bomb- blasts, only five lives were lost.
D) It is too hard an essay for me to far attempt.
E) None of these
Fill in the blank with an appropriate conjunction. Leave on time, ______ you would miss the train.
A) Lest
B) Else
C) Unless
D) If
E) None of these
A) SRQP
B) SPQR
C) SQRP
D) PQRS
E) None of these
Fill in the blank with appropriate preposition. _________being an engineer, he is a very good singer.
A) Beside
B) Beyond
C) Besides
D) Along
E) None of these
Change the direct sentence into indirect one. The sage said, "God helps those who help themselves."
A) The sage said that God helps those who help themselves.
B) The sage said that God helped those who helped themselves.
C) The sage said that God help those who helped themselves.
D) The sage said the God helped those who help themselves.
E) None of these
Direction: Choose the option that best expresses the meaning of the idioms/ phrase given below. All and sundry
A) Greater share
B) All of a sudden
C) Completion of work
D) Everyone without distinction
E) None of these
Direction: Choose the option that best expresses the meaning of the idioms/ phrase given below. A green horn
B) An inexperienced man
C) A trainee
D) A soft-hearted man
E) None of these
Fill in the blank as per subject-verb agreement. Every boy and girl ________ready.
A) Was
B) Were
C) Are
D) All of these
E) None of these
• Find the sentence wherein an adverb has been wrongly used.
A) She writes very carefully.
B) He is too weak to walk.
C) His wife's rude behavior gives him too much pain.
D) He is quite all right,
E) None of these
Fill in the blank with correct determiners. Jenny has __________knowledge.
A) Some
B) Few
C) Little
D) Much
E) None of these
Below given is a paragraph with four blanks. Fill those blanks with the options provided in P, Q, R, S to make the paragraph a meaningful one. Traditions and rituals are ______I____. Even though the country has made remarkable growth in various fields, it is tragically representing the lowest sex ratio. Not only female feticide and infanticide, ________II_____ are also attached to the gravest concern of humanity. Lack of education holds the girl child to a low standard of living and provides inability to expose her skills and knowledge. According to Indian law, it is illegal to facilitate marriage of a girl under the age of 18 but hardly the rule is followed. Research done by UNICEF indicated _______ III_____. Despite of various promotional events, _______IV, _______, the social evil of ill-treating a girl child is still prevailing in the country. P: Illegal marriage of almost 82 percent of girls. Q: A series of other discrepancy, like lack of girl education, lack of nutrition, early marriage and absence of basic necessities R: Outlining the survival of the girl child in India S: Government regulations and the increasing testimony of Indian women, Choose your option
A) PQRS
B) QRSP
C) RSPQ
D) RQPS
E) None of these
Direction: Read the passage carefully and answer the questions that follow: Hibernation is one of the main adaptations that allow certain northern animals to survive long, cold winters. Hibernation is like a very deep sleep that allows animals to save their energy when there is little or no food available. The body functions of 'true hibernators' go through several changes while they are hibernating. Body temperature drops, and the heart rate slows. For example, the woodchuck. Animals, such as the skunk and raccoon, are not considered true hibernators, as they wake up in the winter to feed, and their body functions do not change as much. Since they only sleep for a little bit at a time, the term 'dormancy' or 'light sleeping' is used to describe their behaviour. The largest animals to hibernate are bears. Hibernating animals also have a special substance in the blood called hibernation inducement trigger, or HIT. This substance becomes active in the fall, when the days become cooler and shorter. The main reason behind the hibernation of animals is:
A) To save their energy
B) To keep themselves from enemies
C) To go to long deep sleep
D) To survive long, cold winters
E) None of these
Direction: Refer to the passage above. Which one of the following is true regarding the body function changes of an animal when it hibernates?
A) They can live without food for long time
B) Their heart rate slows
C) There is a rise in their body temperature
D) They are able to wake up quickly
E) None of these
Direction: Refer to the passage above. Raccoons and skunks are not 'true hibernators'. Why?
A) They remain awake for long period of time
B) Their heart beats slow-down from usual 40 to 50 beats
C) Their body temperature does not change at all
D) They wake up to feed in the winter
E) None of these
https://web2.0calc.com/questions/what-is-the-probability-that-7-people-can-seat-with-a-certain-3-people-not-in-a-consecutive-order
# What is the probability that 7 people can seat with a certain 3 people not in a consecutive order?
What is the probability that 7 people can seat with a certain 3 people not in a consecutive order?
Guest Nov 21, 2014
#3
What is the probability that 7 people can seat with a certain 3 people not in a consecutive order?
Let's say those 3 people are glued together.
Then it would be like seating 4 people.
Sorry, I can't count - it would be 4 people plus the conjoined triplets, that is 5 'bodies'. I will correct it from here.
There are 5! ways of sitting 5 people in a row = 120 ways.
Now those 3 people can be joined together in any order, so there would be 3! ways = 6 ways.
So there are 120*6 = 720 ways those 3 people can be seated together.
Now, there are 7! ways of seating 7 people in a row. That is 5040 ways.
5040 - 720 = 4320, so there are 4320 ways to seat everyone without those 3 individuals sitting together.
So the probability that they will not be sitting together is 4320/5040.
$$\frac{4320}{5040} = \frac{6}{7} \approx 0.857142857$$
Now it is a running joke around here - none of us are really probability experts. So don't trust me too much :))
CPhill's answer and mine now agree. I believe this answer is correct.
Melody Nov 21, 2014
#1
Are they sitting in a row or around a table?
Can 2 of those people sit together?
Melody Nov 21, 2014
#2
In a row, I guess.
Guest Nov 21, 2014
#4
I honestly don't get this......
happy7 Nov 21, 2014
#5
Very much appreciated the effort :) I'm having problems trying to analyze probability problems, and btw thank you again :) To follow up with a question: the 3 guys cannot sit with each other, right?
yuhki Nov 21, 2014
#6
If there are 4!*3! = 144 total ways those guys can sit, why subtract it from 7!, when only those 3 guys don't want to sit with the others? I'm just new at this - sorry for asking things like this.
yuhki Nov 21, 2014
#7
Here's my contribution to this ....
The total arrangements for any 7 people sitting in a row is 7! = 5040
We can seat the 3 people together in five different ways.....(seats 1 - 5)
And for each of these there are 3! ways they can be arranged
So, the total number of ways they can sit together is:
(5 * 3!) = 30
So....the probability that they don't sit together is
1 - ( the probability they do sit together) =
1 - 30/5040 = 0.994047619047619 = about 99%
I don't know if this is correct......if anyone spots a flaw in my logic.....I'll be glad to know!!!
CPhill Nov 21, 2014
#8
You never need to apologise for admitting that you do not understand - we want you to learn, not to pretend to learn.
There are 144 ways those people can sit together. BUT the question asks you how many ways they can be seated so that they are NOT together. That is why I subtracted from the total number of seating arrangements possible, which is 7!.
By the way, on my approach any 2 of those guys can sit together, just not all 3 of them. :)
Melody Nov 21, 2014
#9
I see :) I think I kinda get it now. What if those 3 guys cannot sit with each other in any way - what would the solution be if each of the 3 doesn't want to be seated next to the others? What would you subtract from the total?
yuhki Nov 21, 2014
#10
I had a post agreeing with you Chris, but it has disappeared and I have changed my mind.
I think I still like my answer. There are only 5 places those 3 can sit, that is true, but you need to take into account where everyone else is sitting as well. Anyway, I am too tired to do more now. See you tomorrow. :))
Melody Nov 21, 2014
#11
Mmmm...I see what Melody is saying.....I think that we would need to account for the other people, as well!!
For each arrangement of the three sitting together, there are 4! ways of arranging the other 4 people.
So, the total arrangements are 30*4! = 720
So
The probability of them not sitting together is
1 - 720/5040 = about 85.7%
I may think on this one some more !!!
CPhill Nov 21, 2014
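A quick Monte Carlo sanity check of the 6/7 result, sketched in R (persons 1-3 stand in for the certain 3 people):

```r
set.seed(1)
apart <- replicate(1e5, {
  seats <- sample(7)               # a random seating of 7 people in a row
  pos <- sort(match(1:3, seats))   # positions of persons 1-3
  !(pos[2] == pos[1] + 1 && pos[3] == pos[2] + 1)
})
mean(apart)  # close to 6/7 = 0.8571...
```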
#12
I don't get this....
What should I do? I'm done with my schoolwork for the entire day
happy7 Nov 21, 2014
https://www.physicsforums.com/threads/diffeomorphism-and-jacobian.360663/

# Diffeomorphism and Jacobian
1. Dec 5, 2009
### kof9595995
Why is a non zero jacobian the necessary condition for a diffeomorphism? How to prove it?
2. Dec 5, 2009
### Hurkyl
Staff Emeritus
A necessary condition, you mean.
Have you tried applying the definitions of any of the terms involved, and/or some basic structural theorems? By doing so, what sorts of equivalent statements were you able to produce?
3. Dec 5, 2009
### quasar987
If f is a diffeo, then f o f^{-1} = id. Differentiate both sides, put it in matrix form and take the determinant.
4. Dec 5, 2009
### kof9595995
Emm.... If I have a diffeomorphism f, that means f and f^-1 are both differentiable. If I can prove its differential is also invertible, then the Jacobian must be non-zero. Emm... then what I can think of is: is the differential of f^-1 equal to the inverse of the differential of f?
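Making the suggested computation explicit (a standard argument, spelled out for reference): differentiating $f(f^{-1}(x)) = x$ with the chain rule gives

$$Df\big(f^{-1}(x)\big)\, Df^{-1}(x) = I,$$

and taking determinants of both sides,

$$\det Df\big(f^{-1}(x)\big) \cdot \det Df^{-1}(x) = 1.$$

A product of two numbers can only equal 1 if neither factor is zero, so the Jacobian of $f$ cannot vanish, and indeed $Df^{-1}(x) = \big(Df(f^{-1}(x))\big)^{-1}$.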
5. Dec 5, 2009
### kof9595995
Thanks, now I get it.
http://mathoverflow.net/questions/125900/constructing-a-linear-ode-for-a-product-of-two-holonomic-functions-without-intro

# Constructing a linear ODE for a product of two holonomic functions without introducing additional singularities
A function $f$ is called holonomic if it satisfies some linear differential equation with polynomial coefficients $$p_n(x) f^{(n)}(x)+\dots+p_1(x)f'(x)+p_0(x)f(x)=0.$$ Now if $f,g$ are holonomic then so are their sum and product. To obtain a differential equation for $h=fg$, first observe that $$h^{(k)} = \sum_{i=0}^k {k\choose i} f^{(i)} g^{(k-i)}.$$ Since each $f^{(i)}$ is a linear combination of $f,f',\dots,f^{(n-1)}$ with rational coefficients (where $n$ is the order of the ODE satisfied by $f$), and analogously each $g^{(i)}$ is a linear combination of $g,g',\dots,g^{(m-1)}$, then $h,h',h'',\dots$ span a finite-dimensional vector space of dimension at most $d=mn$, and hence there exists a nontrivial relation of the form $$r_d(x)h^{(d)}+\dots+r_1(x)h'+r_0(x)h=0$$ with $r_i(x)$ rational functions.
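As a first-order warm-up for this construction: if $f' = a(x)f$ and $g' = b(x)g$ with rational $a, b$, then $h = fg$ satisfies

$$h' = f'g + fg' = \bigl(a(x) + b(x)\bigr)h,$$

so the singular points of the product equation lie among the poles of $a + b$. For higher-order equations the analogous elimination can also introduce new (apparent) singular points, as the following example shows.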
Now suppose that the ODE which $f$ satisfies has singular points $u_1,\dots,u_n$, while the equation for $g$ has singular points $w_1,\dots,w_m$. The ODE for $h$ might have additional singular points besides $u_1,\dots,u_n,w_1,\dots,w_m$. For example, if $$(x-a)f''(x)+cf(x)=0\\\\ (x-b)g''(x)+dg(x)=0$$ then $h=fg$ satisfies an ODE of order 4 with leading coefficient $$(x-a)^3 (x-b)^3 \biggl((c-d)x-(bc-ad)\biggr)h^{(4)}+\dots$$ (I calculated this using C. Mallinger's GeneratingFunctions Mathematica package).
Is it possible to construct a holonomic ODE for $h$ (in the above example and in general) without introducing additional singular points?
Hi Dima,
The operator you obtain by the algorithm you describe has the minimal possible order. You pay for the minimality of the order by having (in general) a nonminimal degree of the polynomial coefficients. You can turn the minimal-order operator into a minimal-degree operator if you are willing to pay the price of a higher order. There is in general no operator which has both order and degree as small as possible.
As long as you are only interested in the singularities, i.e., you are only concerned about avoiding extra factors in the leading coefficient, you can use a desingularization algorithm to turn the minimal order operator into one which has no unnecessary singular points. For differential operators, this is a classical technique, explained for example in the ODE book of Ince (Section 16.4). For recurrence operators, there is an algorithm by Abramov and van Hoeij (http://www.math.fsu.edu/~hoeij/papers/issac99/new.pdf).
If you are interested in minimizing the degree not only of the leading coefficient, but of all the polynomial coefficients in the operator simultaneously, see my joint paper with Chen, Jaroschek, and Singer, to appear on this year's ISSAC (http://www.risc.uni-linz.ac.at/people/mkauers/publications/jaroschek13.pdf).
Best regards, Manuel
Thanks Manuel! That kind of thing is exactly what I needed. – dima Apr 4 '13 at 11:19
The paper which may be relevant to this question is: MR0320402 Frank, Günter; Wittich, Hans Zur Theorie linearer Differentialgleichungen im Komplexen. Math. Z. 130 (1973), 363–370. (They consider the differential equations with only one singular point. But their results show that the answer may depend on the nature of the singularity).
Since my German is nonexistent, could you please pinpoint the location in the paper where they talk about the case of one singular point? Also, I couldn't figure out if they always assume that the solutions of the ODEs are entire functions... – dima Mar 31 '13 at 12:48
The paper is about entire solutions. The singular point is at infinity. They prove that the class of solutions of linear differential equations with polynomial coefficients and NO singular points in C (that is, the top coefficient is 1) is an algebra. That is, this class of functions is closed under addition and multiplication. This is a special case of your conjecture. They also say that if you consider entire coefficients (not necessarily polynomials), the class of solutions you obtain is not even closed with respect to addition. – Alexandre Eremenko Mar 31 '13 at 13:47
http://qurope.eu/db/publications/trace-distance-measure-coherence

## Trace-distance measure of coherence
Date:
2015-11-10 - 2016-01-12
Author(s):
Swapan Rana, Preeti Parashar, Maciej Lewenstein
Reference:
Phys. Rev. A 93, 012110
We show that the trace distance measure of coherence is a strong monotone for all qubit and so-called X states. An expression for the trace distance coherence for all pure states and a semidefinite program for arbitrary states are provided. We also explore the relation between the $l_1$-norm and relative entropy based measures of coherence, and give a sharp inequality connecting the two. In addition, it is shown that both $l_p$-norm and Schatten-$p$-norm based measures violate (strong) monotonicity for all $p \in (1, \infty)$.
https://wiki.math.ntnu.no/tma4100/2013h/tema/transcendentalfunctions

# Transcendental Functions
When applying calculus one often ends up dealing with so-called transcendental functions. These are functions that cannot be obtained as a fractional power of a rational function. The most prominent examples are the exponential, hyperbolic and trigonometric functions, along with their respective inverses. As these functions are ubiquitous in applications, one has much to gain by becoming familiar with their properties.
Topics
- Inverse Functions
The condition for an inverse to exist is that you don't "lose information" when applying the function: no two distinct points are mapped to the same point.
Definition 1: One-to-one
A function $f$ is said to be one-to-one if it maps different points to different points. Symbolically: $x_1 \neq x_2 \implies f(x_1) \neq f(x_2)$.
Definition 2: Inverse function
If $f$ is one-to-one then it has an inverse, written $f^{-1}$. The value of $f^{-1}(y)$ is defined to be the number $x$ in the domain of $f$ for which $f(x) = y$.
Differentiating inverse functions
As a consequence of the second definition we get the identities $f^{-1}(f(x)) = x$ and $f(f^{-1}(x)) = x$ when an inverse exists. Suppose the domain of $f$ is an interval, and that $f'$ exists and doesn't change sign on the interval. Then $f$ is either increasing or decreasing on that interval, therefore one-to-one, and thus there exists an inverse $f^{-1}$. Assuming that $f^{-1}$ is differentiable, we can find its derivative by differentiating both sides of the latter identity:
$f'(f^{-1}(x)) \cdot \frac{d}{dx} f^{-1}(x) = 1,$ implying $\frac{d}{dx} f^{-1}(x) = \frac{1}{f'(f^{-1}(x))}.$
(Actually one doesn't have to assume that $f^{-1}$ is differentiable, as one can show that the stated assumption on $f$ will automatically imply it.)
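A quick worked example of the formula: take $f(x) = e^x$, so that $f^{-1}(x) = \ln x$ for $x > 0$. Then $$\frac{d}{dx} \ln x = \frac{1}{f'(f^{-1}(x))} = \frac{1}{e^{\ln x}} = \frac{1}{x}.$$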
Relevant parts of the book: Section 3.1
Relevant examples: Differentiating an inverse function
Relevant videos: Exam 2009 problem 4
Relevant Maple worksheets: Inverse functions
Pencasts:
- Exercise 3.1:3
- Exercise 3.1:7
- Exercise 3.1:34
- Exponential and Logarithmic Functions
The following are some of the most important properties of exponential and logarithmic functions, and should be learned by heart. The results are stated for a general base $a>0$, but in practice one mostly uses the bases 2, $e$ and 10.
Laws of exponents
Let $a>0$. Then \begin{align} a^0 &= 1, & \qquad a^{x+y} &= a^x a^y, \\[1em] a^{-x} &= \frac{1}{a^x}, & \qquad a^{x-y} &= \frac{a^x}{a^y}, \\[1em] (a^x)^y &= a^{xy}, & \qquad (ab)^x &= a^x b^x. \end{align}
Laws of logarithms
Let $x, y, a, b>0$ and $a,b \neq 1$. Then \begin{align} \log_a 1 &= 0, & \qquad \log_a(xy) &= \log_a x + \log_a y, \\[1em] \log_a\left(\frac{1}{x}\right)&=-\log_a x, & \qquad \log_a \left(\frac{x}{y}\right) &= \log_a x-\log_a y. \\[1em] \log_a (x^y) &= y \log_a x, & \qquad \log_a x &= \frac{\log_b x}{\log_b a}. \end{align}
Important limits
Let $a > 1$. Then \begin{align} &\lim_{x\to -\infty}a^x = 0, & \qquad \lim_{x\to \infty}a^x = \infty, \\[1em] &\lim_{x\to 0+} \log_a x = -\infty, &\qquad \lim_{x\to \infty} \log_a x = \infty. \end{align}
Derivatives
Both the exponential and logarithm functions are differentiable, with derivatives $\frac{d}{dx} a^x = \ln(a) a^x,$ $\frac{d}{dx} \log_a x = \frac{1}{x \ln(a)}.$
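As an illustration, logarithmic differentiation combines these rules: for $y = x^x$ with $x > 0$, taking logarithms gives $\ln y = x \ln x$, and differentiating both sides yields $$\frac{y'}{y} = \ln x + 1, \qquad \text{so} \qquad \frac{d}{dx}\, x^x = x^x\, (\ln x + 1).$$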
Relevant parts of the book: Sections 3.2, 3.3, 3.4
Relevant examples:
- Manipulating logarithms
- Logarithmic differentiation
- Exponential cooling
Relevant Maple worksheets: Logarithmic functions
Pencasts: Exercises 3.4:1-3
- The Inverse Trigonometric Functions
Since the trigonometric functions are periodic, they are not one-to-one on the entire real line. One therefore has to restrict their domains suitably if one wants an inverse function.
Definition 9: The inverse sine function
The inverse of $\sin$, called $\arcsin$, is defined by $y = \arcsin(x) \Longleftrightarrow x = \sin(y) \ \mathrm{and} \ -\pi/2 \le y \le \pi/2.$ ($\arcsin$ is the inverse of the sine function with its domain restricted to $[-\pi/2,\pi/2]$.)
Definition 11: The inverse tangent function
The inverse of $\tan$, called $\arctan$, is defined by $y = \arctan(x) \Longleftrightarrow x = \tan(y) \ \mathrm{and} \ -\pi/2 < y < \pi/2.$ ($\arctan$ is the inverse of the tangent function with its domain restricted to $(-\pi/2,\pi/2)$.)
Definition 12: The inverse cosine function
The inverse of $\cos$, called $\arccos$, is defined by $y = \arccos(x) \Longleftrightarrow x = \cos(y) \ \mathrm{and} \ 0 \le y \le \pi.$ ($\arccos$ is the inverse of the cosine function with its domain restricted to $[0,\pi]$). An equivalent definition is $\arccos(x) = \pi/2-\arcsin(x)$.
Derivatives
$\frac{d}{dx} \arcsin(x) = \frac{1}{\sqrt{1-x^2}}$ $\frac{d}{dx} \arccos(x) = \frac{-1}{\sqrt{1-x^2}}$ $\frac{d}{dx} \arctan(x) = \frac{1}{1+x^2}$
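These formulas follow from the rule for differentiating inverse functions. For instance, with $y = \arcsin(x)$ one has $$\frac{d}{dx} \arcsin(x) = \frac{1}{\cos(\arcsin(x))} = \frac{1}{\sqrt{1 - x^2}},$$ where the positive square root is the right choice because $\cos(y) \ge 0$ for $y \in [-\pi/2, \pi/2]$.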
Relevant parts of the book: Section 3.5
Relevant Maple worksheets: Trigonometric functions
Pencasts: Exercise 3.5:11
- Hyperbolic Functions
The hyperbolic functions are combinations of exponentials, and as such natural counterparts of their trigonometric kin; each family of functions can be obtained from the other by a complex shift of variables, and—up to sign changes—all formulas for the trigonometric functions are valid also for their hyperbolic counterparts.
Definition 15: The hyberbolic cosine and hyperbolic sine functions
$\cosh(x) = \frac{e^x+e^{-x}}{2}$
$\sinh(x) = \frac{e^x-e^{-x}}{2}$
Definition 17: The hyperbolic tangent function
$\tanh(x) = \frac{\sinh(x)}{\cosh(x)} = \frac{e^x-e^{-x}}{e^x+e^{-x}}$
Derivatives
$\frac{d}{dx} \sinh(x) = \cosh(x)$ $\frac{d}{dx} \cosh(x) = \sinh(x)$ $\frac{d}{dx} \tanh(x) = \frac{1}{\cosh^2(x)}$
N.b. It can be shown that $\sin(x) = \frac{1}{2i} (e^{ix} - e^{-ix})$ and $\cos(x) = \frac{1}{2} (e^{ix} + e^{-ix})$, explaining the relationship between the trigonometric and hyperbolic functions.
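The hyperbolic analogue of the Pythagorean identity follows directly from the definitions: $$\cosh^2(x) - \sinh^2(x) = \frac{(e^x + e^{-x})^2 - (e^x - e^{-x})^2}{4} = \frac{4\,e^x e^{-x}}{4} = 1.$$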
Relevant parts of the book: Section 3.6
Relevant Maple worksheets: Hyperbolic functions
Pencasts: Exercise 3.6:5
https://stackabuse.com/how-to-create-c-cpp-addons-in-node
How to Create C/C++ Addons in Node
Node.js is great for a lot of reasons, one of which is the speed in which you can build meaningful applications. However, as we all know, this comes at the price of performance (as compared to native code). To get around this, you can write your code to interface with faster code written in C or C++. All we need to do is let Node know where to find this code and how to interface with it.
There are a few ways to solve this problem depending on what level of abstraction you want. We'll start with the lowest abstraction, which is the Node Addon.
An addon works by providing the glue between Node and C/C++ libraries. For the typical Node developer this may be a bit complicated as you'll have to get in to actually writing C/C++ code to set up the interface. However, between this article and the Node documentation, you should be able to get some simple interfaces working.
There are a few things we need to go over before we can jump in to creating addons. First of all, we need to know how to compile the native code (something Node developers happily forget about). This is done using node-gyp. Then, we'll briefly talk about nan, which helps with handling different Node API versions.
node-gyp
There are a lot of different kinds of processors out there (x86, ARM, PowerPC, etc), and even more operating systems to deal with when compiling your code. Luckily, node-gyp handles all of this for you. As described by their Github page, node-gyp is a "cross-platform command-line tool written in Node.js for compiling native addon modules for Node.js". Essentially, node-gyp is just a wrapper around gyp, which is made by the Chromium team.
The project's README has some great instructions on how to install and use the package, so you should read that for more detail. In short, to use node-gyp you need to do the following.
Go to your addon's root directory:

$ cd my_node_addon

Generate the appropriate build files using the configure command, which will create either a Makefile (on Unix) or a vcxproj (on Windows):

$ node-gyp configure
And finally, build the project:
$ node-gyp build

This will generate a /build directory containing, among other things, the compiled binary. Even when using higher abstractions like the ffi package, it's still good to understand what is happening under the hood, so I'd recommend you take the time to learn the ins and outs of node-gyp.

nan

nan (Native Abstractions for Node) is an easily overlooked module, but it'll save you hours of frustration. Between Node versions v0.8, v0.10, and v0.12, the V8 versions used went through some big changes (in addition to changes within Node itself), so nan helps hide these changes from you and provides a nice, consistent interface. This native abstraction works by providing C/C++ objects/functions in the #include <nan.h> header file.

To use it, install the nan package:

$ npm install --save nan

Then reference it in your binding.gyp file:
"include_dirs" : [
"<!(node -e \"require('nan')\")"
]
And you're ready to use the methods/functions from nan.h within your hooks instead of the original #include <node.h> code. I'd highly recommend you use nan. There isn't much point in re-inventing the wheel in this case.
Before starting on your addon, make sure you take some time to familiarize yourself with the following libraries:
• The V8 JavaScript C++ library, which is used for actually interfacing with JavaScript (like creating functions, calling objects, etc).
• NOTE: node.h is the default file suggested, but really nan.h should be used instead
• libuv, a cross-platform asynchronous I/O library written in C. This library is useful for when performing any type of I/O (opening a file, writing to the network, setting a timer, etc) in your native libraries and you need to make it asynchronous.
• Internal Node libraries. One of the more important objects to understand is node::ObjectWrap, which most objects derive from.
Throughout the rest of this section, I'll walk you through an actual example. In this case, we'll be creating a hook to the C++ <cmath> library's pow function. Since you should almost always be using nan, that's what I'll be using throughout the examples.
For this example, in your addon project you should have at least these files present:
• pow.cpp
• binding.gyp
• package.json
The C++ file doesn't need to be named pow.cpp, but the name typically reflects either that it is an addon, or its specific function.
// pow.cpp
#include <cmath>
#include <nan.h>
// Compute pow(base, exponent) for two numeric JavaScript arguments
void Pow(const Nan::FunctionCallbackInfo<v8::Value>& info) {
  // Validate the call: two arguments are expected...
  if (info.Length() < 2) {
    Nan::ThrowTypeError("Wrong number of arguments");
    return;
  }
  // ...and both of them must be numbers
  if (!info[0]->IsNumber() || !info[1]->IsNumber()) {
    Nan::ThrowTypeError("Both arguments should be numbers");
    return;
  }
  // Extract the arguments as C++ doubles
  double arg0 = info[0]->NumberValue();
  double arg1 = info[1]->NumberValue();
  // Call the native pow() and wrap the result in a V8 number
  v8::Local<v8::Number> num = Nan::New(pow(arg0, arg1));
  // Hand the result back to JavaScript
  info.GetReturnValue().Set(num);
}

// Register the Pow function on the module's exports under the name "pow"
void Init(v8::Local<v8::Object> exports) {
  exports->Set(Nan::New("pow").ToLocalChecked(),
               Nan::New<v8::FunctionTemplate>(Pow)->GetFunction());
}

NODE_MODULE(pow, Init)
Note there is no semicolon (;) at the end of NODE_MODULE. This is done intentionally since NODE_MODULE is not actually a function - it's a macro.
The above code may seem a bit daunting at first for those that haven't written any C++ in a while (or ever), but it really isn't too hard to understand. The Pow function is the meat of the code where we check the number of arguments passed, the types of the arguments, call the native pow function, and return the result to the Node application. The info object contains everything about the call that we need to know, including the arguments (and their types) and a place to return the result.
The Init function mostly just associates the Pow function with the "pow" name, and the NODE_MODULE macro actually handles registering the addon with Node.
The package.json file is not much different from a normal Node module. Although it doesn't seem to be required, most Addon modules have "gypfile": true set within them, but the build process seems to still work fine without it. Here is what I used for this example:
{
"version": "0.0.0",
"main": "index.js",
"dependencies": {
"nan": "^2.0.0"
},
"scripts": {
"test": "node index.js"
}
}
Next, this code needs to be built into a 'pow.node' file, which is the binary of the Addon. To do this, you'll need to tell node-gyp what files it needs to compile and the binary's resulting filename. While there are many other options/configurations you can use with node-gyp, for this example we don't need a whole lot. The binding.gyp file can be as simple as:
{
"targets": [
{
"target_name": "pow",
"sources": [ "pow.cpp" ],
"include_dirs": [
"<!(node -e \"require('nan')\")"
]
}
]
}
Now, using node-gyp, generate the appropriate project build files for the given platform:
$ node-gyp configure

And finally, build the project:

$ node-gyp build
This should result in a pow.node file being created, which will reside in the build/Release/ directory. To use this hook in your application code, just require in the pow.node file (sans the '.node' extension):
var addon = require('./build/Release/pow');
Node Foreign Function Interface
Note: The ffi package is formerly known as node-ffi. Be sure to add the newer ffi name to your dependencies to avoid a lot of confusion during npm install :)
While the Addon functionality provided by Node gives you all the flexibility you need, not all developers/projects will need it. In many cases an abstraction like ffi will do just fine, and typically require very little to no C/C++ programming.
ffi only loads dynamic libraries, which can be limiting for some, but it also makes the hooks much easier to set up.
var ffi = require('ffi');
var libm = ffi.Library('libm', {
'pow': [ 'double', [ 'double', 'double' ] ]
});
console.log(libm.pow(4, 2)); // 16
The above code works by specifying the library to load (libm), and specifically which methods to load from that library (pow). The [ 'double', [ 'double', 'double' ] ] line tells ffi what the return type and the parameters of the method are, which in this case is two double parameters and a double return value.
Conclusion
While it may seem intimidating at first, creating an Addon really isn't too bad after you've had the chance to work through a small example like this on your own. When possible, I'd suggest hooking in to a dynamic library to make creating the interface and loading the code much easier, although for many projects this may not be possible or the best choice.
Are there any examples of libraries you'd like to see bindings for? Let us know in the comments!
Last Updated: May 13th, 2016
http://datalab.lu/

# Applying Machine Learning to Peer to Peer Lending
Peer to peer lending allows individuals to lend money to unrelated individuals without going through a traditional financial service such as a bank, credit union, etc. Nevertheless, there is an intermediary - the service and platform provider. The provider verifies the identity of the borrower and his income status, processes the payments, promotes its platform, deals with bad loans or demands bankruptcy for the borrower.

The advantage of peer to peer lending is a lower interest rate for the borrowers and a higher rate for the lenders. However, the higher rate comes with higher risk - the return is more volatile than a bank deposit.

While there are many lending platforms such as Prosper.com (US), zopa.co.uk (UK), www.fundingcircle.com (UK), auxmoney.com (DE), pret-dunion.fr (FR), comunitae.com (ES), lendico.com (global), most platforms accept investments only from local investors. However, bondora.com (EE) allows investing in three markets - Estonia, Finland and Spain - and accepts investors from across Europe. Additionally, the rate of return at bondora.com is not fixed as on other platforms, which allows much higher returns. However, there is a possibility that you may lose some or all of your initial investment, as it is not protected by any financial compensation scheme.

The company behind bondora.com is isePankur AS, which is based in Estonia, a small country in the north of Europe. The really amazing thing about bondora.com is that they share their data with everyone. The data-set gives us an opportunity to glimpse at the performance of the company and a possibility to build our own credit scoring model!
The data goes back to 2009 and the chart below shows the total number of loans and funded loans per month. It looks like the business exploded in 2013 and the following charts will give us a few clues.
In 2013 bondora.com became more active in the Finnish and Spanish markets, though it was in 2014 when the growth skyrocketed in these markets.

Another big change, which might have ignited the growth of bondora.com, is the shift in the duration of the loans. The dominant loan duration before 2013 was 1-2 years, but in 2013 the company started issuing 5 year loans, which became the primary duration in 2014 and accounted for half of all loans.

The latest data-set has data about 29688 loans and 172 columns or features (depending on which parlance you prefer). Below you can see a partial screenshot of the interface and a dozen of the features.
### Model building
Now that we are familiar with bondora's data-set, let's move on to model building. A prediction model can predict two types of outcome - categorical (yes/no, true/false classes) or numerical (e.g. a value between one and one hundred). Although most credit scoring models are built to return a credit score for a borrower, I have opted for a simple model, where the outcome has two classes: good or bad.

The definition of the "good" class is straightforward - the class in which you are willing to invest or give credit - but the "bad" class definition is complicated. The data-set has data about the borrowers who were late with their payments for 7, 14, 21, 60 days, and about defaulted loans. How bad is a borrower if he is late by X days? True, he doesn't respect the schedule and the contract, for various reasons such as a harsh life, distraction or any other reason. However, once he is back on track he pays what he owes, plus late charges, which leads to a higher return for the additional risk.

Defaulted loans really sound like bad loans, right? Well, what if the loan defaults, but you get back the principal and partial interest? Doesn't sound that bad, does it? What you really don't like is a default on the loan with zero payments - these loans are the fraud and you want to avoid them. So let's mark them as the "bad" ones.

Besides choosing the outcome, it took me a while to realize another problem with the data-set - the shift in the business model. Nowadays most of the loans are issued for 5 years, and the data-set doesn't even have data on matured 5 year loans! So I did a trick - I marked as repaid all 5 year loans which are still "alive" after 2-3 years.

While working on a few machine learning projects I quickly learned that the biggest impact on the performance of a model comes not from fancy machine learning algorithms, but from well engineered features. In the chart below you can see that the 3rd feature is "total_interest", which is made of "Interest" and "LoanDuration". The two features perform well on their own, but the derived feature has a much bigger weight.

Additionally, I have added data about the VIX index. The index tracks the volatility of the stock market via the S&P500 index - its value increases during a crisis and falls back during calm times. By adding this independent source, the performance of the model increased by 2%.

After the initial cleaning of the data-set and the feature engineering, it was time to build a simple model. My favorite machine learning algorithm is Random Forest, for the following reasons: you can feed it almost any raw data and it chews it happily; the algorithm itself is easy to understand, even though it is kind of a black box; and it gives the weights of the features.
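To make this concrete, here is a minimal sketch of such a model in R; the data frame name loans, the outcome column bad and the VIX column vix are hypothetical placeholders, while Interest and LoanDuration are the data-set fields mentioned above:

```r
library(randomForest)
# Derived feature: interest accumulated over the whole loan term
loans$total_interest <- loans$Interest * loans$LoanDuration
set.seed(42)
fit <- randomForest(factor(bad) ~ total_interest + Interest + LoanDuration + vix,
                    data = loans, importance = TRUE)
varImpPlot(fit)  # shows the feature weights discussed above
```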
### Model metrics
In classification tasks, precision and recall are frequently used as model metrics. A predicted value can be assigned to one of four classes: True Positive - a real fraud (the model predicted True and the value was True), True Negative - not a fraud (the model predicted False and the value was False), False Positive - not a fraud marked as a fraud (the model predicted True, however the value was False) and False Negative - a real fraud marked as not a fraud (the model predicted False, but the value was True).

In the case of p2p lending, if you commit a False Positive error (Type I), you just miss one investment. Your main concern is False Negative (Type II) errors, because you will be losing money on the bad investments. The positive predictive value metric is used for performance evaluation: sum(True positive) / sum(Test outcome positive).
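In code, the metric is a one-liner; here is a sketch in R, with predictions and labels as logical vectors where TRUE means fraud:

```r
ppv <- function(pred, actual) {
  tp  <- sum(pred & actual)  # true positives
  pos <- sum(pred)           # everything the model flagged as fraud
  tp / pos                   # sum(True positive) / sum(Test outcome positive)
}
```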
### Blending
Funny, but blending works well with ML algorithms as well as with people. Michael Nielsen in his book "Reinventing Discovery: The New Era of Networked Science" gives many examples of how collective intelligence can be more powerful than a single mind. The idea is very simple - if you gather predictions from a dozen people or ML models, then the average score will be better than the best single prediction. There is one pitfall - if the predictions are given by a "herd mind", then the result will most likely be horrible. So, in an ML environment try to include very different algorithms - decision trees, linear models, neural networks, etc. - to sustain diversity.

For my final model I blend three algorithms - Random Forest, SVM and generalized boosted modeling (GBM). If all of them predict that the loan is not a fraud, then I will make the investment.
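A sketch of that unanimous vote in R, assuming three already-fitted models (the names rf_fit, svm_fit and gbm_fit are placeholders) whose class predictions are "good" or "bad":

```r
p1 <- predict(rf_fit,  newdata) == "good"
p2 <- predict(svm_fit, newdata) == "good"
p3 <- predict(gbm_fit, newdata) == "good"
invest <- p1 & p2 & p3  # invest only when all three models agree
```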
### Real time data
The modeling part is done, but without real time data it is just a waste of time. Fortunately, there is a nice Python framework for web crawling - Scrapy. The initial time investment in the tool might look significant, but because it is robust it won't be your concern any time soon, unless the platform gets a face-lift…
### Automated investment
Selenium is a suite of tools to automate web browsers across many platforms. It is widely used for web interface testing purposes, however any web-based task can be automated.

Once I have real time data, I feed my model with it to find out which loans are good for investment. The acquired list is sent to a Selenium script, which logs into the platform and makes the investments.
My first idea was to use a Raspberry Pi for the project, however I had problems setting up the R language and the Python frameworks. So, I rented an instance on DigitalOcean for 10 dollars a month (there is a 5$/month option as well) and it worked well. Meanwhile, I realized that I don't need the server 24 hours a day. The solution was to power it on four times a day and shut the server down once the job was finished. But as you probably know, powering off your server is not enough to save you from paying - you need to archive your virtual instance (the same applies to Amazon AWS). So, I came up with a script which creates a virtual image from the archive, powers it on, runs the crawler, runs the R mining module, makes the investments if necessary, and then shuts down the instance, archives the image and deletes the virtual machine. Does it sound good? Well, it was good until I went on holiday for one week with almost no access to the Internet. At the end of my vacation I found that every day four new servers had been created and were still spinning... It turned out that the Raspberry Pi was able to initiate new instances, but wasn't able to shut them down and delete them. The support at DigitalOcean asked for the log files from my server, and I ended up paying the bill, because logging was almost non-existent on my RaspPi. The lesson taken - log as much as possible and incorporate health checks for virtual machines in your scripts.

### Results

I started my investments at Bondora in September 2014. At the beginning, all my investments were done via Bondora's investment engine, where you can define investment parameters (country, risk profile, etc.). Somewhere in November - December I decided that I would rely on my own engine only. Below you can see that the engine based on machine learning algorithms does a good job of avoiding bad borrowers.
# Data Based Review of Strapless Mio Link HRM

Our bodies generate a lot of data - blood pressure, heart rate, the amount of glucose in the blood, etc. However, the struggle is how to collect that data. Until recently, heart rate monitors (HRM) were popular only among practitioners of endurance sports - runners, cyclists, swimmers, etc. Nevertheless, a strapless, wristband heart rate monitor seduces a new category of users - data-minded people who are interested in the quantified self movement. The Mio Link looks like a watch on your wrist, allowing you to wear it around the clock. Have you ever tried to wear a chest-based HRM outside a workout?

The technology behind the wristband HRM is quite simple - integrated LEDs beam light into the skin and the pulsing volume of the blood flow is collected. A wristband HRM might provide better accuracy than a chest-based HRM, because the latter is affected by lower air temperatures. Below you can see a data comparison between the Mio Link and a Garmin Soft Strap from one of my workouts:

There is some data discrepancy, but most importantly the peaks and valleys are intact. However, sports workouts are limited in time, and I wanted to know how a wristband HRM works around the clock. The second chart shows data recorded during the day. As you can see, the error rate (red color indicates potential errors) is much higher. This might be explained by the fact that my movements were not constant or repetitive, contrary to running or sleeping.

The third chart shows the data gathered during the night. The data bears some noise as well, but the spikes indicate shifts in data blocks and presumably body movements. It would be interesting to know if sleep stages can be extracted from heart rate data.

If you plan to use it on a daily basis, keep in mind that the battery lasts 8-10 hours. It might sound bad, but during the day you have plenty of time windows when you can charge the battery without losing much sensitive data. For example, while you sit in front of a computer your heart rate will most likely be low and stable, and that is a perfect time for charging. If the idea of a wristband HRM sounds appealing, besides the Mio Link you can check out the Scosche RHYTHM+ as well, which is an equivalent of the former.

# Credit Card Fraud Detection

During the last Data Science community meeting in Luxembourg, PhD candidate Alejandro Correa Bahnsen gave a presentation about Credit Card Fraud Detection (CCFD). In short - CCFD is just another machine learning problem, similar to the Network Security and Intrusion Detection (NSID) problem, but it has its own obstacles.

From a business perspective, a card fraud detection system directly impacts the profitability of all credit card operators, and such systems are therefore very desirable. The cost of card fraud in the UK alone in 2006 was estimated at ~500 million pounds. Assuming that a CCFD system could identify 30% of all fraud (churn models at telecoms are able to save that much) would lead to 150 million pounds of savings a year. Hence, we have a desirable product with a price range from 1.5 to 15 million pounds in the UK. Here is the catch - in any given country there are only a few credit card operators; for example, only CETREL operates in Luxembourg. So, if you want to sell a solution, you play an all-or-nothing game. Additionally, if you wish to build a CCFD system, or at least a prototype, you are most likely missing the data and you won't get it. A chicken and egg problem.

How might the data set look? Think of millions of rows where less than 0.002% of the data are fraudulent operations. If you train your model on the unadjusted data set, it will predict all future events as normal operations. In order to avoid such behavior, you need to throw away most of the data with normal operations and balance the data so that the distribution is 1% fraud vs 99% normal, or 5% vs 95% (a sketch of such downsampling follows below). You can play with the freely available [network intrusion data] to get an idea of what imbalanced data looks like.

Another thing to keep in mind is the size of the data set. Conventional wisdom says that the more data you have, the better the model you can build, but this is not true if you run a real-time system, where latency is a big deal. In such a case you have to think about something similar to a map-reduce framework, where you keep only the averages of the variables per client.

CCFD systems work as binary classifiers, where the response is either a fraud or a normal operation, meaning that they do not take into account how much the fraud costs. Alejandro tries to incorporate a loss-profit function, where each operation has its own cost. If you think about his approach, it sounds like a regression problem to me. And the last thing - I suppose it is worth trying to run an unsupervised learning system in parallel. An unsupervised CCFD would issue a lot of false alerts at the beginning; however, it would considerably improve over time with good feedback from the supervised CCFD.
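A minimal sketch in R of the downsampling mentioned above; the column name fraud is a hypothetical placeholder:

```r
# Keep every fraud row; downsample normal rows until fraud is `target` share
rebalance <- function(d, target = 0.05) {
  fraud  <- d[d$fraud == 1, ]
  normal <- d[d$fraud == 0, ]
  n_normal <- round(nrow(fraud) * (1 - target) / target)
  rbind(fraud, normal[sample(nrow(normal), min(n_normal, nrow(normal))), ])
}
```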
# Kaggle Challenge - Event Recommendation Engine

The Event Recommendation Engine Challenge was my second challenge at Kaggle, and I finished 15th out of 225 on the final (private) leaderboard. I was able to finish 1st on the public leaderboard. Believe it or not, the difference doesn't come from overfitting but rather from an external data source (Google Maps), which was forbidden.

I did read the rules, but such an important restriction was buried under an additional layer of rules which I didn't bother to read. So, the moral of the story - if you are doing well, then read the rules a second time. Nevertheless, it is strange: why didn't the host of the challenge preprocess the data and convert the locations of the users into latitude/longitude format? It would definitely lead to better models, as in my case such a conversion gave +4% in precision.

For this competition I used random forest almost exclusively and devoted all my time to feature derivation. For the final prediction I built three models and then combined them together:

```r
set.seed(333)
final_model3=randomForest(factor((interested-not_interested)/2+.5) ~ .,data=final_model,importance=TRUE,nodesize=4)
set.seed(33)
final_model1=randomForest(factor((interested-not_interested)/2+.5) ~ .,data=final_model,importance=TRUE,nodesize=4)
set.seed(3)
final_model2=randomForest(factor((interested-not_interested)/2+.5) ~ .,data=final_model,importance=TRUE,nodesize=4)
final_model=combine(final_model3,final_model1,final_model2)
```

Below you can find a chart with the most important features of my final model:

• time_diff - From the early beginning I found that the difference between when the event is scheduled to begin and when the user saw the event ad is an important feature which is easy to derive.
• popularity - How many users said they are interested in the event.
• start_hour - It turns out that it is important to know at what hour an event is going to begin.
• friends - The name of this feature might be misleading; nevertheless, it keeps how many user friends are invited to the event.
• joinedAt - The difference between the year when the user joined the service and 2000-01-01. I was surprised to find that such a feature has weight at all.
• timezone - Had a few NA values, which I replaced by 0. Then I converted the timezone numerical value into a factor of two hours: 14-12, 12-10, etc.
• birthyear - Numeric value.
• weekdays - On which weekday did the event happen (Monday, Tuesday, etc.)?
• friend_yes, friend_no, friend_maybe - The number of friends which are interested, not interested or maybe interested in the event.
• c_XXX - All c_xxxx features were used without preprocessing.
• locale - I used the first two letters of the locale variable.
• location_mat - Once I found that external sources such as Google Maps are forbidden, I tried to determine if the user location shares the same words with the event country, state and city descriptions. If it does, I would add +1 (max 3) to the location_mat variable.
• distance (forbidden) - The feature scored high, but I had to remove it. The first step was to obtain latitude and longitude for users with a known location. If you are interested, here is the source code which shows how easily it can be done in R. I need to say that I did manual and automated data cleaning - converted state names and some frequent errors like "City name 19". Once I had the coordinates of the users I was able to calculate the distance to the event. Then I used k-means to predict the user location for those who did not specify it, based on the friends' locations. For example, if 5 out of 8 friends of the user are based in Indonesia, then the user is given an Indonesia location and the distance to the event is calculated. Here's the source code for the prediction of user location.

Click here if you are interested in the source code of my solution.
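For reference, the great-circle distance behind the (forbidden) distance feature takes only a few lines of base R; coordinates are assumed to be in degrees:

```r
haversine_km <- function(lat1, lon1, lat2, lon2, R = 6371) {
  rad  <- pi / 180
  dlat <- (lat2 - lat1) * rad
  dlon <- (lon2 - lon1) * rad
  a <- sin(dlat / 2)^2 + cos(lat1 * rad) * cos(lat2 * rad) * sin(dlon / 2)^2
  2 * R * asin(sqrt(pmin(1, a)))  # distance in kilometers
}
```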
# Machine Learning for Hackers

Which way do you prefer to learn new material - deep theoretical background first and practice later, or do you like to break things in order to fix them? If the latter is your way of learning things, then most likely you will enjoy Machine Learning for Hackers. The book has chapters on machine learning techniques such as PCA, kNN and the analysis of social graphs, hence even advanced R users might find something interesting. So I want to show you my example of visualising the similarity between parliamentarians in Lithuania, an idea taken from chapter 9.

In most cases you should be able to get access to the voting results of the legislative body in your country. Nevertheless, the data can be buried in a "wrong" format such as HTML or PDF. I use the Scrapy framework to parse HTML pages; however, I faced a problem when my IP address was blocked due to many requests (10 000) within 2 hours. But in the cloud age the problem was quickly solved, and I added a delay to my crawler. Here is an example of the data in CSV format.

With the data in hand it was easy to proceed further. To find similarities between parliamentarians I took the voting results of approximately 4000 legislations and built a matrix, where rows represent parliamentarians and columns legislations. "Yes" votes were encoded as 1, "No" as -1 and the rest as 0. R has a handy function, dist, to compute the distances between the rows (parliamentarians) of a data matrix. The result of the function is one-dimensional data on the distances between parliamentarians; however, to reveal the structure of the data set we need two dimensions. Once again, R has a function, cmdscale, which does Classical Multidimensional Scaling (a minimal sketch of this pipeline is given below). I found this document very useful in explaining Multidimensional Scaling. Here is the final result:

The plot reveals that the right-wing party TSLKD has a majority in parliament, the LSDP (socialists) are in opposition and the liberals (LSF, JF, MG) are in the center. You might argue that this is already known; however, the plot is based on actual data, and therefore differences in voting support the outlooks of the parliamentarians (right, central, left). The map shows which members of a party are outliers and which ones from another party can be invited while forming a new parliament (the second round of the election is on the way). Members of the left wing are mixed up and it would make sense for them to merge or form a coalition.

Are you looking for the source code? Click here.
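The pipeline described above fits in a few lines of R; here is a self-contained sketch with random votes standing in for the real voting matrix:

```r
# Rows = parliamentarians, columns = bills, entries in {-1, 0, 1}
set.seed(7)
votes <- matrix(sample(c(-1, 0, 1), 50 * 4000, replace = TRUE), nrow = 50)
d  <- dist(votes)         # pairwise distances between parliamentarians
xy <- cmdscale(d, k = 2)  # classical MDS down to two dimensions
plot(xy, xlab = "", ylab = "", main = "Similarity between parliamentarians")
```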
# Garmin Data Visualization | Comments

People fly into a rage when governments initiate surveillance projects like CleanIT, yet they share very private data without a doubt. I have to admit that some data leaks are well buried in the process. Take for example Garmin, which produces GPS training devices for runners. In order to see your workouts you are forced to upload sensitive data to the internet. In return you are given a visualization tool and a storage facility. What are the alternatives? It seems that in the past there was a desktop version; however, I was not able to find it. So, we are left with the last option - hack it. First of all you need to transfer data from the Garmin device to a computer. I own a Forerunner 610, which relies on the ANT network, and I found a Python script which takes care of the data transfer. Once the data is transferred there is another obstacle - Garmin uses a proprietary format, FIT. To tackle this problem I use another Python script which I have adapted to output CSV format.

Once the data is in CSV format, R can be used to plot it. I had a lot of fun trying to understand Garmin's longitude and latitude coordinates. Here is a short explanation by Hal Mueller: The mapping Garmin uses (180 degrees to 2^31 semicircles) allows them to use a standard 32-bit unsigned integer to represent the full 360 degrees of longitude. Thus you get the maximum precision that 32 bits allows you (about double what you get from a floating point value), and they still get to use integer arithmetic instead of floating point.

# Building a Presentation, Report or Paper in R | Comments

If you need to build a presentation, you obviously have the following options:

• Powerpoint-like presentations
• Online engines
• LaTeX

The first two are beloved by business people and the third one is widely used in academia. The objective of the first group is a shiny presentation, contrary to the latter, where asceticism and the demand for automation are top priorities. However, if you are a data scientist or any other data specialist with a need to build an automated report, then you know that LaTeX is just wrong. LaTeX allows you to build a shiny presentation or an outstanding paper, but for beginners it can take ages to build something useful. If you have never tried LaTeX, here is an example of the monster - you literally have to code a document or presentation:

<code>\documentclass{article}
\title {Investment strategy}
\author {Dzidorius Martinaitis}
\begin{document}
\maketitle</code>

So, what do you do if you need only 1% of all LaTeX features and a report/document needs to be built automatically? It turns out that HTML's little brother Markdown is saving the world. Markdown (.md) source files are easy to read and easy to write, and you can convert them into .html, .pdf, .docx, .tex or almost any other format. There are many ways to do the conversion; I use the Pandoc utility. By the way, this post was written in Markdown in Vim and you can check the source file. However, the nicest thing about Markdown is its integration with R. You can build your report in one file, where R code is embedded in Markdown. The knitr package will convert the R code into Markdown simply by calling this piece of code:

<code>require(knitr); knit('workshop.Rmd', 'workshop.md');</code>

Below you will find an excerpt of an .Rmd file, which is a mix of R and Markdown:

<code>Get the data
===
Who is tweeting about #Haxogreen
{r results=asis,comment=NA, message=FALSE}
require(twitteR)
load('tweets.Rdata')
names=sapply(tweets,function(x)x$screenName)
rez=(aggregate(names,list(factor(names)),length))
rez=rez[order(rez$x),]
colnames(rez)=c('name','count')
options(xtable.type = 'html')
require(xtable)
xtable(t(tail(rez,6)))

Plot top10 tweeters
===
{r topspam, figure=TRUE,fig.cap=''}
barplot(tail(rez$count,10),names.arg=as.character(tail(rez$name,10)),cex.names=.7,las=2)
</code>
Here is a workshop presentation which contains the example above - I built it for the Haxogreen hackers camp, and the source code can be found on GitHub.
# Data Mining for Network Security and Intrusion Detection
In preparation for the "Haxogreen" hackers summer camp, which takes place in Luxembourg, I was exploring the network security world. My motivation was to find out how data mining is applicable to network security and intrusion detection.
The Flame virus, Stuxnet and Duqu proved that static, signature-based security systems are not able to detect very advanced, government-sponsored threats. Nevertheless, signature-based defence systems are mainstream today - think of antivirus and intrusion detection systems. What do you do when the unknown is unknown? Data mining comes to mind as the answer.
Data mining is or can be employed in the following areas: misuse/signature detection, anomaly detection, scan detection, etc.
Misuse/signature detection systems are based on supervised learning. During the learning phase, labeled examples of network packets or system calls are provided, from which the algorithm can learn about the threats. This is a very efficient and fast way to find known threats. Nevertheless, there are some important drawbacks, namely false positives, novel attacks and the complication of obtaining initial data for training the system. False positives happen when normal network flow or system calls are marked as a threat. For example, a user can fail to provide the correct password three times in a row, or start using a service in a way that deviates from their standard profile. A novel attack can be defined as an attack not seen by the system, meaning that the signature or pattern of such an attack has not been learned and the system will be penetrated without the knowledge of the administrator. The latter obstacle (the training dataset) can be overcome by collecting the data over time or relying on public data, such as the DARPA Intrusion Detection Data Set. Although misuse detection can be built with your own data mining techniques, I would suggest a well-known product like Snort, which relies on crowd-sourcing.
Anomaly/outlier detection systems look for deviations from normal or established patterns within the given data. In the case of network security, any threat will be marked as an anomaly. Below you can find a two-feature graph, where the number of logins is plotted on the x axis and the number of queries on the y axis. The color indicates the group to which points are assigned - blue ones are normal, red ones anomalies.
Anomaly detection systems constantly evolve - what was the norm a year ago can be an anomaly today. The algorithm compares network flow with historical flow over a given period and looks for outliers which are far away. Such a dynamic approach allows novel attacks to be detected; nevertheless, it generates false positive alerts (marking normal flow as suspicious). Moreover, hackers can mimic a normal profile if they know that such a system is deployed.
The first task when implementing anomaly detection (AD) is collection of the data. If AD is going to be network based, there are two possibilities for collecting aggregated data from the network. Some Cisco products provide aggregated data via the Netflow protocol. Alternatively, you can use Wireshark or tshark to collect network flow data from a computer. For example:
tshark -T fields -E separator=, -E quote=d -e ip.src -e ip.dst -e tcp.srcport -e tcp.dstport -e udp.srcport -e udp.dstport -e tcp.len -e ip.len -e eth.type -e frame.time_epoch -e frame.len
Once you have enough data for the mining process, you need to preprocess the acquired data. In the context of intrusion, anomalous actions happen in bursts rather than as single events. Varun Chandola et al. proposed deriving the following features (host based features use system calls, network based ones use packet information):
• Time window based: number of flows to unique destination IP addresses inside the network in the last T seconds from the same source; number of flows from unique source IP addresses inside the network in the last T seconds to the same destination; number of flows from the source IP to the same destination port in the last T seconds; number of flows to the destination IP address using the same source port in the last T seconds.
• Connection based: number of flows to unique destination IP addresses inside the network in the last N flows from the same source; number of flows from unique source IP addresses inside the network in the last N flows to the same destination; number of flows from the source IP to the same destination port in the last N flows; number of flows to the destination IP address using the same source port in the last N flows.
Below you can find an example of feature creation in R. The dataset was created by calling the tshark command specified above.
#load data (assumed filename; the tshark output from above saved as CSV)
tmp=read.csv('flows.csv',header=FALSE,stringsAsFactors=FALSE)
#get rid of everything below minutes in the timestamp
tmp[,10]=as.integer(as.POSIXct(format(as.POSIXct(as.integer(tmp[,10]),origin="1970-01-01"),"%Y-%m-%d %H:%M:00")))
#fix some rows: drop malformed source addresses and rows without a destination port
tmp=tmp[-which(sapply(tmp[,1],function(x) nchar(x)>15)),]
tmp=tmp[which(!is.na(tmp[,4])),]
#aggregate data by 5 mins; it assumes that the flow is continuous
require(plyr)
factor=as.factor(tmp[1:5000,10])
feature=do.call(rbind, sapply(seq(from=1,to=length(factor),by=4),function(x){
  #count packets (V11 is frame.len) per source/destination-port pair in each window
  return(list(ddply(
    subset(tmp,factor %in% levels(factor)[x:(x+4)]),.(V1,V4),summarize,times=length(V11),.parallel=FALSE
  )))
}))
After preprocessing the data we can apply local outlier detection, kNN, random forest and other algorithms. I will provide R code and a practical implementation of some algorithms in a following post.
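In the meantime, here is a minimal sketch of the anomaly detection idea with plain k-means, applied to the feature table built above (the choice of column and thresholds is illustrative only):

feature_mat=scale(as.matrix(feature$times))    # hypothetical: cluster on the flow counts
km=kmeans(feature_mat, centers=3)
dists=sqrt(rowSums((feature_mat - km$centers[km$cluster,,drop=FALSE])^2))
anomalies=which(dists > quantile(dists, .99))  # flag the 1% farthest from their centre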
While preparing this post I was looking for books, and I found only a few covering both data mining and network security. To my surprise, the book Data Mining and Machine Learning in Cybersecurity includes both topics and is well written. However, if you are a security specialist looking for data mining books, you can read my summary of "Data Mining: Practical Machine Learning Tools and Techniques".
# My First Competition at Kaggle
For me, Kaggle has become a social network for data scientists, as stackoverflow.com or github.com are for programmers. If you are a data scientist, machine learner or statistician, you had better have a profile there, otherwise you do not exist.
Nevertheless, I won't bet on the rosy future for data scientists that journalists suggest (the sexy job of the next decade). For sure, the demand for such specialists is on the rise. However, I see one big threat for data scientists - Kaggle and similar service providers. You see, such services allow companies to tap high-end data scientists (think PhDs in hard science) at a minuscule fraction of the real price. Think of the Hollywood business model - the top players get the majority of the pool and the rest are starving. If you try the same service model on IT projects you will most likely get burned. My reasoning can be wrong, but I suspect that project timespan is the issue - IT projects can take a while to finish (1-10 years), but a mainstream ML project won't take that long.
Notwithstanding these obstacles, machine learning, information retrieval, data mining, etc. are a must, together with the ability to write production code, deal with streaming big data and cope with the performance of an intelligent system. Then, in programmers' parlance, you will become a "data scientist ninja" and every company will die for you. There is a good post on the subject on the mikiobraun blog, but I warn you, it is a bit controversial.
Although for the last 4 years I have often been working on financial models and time series, this competition gave me new experience and a hunger for knowledge. During the competition I found this book very practical and full of ideas for what to do next: Data Mining: Practical Machine Learning Tools and Techniques. As a complementary book I used Data Mining: Concepts and Techniques, though most of the information can be found in either one of them. I will try to summarize some chapters in my own story.
Understanding the data. The "Online Product Sales" competition metadata (data about data) is miserly - there are three types of data (date fields, categorical fields and quantitative fields) plus the response data for the next 12 months. However, metadata is the most important element in any ML project; it can save you a lot of time once you understand it better, and it leads to a much better forecast if you have "domain knowledge".
Cleaning the data. There is a famous phrase, "garbage in, garbage out", meaning that before any further action you have to detect and fix incorrect, incomplete or missing data. You have many possibilities for dealing with missing data: remove all rows where the data is missing; replace it with the mean, a regressed value or the nearest value; etc. If your data is plentiful and the missing values are random (meaning that NA values do not bear any information), just get rid of them. Otherwise you need to impute new values based on the mean or another technique. Mean-based replacement worked best for me in this competition. Outliers are another type of trouble. Suppose that a variable is normally distributed, but a few values are far away from the centre. The easiest solution would be to remove such values - as many do in finance by removing the "crisis period". When the next crisis hits, the journalists rush to learn a new buzzword - black swan. It turns out that outliers can't be ignored, because their impact is huge. So be cautious while dealing with outliers.
Feature selection. It was surprising to me that too many features or variables can pollute a forecast, therefore you need to do feature selection. Such a task can be done manually by checking the correlation matrix, covariance, etc. However, random forest or generalized boosted methods can lead to a better selection. In R you just call randomForest() or gbm() and the job is done, as sketched below.
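For example, a minimal sketch with randomForest (assuming a data frame train with response column y):

<code>require(randomForest)
fit=randomForest(y ~ ., data=train, importance=TRUE)
importance(fit)   # per-feature importance measures
varImpPlot(fit)   # visual ranking of the features</code>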
Variable transformation - a way to get superior performance. The "Online Product Sales" competition has two date fields; however, these fields are encoded as integers. Transforming these variables into dates and retrieving the year and month led to better performance of the model. In most cases, taking the logarithm of numeric fields gives a performance boost. Scaling (from 0 to 1 or from -1 to 1) and centering (normal distribution) might be considered when linear models are in use. It is worth transforming categorical variables as well, where 1 would mean that a feature belongs to the group and 0 otherwise. Check the model.matrix function in R for the latter transformation and the preProcess function in the caret package for numerical variables.
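A sketch of these transformations (the column names date1, sales, category, x1 and x2 are hypothetical):

<code>train$date1=as.Date(as.integer(train$date1), origin='1970-01-01')  # integer to date
train$month=as.integer(format(train$date1, '%m'))
train$sales_log=log1p(train$sales)                      # log transform
dummies=model.matrix(~ category - 1, data=train)        # 0/1 dummy variables
require(caret)
pp=preProcess(train[,c('x1','x2')], method=c('center','scale'))
scaled=predict(pp, train[,c('x1','x2')])</code>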
Validation stage - helps you to measure the performance of the model. If you have a huge database for building a model, you can divide your set into two or three parts - for training, testing and cross-validation - and you are ready to start. However, if you are not so lucky, then other methods come into play. The most popular method is division of the set into two groups, namely "training" and "test", rotating them about 10 times. For example, if you have 100 rows, you take the first 75 for training and the last 25 for testing and you compute the performance ratio. In the next step you take rows 26 to 100 for training and use the first 25 for testing. Once you repeat such a procedure 10 times, you have 10 performance ratios and you take their average. Stratified sampling is a buzzword which you should know when you do the sampling. Keeping all this information in mind, I wasn't able to implement accurate cross-validation and my results differed within a 0.05 range.
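A minimal sketch of such a rotation (hypothetical data frame train with response y, RMSE as the performance ratio):

<code>folds=sample(rep(1:10, length.out=nrow(train)))
scores=sapply(1:10, function(k){
  fit=lm(y ~ ., data=train[folds!=k,])
  pred=predict(fit, newdata=train[folds==k,])
  sqrt(mean((train$y[folds==k]-pred)^2))   # RMSE on the held-out fold
})
mean(scores)</code>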
Model selection and ensembling. Intuitively you want to choose the best-performing algorithm; however, a mix of them can lead to superior performance. For the regression problem I trained four models (two random forest versions, gbm, svm), made the predictions, averaged the results, and that led to a better prediction.
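The averaging itself is one line (fit_rf, fit_gbm and fit_svm stand for hypothetical fitted models):

<code>pred=(predict(fit_rf,test) + predict(fit_gbm,test,n.trees=500) + predict(fit_svm,test))/3</code>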
# GitHub Data Analysis
A few weeks ago GitHub announced that its timeline data is available on BigQuery for analysis. Moreover, it offers prizes for the best visualizations of the data. Despite my art skills and minimal chances to win a beauty contest, I decided to crunch the GitHub data and run a data analysis.
After an initial trial of the BigQuery service, I found it hard to know what price, if any, I was going to pay for the service. Hence, I pulled the data (6.5 GB) from BigQuery onto my machine and used my machine for the analysis. Bash scripts were used to clean up and extract the necessary data, R for data analysis and visualization, and C++ for text extraction.
The GitHub dataset is one table, where each row consists of information about a repository (i.e. path, date of creation, name, description, programming language, number of forks/watchers, etc.) and an action which was done by a user (i.e. username, location, timestamp, etc.).
As a result, we can check how GitHub users' actions are spread over the day. The X axis on the graph below is labeled with the hours of the day (GMT) and the Y axis represents the median number of actions for each hour. From it, we can deduce that the highest load on GitHub can be expected between 15:00 and 17:00 GMT and the lowest between 05:00 and 07:00 GMT. The color of the line indicates how busy the day was, based on quantiles: green are calm days (20% quantile), blue normal days (50% quantile) and red busy days (80% quantile). I should mention that the autocorrelation (serial correlation) is high (70% for the following hour), which means that busy hours tend to be followed by busy hours and calm hours by calm hours. Moreover, busy days tend to happen after busy days.
The second graph below shows the median number of actions by weekday. There is no big surprise - weekends are slower than weekdays; nevertheless, programmers are slightly less productive on Mondays and Fridays.
The analysis of the creation of new repositories shows that the pattern of busy and calm hours remains stable over the years. This can be attributed to the fact that the majority of users come from North America and Europe. Another hypothesis can be drawn from this information: the number of newly created repositories grows exponentially. However, I warn you that the graph below is biased - most likely, GitHub users update recent projects, consequently more recent projects appear on the timeline. Even so, the years 2009-2011 show exponential growth. The X axis of the graph below is labeled with the hour of the day, the Y axis with the log of the median number of new repositories.
The following graph shows the number of forks per project (X axis, log scale) versus the number of watchers (Y axis, log scale). As expected, there is a linear correlation between forks and watchers. Even so, there is something interesting about the outliers below the bottom line - projects where the number of watchers is low but the number of forks is high. These are anomalies and worth checking.
The next thing to do is to look at the repository descriptions. Let's group the repositories by programming language and count the most dominant words in the descriptions. The graph below has the C++ word cloud on the left and Java on the right. C++ projects are about library, game, simple(?), engine, Arduino. Java is dominated by android, plugin, server, minecraft, spring, maven.
Ruby (left) vs Python (right):
"Surprise", "surprise" - R projects (left) are largely about data analysis; however, the word "machine", which corresponds to machine learning, is very tiny. Shell (right) is dominated by configuration, managing, git(?).
The GitHub dataset includes a location field. Unfortunately, users can enter whatever they want - country, city - or leave it empty. Nevertheless, I was able to extract a good chunk of actions where the location field has a meaningful value. The video below shows country-based user activity, where dark red corresponds to high activity and light red to minor activity. Only the 30 most active countries are included; the rest are grey. The same pattern persists over the days - activity in Asia increases around midnight, Europe wakes up around 8:00 or 9:00, while America starts around 15:00. Who said that hackers and programmers work at night?
What else can be done with the GitHub dataset? Most repositories have a description field, which can be used to find similar projects by implementing the tf-idf method. I tried that method and the results are satisfying.
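In case you want to try it yourself, here is a minimal sketch with the tm package (descriptions stands for a hypothetical character vector of repository descriptions):

<code>require(tm)
corpus=VCorpus(VectorSource(descriptions))
dtm=DocumentTermMatrix(corpus, control=list(weighting=weightTfIdf))
m=as.matrix(dtm)
norms=sqrt(rowSums(m^2))
sim=(m %*% t(m))/(norms %*% t(norms))   # cosine similarity between descriptions</code>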
Most of the graphs shown above are reproducible (except the word clouds) and the code can be found on GitHub.
https://ai.stackexchange.com/questions/7996/what-is-the-dimensionality-of-the-output-map-given-the-dimensionality-of-the-in | # What is the dimensionality of the output map, given the dimensionality of the input map, number of filters, stride and padding?
I am trying to understand the dimensionality of the outputs of convolution operations. Suppose a convolutional layer with the following characteristics:
• Input map $$\textbf{x} \in R^{H\times W\times D}$$
• A set of $$F$$ filters, each of dimension $$\textbf{f} \in R^{H'\times W'\times D}$$
• A stride of $$(s_x, s_y)$$ for the corresponding $$x$$ and $$y$$ dimensions of the input map
• Either valid or same padding (explain for both if possible)
What should be the expected dimensionality of the output map expressed in terms of $$H, W, D, F, H', W', s_x, s_y$$?
The output map has dimensions $$H'' \times W'' \times F$$, where $$H'' = \left\lfloor \frac{H - H' + 2p_{y}}{s_{y}} \right\rfloor + 1$$ and $$W'' = \left\lfloor \frac{W - W' + 2p_{x}}{s_{x}} \right\rfloor + 1$$, and $$p_{x}$$ and $$p_{y}$$ are padding values (equal on both sides). Valid padding corresponds to $$p_{x} = p_{y} = 0$$, while same padding chooses them so that $$H'' = \lceil H/s_{y} \rceil$$ and $$W'' = \lceil W/s_{x} \rceil$$. You can have different padding on the left and right side (similarly top and bottom). If the padding is equal on both sides, the equations have $$2*p_{x}$$ or $$2*p_{y}$$; otherwise you can just add the values of both paddings and replace them in the equations (for example $$p_{xLeft} + p_{xRight}$$).
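As a quick sanity check, the formulas can be evaluated directly; the snippet below is an illustration only (the function name conv_out is made up):

conv_out <- function(H, W, Hf, Wf, sx, sy, px=0, py=0) {
  c(h=(H - Hf + 2*py) %/% sy + 1, w=(W - Wf + 2*px) %/% sx + 1)
}
conv_out(28, 28, 5, 5, 1, 1)        # valid padding: 24 x 24
conv_out(28, 28, 5, 5, 1, 1, 2, 2)  # same padding with p=2: 28 x 28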
http://mathhelpforum.com/number-theory/82080-congruence-existence-problem.html | # Thread: congruence existence problem
1. ## congruence existence problem
Let $p$ be a prime and suppose $p-1=ab$ for some integers $a$ and $b$.
Prove that if $y^a\equiv1\pmod p$, then there exists $x$ such that $y\equiv x^b\pmod p$.
2. Originally Posted by siegfried
Let $p$ be a prime and suppose $p-1=ab$ for some integers $a$ and $b$.
Prove that if $y^a\equiv1\pmod p$, then there exists $x$ such that $y\equiv x^b\pmod p$.
Let $z$ be a primitive root modulo $p$ (we know it exists).
This means $y\equiv z^c(\bmod p)$ for some $1\leq c \leq p-1$.
Since $y^a\equiv 1(\bmod p)$, we get $(z^c)^a\equiv 1(\bmod p)$, that is, $z^{ac} \equiv 1(\bmod p)$.
Therefore, $p-1$ divides $ac$, thus $ab|ac \implies b|c$.
Therefore, $y\equiv z^c = \left( z^{c/b} \right)^b = x^b(\bmod p)$ where $x=z^{c/b}$.
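For a concrete instance of this argument, take $p = 7$ and $p - 1 = 6 = 3\cdot 2$, so $a = 3$, $b = 2$. The solutions of $y^3\equiv 1\pmod 7$ are $y\in\{1,2,4\}$, and these are exactly the squares modulo $7$ (since $1^2\equiv 1$, $3^2\equiv 2$, $2^2\equiv 4$). With the primitive root $z = 3$ we have $2 = 3^2$, so $c = 2$, $b\mid c$, and $x = 3^{c/b} = 3$ indeed satisfies $x^2\equiv 2\pmod 7$.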
3. what is a primitive root modulo p?
4. Here’s another proof.
Notice that $A=\{y\in\mathbb Z_p^\times:y^a=1\}$ is a subgroup of $\mathbb Z_p^\times=\{1,2,\ldots,p-1\}$ under multiplication modulo $p.$
Define a mapping $f:\mathbb Z_p^\times\to A$ by $f(x)=x^b.$ This makes sense because $\forall x\in\mathbb Z_p^\times,$ $(x^b)^a=x^{p-1}=1$ by Fermat’s little theorem, so that $x^b\in A.$
$f$ is clearly a homomorphism and so its image is a subgroup of $A.$ $\therefore\ \left|f\left(\mathbb Z_p^\times\right)\right|\leqslant|A|.$
Now $A$ consists of roots of the polynomial $x^a-1$ of degree $a$ in $\mathbb Z_p[x],$ and there cannot be more than $a$ such roots. $\therefore\ |A|\leqslant a$.
Similarly, the kernel of $f,$ $K=\{x\in\mathbb Z_p^\times:x^b=1\},$ consists of roots of the polynomial $x^b-1$ of degree $b$ in $\mathbb Z_p[x],$ and so $|K|\leqslant b$.
Now, by the homomorphism theorem, we have $\mathbb Z_p^\times/K\cong f\left(\mathbb Z_p^\times\right)$. And so we have
$ab=p-1=\left|\mathbb Z_p^\times\right|=\left|f\left(\mathbb Z_p^\times\right)\right|\cdot|K|\leqslant|A|\cdot|K|\leqslant ab$
Therefore we actually have $\left|f\left(\mathbb Z_p^\times\right)\right|=|A|,$ i.e. $f$ is surjective. This proves the theorem.
5. Originally Posted by JaneBennet
Here's another proof. […]
This is a pretty solution! It shows you've got to be creative if you insist on not using standard facts (here the one that ThePerfectHacker used, i.e. that $\mathbb{Z}_p^{\times}$ is a cyclic group).
http://wims.unice.fr/paper/wims/wims_2.html | # Interactive mathematics on the Internet
## 2. Capabilities of the system
### 2.1. Input capabilities
The types of user input Wims can accept are limited by the http protocol. They include multiple choices (menus), numerical data (real, complex or other), text data (equations, expressions, matrices) and plane coordinates (clicking on a picture). Multiple input data can be contained in one user request, via the html form protocol.
The input of geometric objects other than points is not supported by the ordinary html standard. Therefore this can only be achieved via text data or by using java applets.
The capability of Wims to process user input is limited only by that of the various software packages interfaced by it. In particular, unlike many other Internet exercise systems, where user answers are merely compared with stored standard answers, answers to exercises under Wims are mathematically analyzed, providing much greater freedom for exercise design. For example, we are able to create exercises whose good answers are not unique, or ones accepting multi-step answers. Sophisticated error analysis mechanisms can also be built into exercise modules, helping users to understand the reason for their failures.
Most exercises under Wims incorporate random parameters (numerical, functional or configurational), so that they are highly non-repetitive. They are usually also configurable by several configuration parameters, intended for teachers to set the level of difficulty of the exercises for their students (see Section 5).
Users on the Internet are often not very strict about syntax rules when they input mathematical expressions. For this reason, a special command (rawmath) is implemented in the Wims scripting language, which allows automatic correction of common "syntax errors" whenever there is no ambiguity about the intention of the user. For example, 2x is corrected to 2*x, sint is corrected to sin(t), (x-1)(x+1) is corrected to (x-1)*(x+1), etc. In ambiguous situations such as x2+1 or x^1/2, a warning message prompts the user to enter x^2+1 or x^(1/2) if that is what he means.
The algorithm of rawmath is constantly improved according to the real behavior of the users, as shown by the log files of the server. Without this command, as much as 30% of expression inputs were rejected due to syntax errors. Under the current version of rawmath, syntax errors are reduced to less than 5% of all expression inputs, of which only a small fraction is attributed to "failures" of rawmath, such as rarely used function names which are not recognized (for example csc).
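For illustration only (this is not the actual WIMS code), the flavour of such corrections can be sketched with a few regular expression substitutions in R:

rawmath <- function(expr) {
  expr <- gsub("([0-9])([a-zA-Z(])", "\\1*\\2", expr)                     # 2x -> 2*x
  expr <- gsub("\\)\\(", ")*(", expr)                                     # (x-1)(x+1) -> (x-1)*(x+1)
  expr <- gsub("\\b(sin|cos|tan|log|exp)([a-z])\\b", "\\1(\\2)", expr)    # sint -> sin(t)
  expr
}
rawmath("2x + (x-1)(x+1) + sint")   # "2*x + (x-1)*(x+1) + sin(t)"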
Wims systematically checks all user input fields for consistency of parentheses, and prompts the user for correction when unmatched parentheses are found.
### 2.2. Rendering of mathematical expressions
Currently Wims has several approaches for this purpose.
Common mathematical symbols (such as \pi, \beta, \sqrt, \leq, \int) can be directly inserted into the text, using inline directives with a syntax similar to TeX.
The command htmlmath takes an expression and renders it as well as possible using available html tags. It is fast, but the rendering is not very good due to limitations of the html language: 2/3 can only be rendered as 2/3, not as a stacked fraction. It is intended for rendering simple expressions.
Much more powerful is the command instex, which takes a TeX source and compiles it into a graphics file which is inserted in the place where the command is. This command has a mechanism to detect whether the source is constant or dynamic. For the former, the generated graphics file is stored in a permanent and session-independent place and reused for subsequent calls, in order to gain speed. At each subsequent call, the file time of the graphics file is compared to that of the file containing the command, and the graphics file is renewed if the latter is newer than the former.
If the source is dynamic (that is, if it contains a variable to be substituted), instex generates a temporary graphics file stored in the session's directory, which is never reused for subsequent calls. This guarantees that the formatted expression will follow the changes of the context. In order that the browser does not reuse the graphics file stored in its cache, a unique time stamp is appended to the address of the graphics file.
For the time being, the performance of dynamic instex is not satisfactory: a perceptible delay occurs even for one dynamic instex. In fact, the mechanism of instex is to transform the TeX source into a dvi file, then into a postscript file via dvips, and finally into a gif file using convert. The last step calls Ghostscript to interpret the postscript file, and it is here that the bulk of the delay takes place. Performance could be greatly improved if we had a driver transforming the dvi file directly into gif.
Wims also has a mechanism allowing the user to change the font size of TeX-related graphics (instex and inline mathematical symbols), in order to make it correspond to the font size of his browser. This mechanism is still rudimentary, in that for registered users the choice of font size is not permanently stored (so the user has to re-choose the font size at each login).
The MathML standard offers an extremely interesting possibility for mathematical rendering. Wims is prepared to support this standard, but it is not yet implemented because most currently used browsers do not recognize it.
Before we reach a stage where almost all installed browsers are MathML capable (which will not be the case for several years), the server must have a mechanism to detect MathML-capable browsers and send MathML code only when this is the case. This can eventually be done by analyzing the name and version of the browser contained in the HTTP_USER_AGENT environment variable, but the ideal solution would be for the http protocol to have a standard variable indicating whether the browser accepts MathML.
Once a MathML-capable browser is detected, both htmlmath and instex can be made to output MathML code, allowing MathML support without modification at the modules' level. What is needed here is to create routines generating MathML code from raw mathematical expressions and from TeX sources (planned in the development project of Wims).
One may also create a single output command, which accepts a mathematical expression in its raw form and automatically prepares an appropriate output (html, TeX, MathML) according to the circumstances.
We have yet to find time to implement a routine translating raw mathematical expressions into TeX source.
### 2.3. Dynamic animated graphics
Another particularity of the system is the capability to output animated graphics in a very convenient way. The command insplot accepts a raw mathematical expression and inserts a plot of the expression at the place where the command is. The expressions to be plotted can also include an animation parameter, in order to make insplot render animated sequences.
In a similar way, line graphics can be rendered via the command insdraw.
Wims has also an interface to Povray, allowing it to render ray-traced 3D graphics. See Section 3 for more details.
All these graphics insertions are dynamic. Static graphics can be inserted using ordinary html links.
### 2.4. Simultaneous accesses
The number of simultaneous accesses is limited only by the server's resources. This limit varies according to the speed of the server and the network connections, as well as the nature of the modules accessed. It ranges from 2 to 3 for very intensive requests (such as animated graphics) to more than 50 for usual exercises with no intensive computation nor TeX-formatted output, on a server equipped with a Pentium II-266 CPU. This indicates that, with normally distributed usage, one server site powered by a reasonably fast computer can satisfy the teaching requirements of several hundred students.
https://physics.stackexchange.com/questions/578447/energy-of-poissons-equation-when-viewed-as-a-dynamical-system | # “Energy” of Poisson's equation when viewed as a dynamical system
I was recently exposed to an interesting way to solve the 1-d Poisson equation in electrostatics $$\epsilon_0\frac{d^2\phi}{dx^2} = -\rho$$ for potential $$\phi$$ and charge density $$\rho$$. If the charge density has no explicit spatial dependence, we can introduce a function $$V(\phi)$$ that generates the charge density $$\rho(\phi) = \frac{dV}{d\phi}$$ Then multiplying Poisson's equation through by $$d\phi/dx$$, we can arrive at $$\frac{d}{dx} \left[ \frac{\epsilon_0}{2} \left( \frac{d\phi}{dx} \right)^2 + V(\phi)\right] = 0$$ which shows that the quantity in square brackets is a constant everywhere. Calling this constant $$H$$, we can find that $$x(\phi) = C \pm \int \frac{d\phi}{\sqrt{\frac{2}{\epsilon_0}[H - V(\phi)]}}$$ which gives the inverse of the potential profile. (Integration constants and sign TBD based on boundary conditions.)
I found this result really interesting and was curious if it generalizes to multi-dimensional problems as well. This led me to the formulation of electrostatics as a dynamical system with Lagrangian $$L(\nabla\phi, \phi, \vec x) = \frac{\epsilon_0}{2} \nabla\phi\cdot\nabla\phi-V(\phi)$$ for which Poisson's equation follows from Euler-Lagrange as $$\nabla\cdot\frac{\partial L}{\partial(\nabla\phi)} - \frac{\partial L}{\partial \phi} = \epsilon_0\nabla^2\phi + \frac{dV}{d\phi} = \epsilon_0\nabla^2 \phi + \rho = 0$$ By analogy with the classical mechanics of a particle in a potential, it would seem the "energy", $$H$$, in multiple dimensions should be $$H = \frac{\epsilon_0}{2} \nabla\phi\cdot\nabla\phi + V(\phi)$$ but it is not clear to me that this is actually a constant in the sense of $$\nabla H = 0$$, except in one dimension. Is this the wrong criterion for what constitutes a "constant" in multiple dimensions? Or is there no such constant in more than one dimension?
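Differentiating this candidate $$H$$ makes the difficulty explicit (summation over $$j$$ implied): $$\partial_i H = \epsilon_0 (\partial_j \phi)(\partial_i \partial_j \phi) + \rho\, \partial_i \phi$$ In one dimension this factors as $$\phi'(\epsilon_0 \phi'' + \rho)$$ and vanishes by Poisson's equation, but in higher dimensions the equation of motion only constrains the trace $$\nabla^2\phi$$ of the Hessian, so $$\nabla H$$ need not vanish in general.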
I feel like I am stumbling toward some basic result in classical field theory, but I cannot quite put my finger on it.
• I don't think it fully answers your question, but you may find this useful math.stackexchange.com/questions/39822/… – epiliam Sep 9 at 4:45
• Nice find! Taking the energy as a functional whose critical points solve Poisson's equation seems to be the key. Maybe the difference between 1d and higher dimensions is how explicitly you can move forward from there. – Endulum Sep 9 at 15:06
• It is an interesting question and the method from 1D doesn't seem to translate directly to higher dimensions. I have looked a lot at Poisson problems from a math view point - less from a physics view point - and if you have homogeneous BCs (either Dirichlet or Neumann), then you have by IBP that $\epsilon_0\int_\Omega \nabla \phi\cdot\nabla \phi =\int_\Omega\rho\phi$. The term on the left is always referred to as the energy norm. I think both quantities of that equation give the energy (perhaps scaled) and their equality is just conservation of energy. But that interpretation may be wrong. – epiliam Sep 9 at 23:03
http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=sm&paperid=8624&option_lang=eng | RUS ENG JOURNALS PEOPLE ORGANISATIONS CONFERENCES SEMINARS VIDEO LIBRARY PACKAGE AMSBIB
General information Latest issue Forthcoming papers Archive Impact factor Subscription Guidelines for authors License agreement Submit a manuscript Search papers Search references RSS Latest issue Current issues Archive issues What is RSS
Mat. Sb.: Year: Volume: Issue: Page: Find
Mat. Sb., 2017, Volume 208, Number 1, Pages 111–164 (Mi msb8624)
The asymptotic behaviour of the scattering matrix in a neighbourhood of the endpoints of a spectral gap
S. A. Nazarovabc
a Institute of Problems of Mechanical Engineering, Russian Academy of Sciences, St. Petersburg
b St. Petersburg State University, Department of Mathematics and Mechanics
c Saint-Petersburg State Polytechnical University
Abstract: The behaviour of the scattering matrix is investigated as the spectral parameter approaches an endpoint of a spectral gap of a quantum waveguide from the inside or the outside. The waveguide has two sleeves, one is cylindrical and the other periodic. When the spectral parameter traverses the spectral gap, the scattering matrix is reshaped because the number of waves inside and outside the gap is different. Notwithstanding, the smaller scattering matrix (in size) is transformed continuously into an identical block in the bigger scattering matrix and, in addition, the latter takes block diagonal form in the limit at the endpoint of the gap, that is, at the spectral threshold. The unexpected phenomena are related to the other block. It is shown that in the limit this block can only take certain values at the threshold, and taking one or other of these values depends on the structure of the continuous spectrum and also on the structure of the subspace of ‘almost standing’ waves at the threshold, which are solutions of the homogeneous problem that transfer no energy to infinity. A criterion for the existence of such solutions links the dimension of this subspace to the multiplicity of the eigenvalue $-1$ of the threshold scattering matrix. Asymptotic formulae are obtained, which show, in particular, that the phenomenon of anomalous scattering of high-amplitude waves at near-threshold frequencies, discovered by Weinstein in a special acoustic problem, also occurs in periodic waveguides.
Bibliography: 38 titles.
Keywords: junction of a cylindrical and a periodic waveguide, spectrum, threshold, gap, scattering matrix, asymptotic behaviour.
Funding: This research was carried out with the support of the Russian Foundation for Basic Research (grant no. 15-01-02175-a).
DOI: https://doi.org/10.4213/sm8624
English version:
Sbornik: Mathematics, 2017, 208:1, 103–156
UDC: 517.956.27+517.958:531.33
MSC: Primary 35J25; Secondary 35B25, 35P05
Citation: S. A. Nazarov, “The asymptotic behaviour of the scattering matrix in a neighbourhood of the endpoints of a spectral gap”, Mat. Sb., 208:1 (2017), 111–164; Sb. Math., 208:1 (2017), 103–156
https://atractor.pt/mat/alg_controlo/diedral_texto-_en.html | ## Identification numbers with check digit algorithms
### An optimal system: the Verhoeff Scheme
The purpose of check digits is to detect errors in the transcription of numbers with many digits. What features should such a code have? It must detect all singular errors (a mistake in a single digit) and all transpositions of adjacent digits (1), since about 90% of usual errors are of one of those two types. On the other hand, for the sake of ease of use, it is often preferable to have systems with identification numbers build only with digits $$0, 1, 2,..., 9$$, with no need of extra symbols, and with a unique check digit (2).
Most implemented codes (barcodes, the Identity Card, NIB, Visa cards, banknotes, etc.) are based on modular arithmetic, but none of them has both of the above properties. The NIB code needs more than one check digit, barcodes and Visa cards do not detect all consecutive transpositions, the BI code (if applied correctly) needs $$11$$ symbols (thus the digits $$0, 1, 2,..., 9$$ are not enough), and the Euro system does not even detect all singular errors. In fact, one can show that an error-detecting code based on Modular Arithmetic cannot satisfy both properties (1) and (2).
In 1969, the Dutch mathematician J. Verhoeff designed an "optimal" error-detecting code, in the sense that it satisfies both properties (1) and (2). It relies on "more sophisticated" concepts and ideas from Group Theory.
Let us see an example of a Verhoeff code that uses some Group Theory. Such a code could replace the one currently implemented in ID cards more effectively.
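To make this concrete, here is a sketch of the standard Verhoeff scheme in R (the textbook construction over the dihedral group $$D_5$$, not the code used on this site; the function name verhoeff_check is ours). Digits $$0,...,4$$ play the role of the rotations and digits $$5,...,9$$ of the reflections of a regular pentagon, and the tables are generated from the group structure rather than typed in:

# Multiplication table d of the dihedral group D5.
d <- matrix(0L, 10, 10)
for (i in 0:9) for (j in 0:9)
  d[i + 1, j + 1] <- if (i < 5 && j < 5) (i + j) %% 5 else
                     if (i < 5)          5 + ((i + j) %% 5) else
                     if (j < 5)          5 + ((i - j) %% 5) else
                                         (i - j) %% 5
# Inverse of each element: d[x, inv[x]] is the identity 0.
inv <- sapply(1:10, function(i) which(d[i, ] == 0) - 1L)
# The Verhoeff permutation and its powers: row k+1 of p is the k-th power.
base <- c(1, 5, 7, 6, 2, 8, 3, 0, 9, 4)
p <- matrix(0L, 8, 10)
p[1, ] <- 0:9
for (k in 2:8) p[k, ] <- base[p[k - 1, ] + 1]
verhoeff_check <- function(number) {
  digits <- rev(as.integer(strsplit(as.character(number), "")[[1]]))
  chk <- 0L
  for (i in seq_along(digits))   # position 0 is reserved for the check digit itself
    chk <- d[chk + 1, p[(i %% 8) + 1, digits[i] + 1] + 1]
  inv[chk + 1]
}

For instance, verhoeff_check(236) gives 3, so the protected identification number is 2363. Any mistake in a single digit of 2363, and any transposition of two adjacent digits, is detected, which is exactly the pair of properties (1) and (2) above.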
http://runescape.wikia.com/wiki/Weapon_Special_attack | ## FANDOM
38,841 Pages
"Spec" redirects here. For the shield, see Spectral spirit shield.
Members: Yes. Level: 10 Constitution. Adrenaline cost: varies. Cooldown: 0 seconds. Some of the most powerful weapons grant you unique special attacks.
Weapon Special attack is an ability that can only be used when specific weapons that have a special attack are equipped. The effect and adrenaline cost vary from weapon to weapon.
## Normal combat mode
To use the special attack of a weapon, the ability needs to be activated; it can be found in the Powers interface, under the Defensive tab, under the Constitution sub-tab. When wielding a weapon that has a special attack, the name of the ability changes to the name of that weapon's special attack; however, the icon always stays the same. Alternatively, one may click on the Adrenaline bar on the Action bar to use a special attack.
Items such as the Ring of vigour and the Asylum surgeon's ring can reduce the adrenaline needed to perform a special attack.
Each special attack requires a certain amount of adrenaline to use, which is also consumed on use. For example, the Korasi's sword special attack, Disrupt, may only be used at 60% adrenaline, and once it is used, 60% adrenaline is drained. There are no cooldowns for using most special attacks, one exception being the Zaros godsword's.
## Legacy combat mode
To use the special attack in Legacy combat mode, the Legacy interface mode also needs to be enabled; from there, the special attack bar can be found at the bottom of the Combat Settings menu.
Each special attack requires a certain amount of special attack energy to use, which is also consumed on use. For example, the Korasi's sword special attack, Disrupt, requires 60% special attack energy, and once it is used, 60% energy is drained.
### Ability damage in Legacy combat mode
Because ability damage does not exist in Legacy combat, an equivalent number is calculated and used for special attacks. This value is calculated as follows:
$D = 1.2 ad \times sm$
Where:
• $D$ is the result, worth 100% ability damage when using special attacks in Legacy
• $ad$ is the player's auto-attack damage with their main-hand or off-hand, as seen in the Loadout screen
• $sm$ is a multiplier that localises damage from weapons of different speeds:
$sm = \begin{cases} \frac{96}{149} \approx 0.644295 & \text{Average}\\ \frac{960}{1225} \approx 0.783673 & \text{Fast}\\ 1 & \text{Fastest}\\ \end{cases}$
If dual-wielding, the results from main-hand and off-hand are added together. Hence off-hands will increase the damage of main-hand special attacks, as they do in the modern combat system.
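For illustration, the calculation in R (the auto-attack damage values below are made-up examples, not taken from the game):

speed_mult <- c(average=96/149, fast=960/1225, fastest=1)
legacy_special_damage <- function(auto_damage, speed) 1.2 * auto_damage * speed_mult[[speed]]
# dual wield: main-hand and off-hand contributions are summed
legacy_special_damage(500, 'fast') + legacy_special_damage(250, 'fast')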
## Weapons that have a special attack
### Melee weapons
Abyssal whip / Lucky abyssal whip Energy Drain 50% Deals 100% weapon damage and steals all your opponent's run energy.
Abyssal vine whip Vine Call 60% Summons a vine near your opponent that will hit them 10 times for 125% of your accuracy, and 1/3rd of your strength. The attacks will only hit as long as your opponent is in range.
Ancient mace / Superior ancient mace Favour of the War God 100% Attack ignores protection prayers and will recharge Prayer points by 10% of the weapon damage while draining your opponent's Prayer points by the same amount.
Annihilation Gravitate 60% For 30 seconds all successful attacks on your opponent(s) add a stack (one stack per ability, or two stacks per Legacy Mode auto-attack), which increases your damage by 1% per stack (capping at 20%).
Barrelchest anchor Sunder 50% Swing the anchor with increased accuracy to deal 75-150% weapon damage and randomly reduce your target's Defence, Attack, Ranged or Magic level.
Bone dagger Backstab 75% Deals 50-200% weapon damage and lowers your target's defence in proportion to the damage dealt. Accuracy is increased by 100% when hitting unsuspecting targets.
Brackish blade Rum'ify 75% Doubles the chance of hitting and increases your Attack, Strength and Defence in proportion to the damage dealt.
Brine sabre Liquefy 75% Doubles the chance of hitting and increases your Attack, Strength and Defence in proportion to the damage dealt. Can only be used underwater.
Darklight Weaken 50% Strike your target reducing their Attack, Strength and Defence by 5%. Against demons, this is twice as effective.
Granite maul Quick Smash 50% Perform an instant attack, dealing up to 150% weapon damage and ignoring any cooldowns.
Keenblade Critical Strike 75% Strike the enemy with 25% increased accuracy for 100-150% weapon damage.
Korasi's sword Disrupt 60% Perform a reliable magic attack on all enemies around you.
Lava whip Get over Here! 75% Stuns and binds an NPC for 6 seconds. When used on a player, it will drag them to an adjacent tile near the user. This attack will work over blocked terrain (e.g. a pool of water), but will not work if the user cannot see the target (e.g. behind a wall). If a player is pulled from a single-way zone into a multiway zone in the Wilderness, then the multiway effect will be delayed by 10 seconds.
Noxious scythe Mirrorback 100% Summon a mirrorback spider to lower and reflect damage for 10 seconds.
Rune claw & Off-hand rune claw Impale 25% Impale your target for 25-135% weapon damage.
Statius's warhammer / Superior Statius's warhammer Smash 25% Deal up to 50% more damage and lower your opponent's defence by 30%.
Vesta's longsword / Superior Vesta's longsword Feint 25% Deal 100-250% damage with a greater chance to hit.
Vesta's spear / Superior Vesta's spear Spear Wall 50% Damages all targets adjacent to you and reflects 50% of any damage you receive back at attackers for 5 seconds.
Dragon weapons
Dragon claw & Off-hand dragon claw Slice & Dice 50% Unleash four brutal claw slashes upon your opponent.
Dragon dagger Puncture 25% Two quick slashes with increased accuracy and damage.
Dragon battleaxe Rampage 100% Do 20% more melee damage but reduce your hit chance and defence by 10% for a minute.
Dragon halberd Sweep 30% Swipe all targets in front of you twice.
Dragon hatchet Clobber 100% Attack that lowers your opponent's defence and magic by 10%.
Dragon longsword Cleave 25% A powerful attack dealing 100-250% weapon damage
Dragon mace Shatter 25% Deals a massive blow of 100-300% weapon damage with 25% increased accuracy.
Dragon spear / Zamorakian spear / Lucky Zamorakian spear Shove 25% Push your opponent back and stun them for 3s. This attack deals no damage.
Dragon 2h sword Powerstab 60% Hit enemies surrounding you.
Dragon pickaxe / Gilded dragon pickaxe / Crystal pickaxe Shock 100% Powerful attack that drains your opponent's Attack, Ranged and Magic by 5%.
Dragon scimitar Sever 55% A slash with increased accuracy that, if successful, prevents the target from using protection prayers for five seconds.
Godswords
Armadyl godsword The Judgement 50% Unleash the power of Armadyl with up to 250% weapon damage.
Bandos godsword / Lucky Bandos godsword / Golden Bandos godsword / Superior honourable kyzaj / Superior bloodied kyzaj Warstrike 100% Deals up to 200% weapon damage and drains one of your target's combat stats by 10% of the damage dealt.
Saradomin godsword Healing Blade 50% Deals up to 175% weapon damage, heals 50% of the damage you deal and restores your prayer points by 2.5% of the damage you deal.
Zamorak godsword / Lucky Zamorak godsword / Golden Zamorak godsword Ice Cleave 60% Deals up to 175% weapon damage and freezes the target for 10 seconds.
Zaros godsword Blackhole 50% Summons a black hole over the immediate vicinity for 20 seconds. While this black hole is active, melee damage is increased by 25%, and if your target enters the black hole, it takes 25-50% ability damage every 1.8 seconds.
### Ranged weapons
Dark bow Descent of Darkness 65% A powerful double attack that hits for 200% weapon damage, or 300% when using dragon or dark arrows.
Decimation Locate 50% All of your attacks hit all available targets within 3 tiles of your main target (i.e. all of your attacks become area-of-effect attacks) for 10 seconds.
Hand cannon Aimed Shot 50% Fire a shot which deals 30-200% weapon damage and has 75% greater accuracy, but takes longer to aim. There is a chance that the hand cannon will explode and be destroyed.
Magic composite bow / Magic shieldbow Powershot 35% A powerful attack with 20% increased weapon damage and accuracy.
Magic shortbow Snap-Shot 55% A quick double attack that reduces your accuracy by 30% but deals 20-200% weapon damage each shot.
Morrigan's javelin
Superior Morrigan's javelin
Phantom strike 50% Inflicts a bleed on the opposing player that will deal 100% of the damage dealt. Against non-player opponents this will deal 20% extra damage.
Morrigan's throwing axe
Superior Morrigan's throwing axe
Hamstring 50% Deal 20% extra damage[sic] and increase the enemies run energy drain rate for a minute.
Noxious longbow Mirrorback 100% Summon a mirrorback spider to lower and reflect damage for 10 seconds.
Quickbow Twin Shot 75% Fire two arrows with 25% increased accuracy for 100-150% weapon damage.
Rune throwing axe Chain-hit 10% Throw an axe which hits its target then bounces up to 3 nearby targets for up to 100% weapon damage each.
Seercull Soulshot 60% Lowers your target's magic level based on damage dealt.
Strykebow Deep Burn Shield 25% 12.5% of stored damage will be released in the form of a rapid damage over time attack that will last for 4.8 seconds.
Zanik's crossbow Defiance 50% An attack that deals more damage to those with an affinity to the gods.
Godbows
Guthix bow Balanced Shot 55% Deals 150% weapon damage to your opponent and heals the same amount of damage you would have dealt over time. When Guthix arrows are equipped, a random chance of dealing 100% in earth damage.
Saradomin bow Restorative Shot 55% Deal 100% weapon damage to your opponent and heals twice the amount of damage over time. When Saradomin arrows are equipped, a random chance of dealing 100% in water damage.
Seren godbow Crystal Rain 50% Five arrows are launched into the air in a 5x5 area centered randomly on any tile occupied by the player's target, with one arrow always in the center. Each arrow does 50-200% ability damage if it hits, but if the target is hit multiple times, the maximum damage is reduced by 25% per arrow.
Zamorak bow Twin Shot 55% Fire your bow, dealing 200% weapon damage against your opponent. When Zamorak arrows are equipped, a random chance of dealing 100% in fire damage.
### Magic weapons
| Weapon | Special attack | Adrenaline cost | Effect |
| --- | --- | --- | --- |
| Iban's staff | Iban Blast | 25% | A powerful attack dealing 100-250% spell damage. |
| Mindspike | Rune Flame | 75% | Blast your target with 25% increased accuracy for 100-150% weapon damage. |
| Noxious staff | Mirrorback | 100% | Summon a mirrorback spider to lower and reflect damage for 10 seconds. |
| Obliteration | Devour | 100% | For 10 seconds, when your opponent heals themselves, they only heal for 50% of the amount they normally would. |
| Penance trident, Penance master trident | Reap | 50% | Strike your target for 25-200% active spell damage with a 5% increase to your critical chance. |
| Staff of darkness | Power of Darkness | 100% | Absorbs and reflects 25% of all incoming damage for the next 20 seconds. |
| Staff of light | Power of Light | 100% | Reduce all melee damage taken by 50% for the next minute. |
| Staff of Sliske | From the Shadows | 50% | A shadow clone is summoned from the Shadow Realm that attacks your target five times, dealing 20-100% ability damage with each attack. |
| Zuriel's staff, Superior Zuriel's staff | Miasmic Barrage | 100% | Deal up to 200% weapon damage and slow your opponent's attack speed. |

Godstaves
https://zbmath.org/?q=an%3A1459.62054
“Local” vs. “global” parameters – breaking the Gaussian complexity barrier. (English) Zbl 1459.62054
Summary: We show that if $$F$$ is a convex class of functions that is $$L$$-sub-Gaussian, the error rate of learning problems generated by independent noise is equivalent to a fixed point determined by “local” covering estimates of the class (i.e., the covering number at a specific level), rather than by the Gaussian average, which takes into account the structure of $$F$$ at an arbitrarily small scale. To that end, we establish new sharp upper and lower estimates on the error rate in such learning problems.
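For orientation, the two complexity measures compared in the summary are standard objects (the following definitions are textbook material, not taken from the paper): given a sample $$X_1,\dots,X_N$$ and independent standard Gaussian variables $$g_1,\dots,g_N$$, the Gaussian average of $$F$$ is

$$\ell(F)=\mathbb{E}\sup_{f\in F}\frac{1}{\sqrt N}\sum_{i=1}^N g_i f(X_i),$$

while the covering number $$N(F,\varepsilon)$$ is the smallest number of balls of radius $$\varepsilon$$ (in the appropriate norm) needed to cover $$F$$. Sudakov's minoration $$\varepsilon\sqrt{\log N(F,\varepsilon)}\leq C\,\ell(F)$$, valid at every scale $$\varepsilon>0$$, indicates why a covering estimate at one fixed level is a priori weaker information than the Gaussian average, which aggregates the geometry of $$F$$ across all scales; the point of the paper is that this weaker, "local" information already determines the error rate.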
##### MSC:
- 62G08 Nonparametric regression and quantile regression
- 62C20 Minimax procedures in statistical decision theory
- 60G15 Gaussian processes
##### Keywords:
error rates; Gaussian averages; covering numbers
https://math.stackexchange.com/questions/1146036/check-my-proof-of-overspill-in-non-standard-models-of-peano-arithmetic | # Check my proof of “overspill” in non-standard models of Peano arithmetic.
Proposition
Let $\mathcal{M}$ be a nonstandard model of Peano arithmetic, $\phi(v,\bar{w})$ a formula in the language of arithmetic, and $\bar{a} \in \mathbb{M}$. Show that if $\mathcal{M} \models \phi(n,\bar{a})$ for all $n < \omega$, then there is an infinite $c \in \mathbb{M}$ s.t. $\mathcal{M} \models \phi(c,\bar{a})$.
My Proof
Let $\mathcal{N}$ be the standard model of Peano arithmetic, and $\pmb{PA}$ the theory of Peano arithmetic so that $\mathcal{N} \models \pmb{PA}$; additionally, we know by definition $\mathcal{M} \models \pmb{PA}$. Suppose for the sake of contradiction that there exists a nonstandard model $\mathcal{M}$ so that $\mathcal{M} \models \phi(n,\bar{a})$ but $\mathcal{M} \not\models \phi(c,\bar{a})$. Since $\mathcal{M} \models \pmb{PA}$, $\mathcal{M} \models \phi(0,\bar{a})$ and by the assumption that $\mathcal{M} \models \phi(n,\bar{a})$ for all $n < \omega$, we know $\mathcal{M} \models \forall x (\phi(x,\bar{a}) \Rightarrow \phi(x + 1, \bar{a}))$. Hence by the induction axiom of $\pmb{PA}$:
$$Ind(\phi) := (\phi(0) \wedge \forall x(\phi(x) \Rightarrow \phi(x+1))) \Rightarrow \forall x \phi(x),$$
we know $\mathcal{M} \models \forall x (\phi(x,\bar{a}))$, therefore $\mathcal{M} \models \phi(c,\bar{a})$. But this contradicts our hypothesis, and we conclude: $$\mathcal{M} \models \phi(c,\bar{a}).$$
My Problem
My primary concern is whether I can use the induction principle of Peano arithmetic to argue that I can "reach" this $c$ by induction. Since as I understand, this $c$ lies beyond $\omega$.
Additionally, is there another way to prove the proposition without invoking the details of Peano Arithmetic?
• I have a question: why $\mathcal{M}\models \phi(n,\overline{a})$ for all $n<\omega$ implies $\mathcal{M}\models \forall x (\phi(x,\overline{a})\to \phi(x+1,\overline{a}))$? – Hanul Jeon Feb 13 '15 at 4:25
• I presume it's because all the n's less than $\omega$ can be reached by induction, ie: $0,1,\ldots$. So for every $x$, you can always count up one and $\phi$ is still satisfied. – chibro2 Feb 13 '15 at 4:30
• I can't find a reason that $\phi(x,\overline{a})\to\phi(x+1,\overline{a})$ holds for nonstandard $x$. – Hanul Jeon Feb 13 '15 at 4:35
• So I presume the reason is by the induction principle which is an axiom of PA. This is the part where I am iffy about. Since the principle says if it's true for 0 and x, x+1, then it's true for all x. I presume the universal quantifier is over all elements of $\mathbb{M}$, which includes c. – chibro2 Feb 13 '15 at 4:47
• Ah, I realize it. If $\phi(x,a)$ does not hold for all nonstandard $x$, then $\lnot\phi(x+1,a)\to \lnot\phi(x,a)$ holds for all nonstandard $x$ so you can guarantee that $\forall x[ \phi(x,a)\to\phi(x+1,a)]$ holds on $\mathcal{M}$. – Hanul Jeon Feb 13 '15 at 4:50
The induction principle is OK in $\mathcal M$, but its hypothesis is not necessarily satisfied in your situation. You inferred from (1) $\mathcal M\models\phi(n,\bar a)$ for all $n<\omega$ that (2) $\mathcal M\models\forall x\,(\phi(x,\bar a)\implies\phi(x+1,\bar a))$. Unfortunately, (1) does not imply (2). The error arises because the variable $x$ in (2) ranges over the whole universe of $\mathcal M$, not just over numbers $n<\omega$.
Note also that, if your argument were correct, it would give that $\mathcal M\models\phi(c,\bar a)$ for all elements $c$ of $\mathcal M$. That conclusion is not in general right; the correct conclusion is that some infinite $c$ in $\mathcal M$ satisfies $\mathcal M\models\phi(c,\bar a)$. (One can do a bit better; there is some infinite $d$ in $\mathcal M$ such that $\mathcal M\models\phi(c,\bar a)$ for all $c\leq d$.)
• I think that if we assume (for an argument by contradiction), that $\mathcal{M} \vDash \phi(n,a)$ for all $n \in \mathbb{N}$ but $\mathcal{M} \nvDash \phi(c,a)$ for all $c \in \mathcal{M}\setminus\mathbb{N}$, then the argument of the OP will work. (In the assumption, I have added the quantification for the $n$ and the $c$ which were missing in the original question.) To put it otherwise: If we assume $\mathcal{M}\vDash \phi(n,a)$ only for standard numbers $n$, then (1) would imply (2). Or am I wrong? – russoo Feb 13 '15 at 16:10
• @russoo Yes. From the assumptions that $\phi(x,\bar a)$ holds in $\mathcal M$ for all standard $x$ and that it holds for no nonstandard $x$, then we can infer the assertion (2) in my answer. The OP may or may not have had the second part of the assumption in mind earlier in the argument (as you noticed, the quantification of $c$ is missing), but by the time he wanted (2) he was claiming that it followed from just (1). – Andreas Blass Feb 13 '15 at 16:19
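For reference, one standard proof of the proposition itself runs as follows (a sketch, using the notation of the question). Suppose toward a contradiction that $\mathcal{M} \models \phi(n,\bar{a})$ for every standard $n$ while $\mathcal{M} \models \neg\phi(c,\bar{a})$ for every infinite $c$, and set $$\psi(x) := \forall y\,( y \leq x \rightarrow \phi(y,\bar{a})).$$ Every standard $n$ satisfies $\psi$ (all $y \leq n$ are standard), and no infinite $c$ does (take $y = c$). Hence $\mathcal{M} \models \psi(0)$, and $\mathcal{M} \models \forall x\,(\psi(x) \rightarrow \psi(x+1))$ holds as well: any element satisfying $\psi$ is standard, and then so is its successor. The induction axiom for $\psi$ now yields $\mathcal{M} \models \forall x\, \psi(x)$, contradicting the failure of $\psi$ at infinite elements. The key difference from the argument in the question is that the inductive step is verified for $\psi$ at every element of $\mathbb{M}$, standard or not.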
This is my current proof. It incorporates suggestions from comments above and elsewhere.
First recall that $\psi(x) \rightarrow \psi(x+1)$ is shorthand for $\neg \psi(x) \vee \psi(x+1)$. Now suppose for some infinite $c$ we have $\mathcal{M} \models \neg\psi(c,\bar{a})$; then the statement:
$$\mathcal{M} \models \neg\psi(c,\bar{a}) \vee \mathcal{M} \models \psi(c + 1, \bar{a}),$$
is true since the first statement of the disjunction is true by definition. But by definition:
$$\mathcal{M} \models \neg\psi(c,\bar{a}) \vee \mathcal{M} \models \psi(c + 1, \bar{a}) \Rightarrow \mathcal{M} \models \psi(c,\bar{a}) \rightarrow \psi(c + 1, \bar{a}),$$
hence the implication is true for all infinite $c$. On the other hand, we know $\mathcal{M} \models \psi(n,\bar{a})$ for all $n < \omega$ so $\mathcal{M} \models \psi(n+1,\bar{a})$. Hence we have:
$$\mathcal{M} \models \neg\psi(n,\bar{a}) \vee \mathcal{M} \models \psi(n + 1, \bar{a}) \Rightarrow \mathcal{M} \models \psi(n,\bar{a}) \rightarrow \psi(n + 1, \bar{a}),$$
where the term on the left hand side of $\Rightarrow$ is true because the term to the right of the disjunction is true by assumption. But we know every element of $\mathbb{M}$ is either of the form $n < \omega$ or some infinite $c$, so we can say:
$$\forall x (\psi(x,\bar{a}) \rightarrow \psi(x+1,\bar{a})) \rightarrow \forall x \psi(x,\bar{a}).$$ Hence $\mathcal{M} \models \psi(c,\bar{a})$ for some infinite $c \in \mathbb{M}$.
https://planetmath.org/apolloniuscircle | # Apollonius’ circle
Apollonius’ circle. The locus of a point moving so that the ratio of its distances from two fixed points is fixed is a circle.
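A coordinate verification of this claim (a standard computation; the placement of the points is chosen only for convenience): put the fixed points at $A=(0,0)$ and $B=(d,0)$ and let the fixed ratio be $k>0$, $k\neq 1$. Then $|PA|=k\,|PB|$ for $P=(x,y)$ means

$$x^{2}+y^{2}=k^{2}\bigl((x-d)^{2}+y^{2}\bigr),$$

and completing the square gives

$$\Bigl(x-\frac{k^{2}d}{k^{2}-1}\Bigr)^{2}+y^{2}=\Bigl(\frac{kd}{\lvert k^{2}-1\rvert}\Bigr)^{2},$$

a circle with centre $\bigl(k^{2}d/(k^{2}-1),\,0\bigr)$ and radius $kd/\lvert k^{2}-1\rvert$. For $k=1$ the locus degenerates to the perpendicular bisector of $AB$.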
If two circles $C_{1}$ and $C_{2}$ are fixed with radii $r_{1}$ and $r_{2}$, then the Apollonius circle of the two centers with ratio $r_{1}/r_{2}$ is the circle whose diameter is the segment joining the two homothety centers of the circles.
https://pypi.org/project/generate_password/ | ## Project description
This is Python version 2.6.2
============================
Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009
Python Software Foundation.
Copyright (c) 1995-2001 Corporation for National Research Initiatives.
Copyright (c) 1991-1995 Stichting Mathematisch Centrum.
License information
-------------------
See the file "LICENSE" for information on the history of this
software, terms & conditions for usage, and a DISCLAIMER OF ALL
WARRANTIES.
This Python distribution contains no GNU General Public Licensed
(GPLed) code so it may be used in proprietary projects just like prior
Python distributions. There are interfaces to some GNU code but these
are entirely optional.
All trademarks referenced herein are property of their respective
holders.
What's new in this release?
---------------------------
See the file "Misc/NEWS".
If you don't read instructions
------------------------------
Congratulations on getting this far. :-)
To start building right away (on UNIX): type "./configure" in the
current directory and when it finishes, type "make". This creates an
executable "./python"; to install in /usr/local, first do "su root"
and then "make install".
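Spelled out as a shell session (the same steps as above; /usr/local is
the default install prefix, and sudo may be used instead of su where
available):

    $ ./configure
    $ make
    $ su root
    # make install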
The section `Build instructions' below is still recommended reading.
What is Python anyway?
----------------------
Python is an interpreted, interactive object-oriented programming
language suitable (amongst other uses) for distributed application
development, scripting, numeric computing and system testing. Python
is often compared to Tcl, Perl, Java, JavaScript, Visual Basic or
Scheme. To find out more about what Python can do for you, point your
browser to http://www.python.org/.
How do I learn Python?
----------------------
The official tutorial is still a good place to start; see
http://docs.python.org/ for online and downloadable versions, as well
as a list of other introductions, and reference documentation.
There's a quickly growing set of books on Python. See
http://wiki.python.org/moin/PythonBooks for a list.
Documentation
-------------
All documentation is provided online in a variety of formats. In
order of importance for new users: Tutorial, Library Reference,
Language Reference, Extending & Embedding, and the Python/C API. The
Library Reference is especially of immense value since much of
Python's power is described there, including the built-in data types
and functions!
All documentation is also available online at the Python web site
(http://docs.python.org/, see below). It is available online for occasional
reference, or can be downloaded in many formats for faster access. The
documentation is downloadable in HTML, PDF and reStructuredText (2.6+)
formats; the LaTeX and reStructuredText versions are primarily for
documentation authors, translators, and people with special formatting
requirements.
Web sites
---------
New Python releases and related technologies are published at
http://www.python.org/. Come visit us!
There's also a Python community web site at
http://starship.python.net/.
Newsgroups and Mailing Lists
----------------------------
Read comp.lang.python, a high-volume discussion newsgroup about
Python, or comp.lang.python.announce, a low-volume moderated newsgroup
for Python-related announcements. These are also accessible as
for Python-related announcements. These are also accessible as
mailing lists: see http://www.python.org/community/lists.html for an
overview of these and many other Python-related mailing lists.
Archives are accessible via the Google Groups Usenet archive; see
http://groups.google.com/. The mailing lists are also archived, see
http://www.python.org/community/lists.html for details.
Bug reports
-----------
To report or search for bugs, please use the Python Bug
Tracker at http://bugs.python.org.
Patches and contributions
-------------------------
To submit a patch or other contribution, please use the Python Patch
Manager at http://bugs.python.org. Guidelines
for patch submission may be found at http://www.python.org/dev/patches/.
If you have a proposal to change Python, you may want to send an email to the
comp.lang.python or python-ideas mailing lists for initial feedback. A Python
Enhancement Proposal (PEP) may be submitted if your idea gains ground. All
current PEPs, as well as guidelines for submitting a new PEP, are listed at
http://www.python.org/dev/peps/.
Questions
---------
For help, if you can't find it in the manuals or on the web site, it's
best to post to the comp.lang.python or the Python mailing list (see
above). If you specifically don't want to involve the newsgroup or
mailing list, send questions to [email protected] (a group of volunteers
who answer questions as they can). The newsgroup is the most
efficient way to ask public questions.
Build instructions
==================
Before you can build Python, you must first configure it.
Fortunately, the configuration and build process has been automated
for Unix and Linux installations, so all you usually have to do is
type a few commands and sit back. There are some platforms where
things are not quite as smooth; see the platform specific notes below.
If you want to build for multiple platforms sharing the same source
tree, see the section on VPATH below.
Start by running the script "./configure", which determines your
system configuration and creates the Makefile. (It takes a minute or
two -- please be patient!) You may want to pass options to the
configure script -- see the section below on configuration options and
variables. When it's done, you are ready to run make.
To build Python, you normally type "make" in the toplevel directory.
If you have changed the configuration, the Makefile may have to be
rebuilt. In this case you may have to run make again to correctly
build your desired target. The interpreter executable is built in the
top level directory.
Once you have built a Python interpreter, see the subsections below on
testing and installation. If you run into trouble, see the next
section.
Previous versions of Python used a manual configuration process that
involved editing the file Modules/Setup. While this file still exists
and manual configuration is still supported, it is rarely needed any
more: almost all modules are automatically built as appropriate under
guidance of the setup.py script, which is run by Make after the
interpreter has been built.
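For reference, each line in Modules/Setup names a module, its source
file(s), and any extra compiler or linker flags. The zlib entry
shipped in Setup.dist, for instance, looks roughly like this (check
your own Setup.dist for the exact flags used on your platform):

    zlib zlibmodule.c -I$(prefix)/include -L$(exec_prefix)/lib -lz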
Troubleshooting
---------------
If you run into other trouble, see the FAQ
(http://www.python.org/doc/faq) for hints on what can go wrong, and
how to fix it.
If you rerun the configure script with different options, remove all
object files by running "make clean" before rebuilding. Believe it or
not, "make clean" sometimes helps to clean up other inexplicable
problems as well. Try it before sending in a bug report!
If the configure script fails or doesn't seem to find things that
should be there, inspect the config.log file.
If you get a warning for every file about the -Olimit option being no
longer supported, you can ignore it. There's no foolproof way to know
whether this option is needed; all we can do is test whether it is
accepted without error. On some systems, e.g. older SGI compilers, it
is essential for performance (specifically when compiling ceval.c,
which has more basic blocks than the default limit of 1000). If the
warning bothers you, edit the Makefile to remove "-Olimit 1500" from
the OPT variable.
If you get failures in test_long, or sys.maxint gets set to -1, you
are probably experiencing compiler bugs, usually related to
optimization. This is a common problem with some versions of gcc, and
some vendor-supplied compilers, which can sometimes be worked around
by turning off optimization. Consider switching to stable versions
(gcc 2.95.2, gcc 3.x, or contact your vendor.)
From Python 2.0 onward, all Python C code is ANSI C. Compiling using
old K&R-C-only compilers is no longer possible. ANSI C compilers are
available for all modern systems, either in the form of updated
compilers from the vendor, or one of the free compilers (gcc).
If "make install" fails mysteriously during the "compiling the library"
step, make sure that you don't have any of the PYTHONPATH or PYTHONHOME
environment variables set, as they may interfere with the newly built
executable which is compiling the library.
Unsupported systems
-------------------
A number of features are not supported in Python 2.5 anymore. Some
support code is still present, but will be removed in Python 2.6.
If you still need to use current Python versions on these systems,
please send a message to [email protected] indicating that you
volunteer to support this system. For a more detailed discussion
regarding no-longer-supported and resupporting platforms, as well
as a list of platforms that became or will be unsupported, see PEP 11.
More specifically, the following systems are not supported any
longer:
- SunOS 4
- DYNIX
- dgux
- Minix
- NeXT
- Irix 4 and --with-sgi-dl
- Linux 1
- Systems using --with-dl-dld
- Systems using --without-universal-newlines
- MacOS 9
The following systems are still supported in Python 2.5, but
support will be dropped in 2.6:
- Systems using --with-wctype-functions
- Win9x, WinME
Warning on install in Windows 98 and Windows Me
-----------------------------------------------
Following Microsoft's closing of Extended Support for
Windows 98/ME (July 11, 2006), Python 2.6 will stop
supporting these platforms. Python development and
maintainability becomes easier (and more reliable) when
platform specific code targeting OSes with few users
and no dedicated expert developers is taken out. The
vendor also warns that the OS versions listed above "can expose
customers to security risks" and recommends that they be upgraded.
Platform specific notes
-----------------------
(Some of these may no longer apply. If you find you can build Python
on these platforms without the special directions mentioned here,
submit a documentation bug report to SourceForge (see Bug Reports
above) so we can remove them!)
Unix platforms: If your vendor still ships (and you still use) Berkeley DB
1.85 you will need to edit Modules/Setup to build the bsddb185
module and add a line to sitecustomize.py which makes it the
default. In Modules/Setup a line like
bsddb185 bsddbmodule.c
should work. (You may need to add -I, -L or -l flags to direct the
build process to your Berkeley DB installation's headers and library.)
XXX I think this next bit is out of date:
64-bit platforms: The modules audioop and imageop don't work.
The setup.py script disables them on 64-bit installations.
Don't try to enable them in the Modules/Setup file. They
contain code that is quite wordsize sensitive. (If you have a
fix, let us know!)
Solaris: When using Sun's C compiler with threads, at least on Solaris
2.5.1, you need to add the "-mt" compiler option (the simplest
way is probably to specify the compiler with this option as
the "CC" environment variable when running the configure
script).
When using GCC on Solaris, beware of binutils 2.13 or GCC
versions built using it. This mistakenly enables the
-zcombreloc option which creates broken shared libraries on
Solaris. binutils 2.12 works, and the binutils maintainers
are aware of the problem. Binutils 2.13.1 only partially
fixed things. It appears that 2.13.2 solves the problem
completely. This problem is known to occur with Solaris 2.7
and 2.8, but may also affect earlier and later versions of the
OS.
If you get errors about missing dependent shared libraries, such as
ld.so.1: ./python: fatal: libstdc++.so.5: open failed:
No such file or directory
you need to first make sure that the library is available on
your system, and then tell the runtime loader where
to find it. You can choose any of the following strategies:
1. When compiling Python, set LD_RUN_PATH to the directories
containing missing libraries.
2. When running Python, set LD_LIBRARY_PATH to these directories.
3. Use crle(8) to extend the search path of the loader.
4. Modify the installed GCC specs file, adding -R options into the
*link: section.
The complex object fails to compile on Solaris 10 with gcc 3.4 (at
least up to 3.4.3). To work around it, define Py_HUGE_VAL as
HUGE_VAL(), e.g.:
    make CPPFLAGS='-D"Py_HUGE_VAL=HUGE_VAL()" -I. -I$(srcdir)/Include'
    ./python setup.py CPPFLAGS='-D"Py_HUGE_VAL=HUGE_VAL()"'

Linux: A problem with threads and fork() was tracked down to a bug in
    the pthreads code in glibc version 2.0.5; glibc version 2.0.7
    solves the problem. This causes the popen2 test to fail; problem
    and solution reported by Pablo Bleyer.

Red Hat Linux: Red Hat 9 built Python2.2 in UCS-4 mode and hacked Tcl
    to support it. To compile Python2.3 with Tkinter, you will need to
    pass --enable-unicode=ucs4 flag to ./configure.

    There's an executable /usr/bin/python which is Python 1.5.2 on most
    older Red Hat installations; several key Red Hat tools require this
    version. Python 2.1.x may be installed as /usr/bin/python2. The
    Makefile installs Python as /usr/local/bin/python, which may or may
    not take precedence over /usr/bin/python, depending on how you have
    set up $PATH.
FreeBSD 3.x and probably platforms with NCurses that use libmytinfo or
similar: When using cursesmodule, the linking is not done in
the correct order with the defaults. Remove "-ltermcap" from
the readline entry in Setup, and use as curses entry: "curses
cursesmodule.c -lmytinfo -lncurses -ltermcap" - "mytinfo" (so
called on FreeBSD) should be the name of the auxiliary library
automatically, but not necessarily in the correct order.
BSDI: BSDI versions before 4.1 have known problems with threads,
which can cause strange errors in a number of modules (for
instance, the 'test_signal' test script will hang forever.)
BSDI 4.1 solves this problem.
DEC Unix: Run configure with --with-dec-threads, or with
    --with-threads=no if no threads are desired (threads are on by
    default). When using GCC, it is possible to get an internal
compiler error if optimization is used. This was reported for
GCC 2.7.2.3 on selectmodule.c. Manually compile the affected
file without optimization to solve the problem.
DEC Ultrix: compile with GCC to avoid bugs in the native compiler,
and pass SHELL=/bin/sh5 to Make when installing.
AIX: A complete overhaul of the shared library support is now in
place. See Misc/AIX-NOTES for some notes on how it's done.
(The optimizer bug reported at this place in previous releases
    has been worked around by a minimal code change.) If you get
    errors during compilation or during threads
    testing, try setting CC to a thread-safe (reentrant) compiler,
    like "cc_r". For full C++ module support, set CC="xlC_r" (or
    CC="xlC" without threads).
AIX 5.3: To build a 64-bit version with IBM's compiler, I used the
following:
export PATH=/usr/bin:/usr/vacpp/bin
./configure --with-gcc="xlc_r -q64" --with-cxx="xlC_r -q64" \
--disable-ipv6 AR="ar -X64"
make
HP-UX: When using threading, you may have to add -D_REENTRANT to the
OPT variable in the top-level Makefile; reported by Pat Knight,
this seems to make a difference (at least for HP-UX 10.20)
even though pyconfig.h defines it. This seems unnecessary when
using HP/UX 11 and later - threading seems to work "out of the
box".
HP-UX ia64: When building on the ia64 (Itanium) platform using HP's
compiler, some experience has shown that the compiler's
optimiser produces a completely broken version of python
(see http://www.python.org/sf/814976). To work around this,
edit the Makefile and remove -O from the OPT line.
To build a 64-bit executable on an Itanium 2 system using HP's
compiler, use these environment variables:
CC=cc
CXX=aCC
BASECFLAGS="+DD64"
LDFLAGS="+DD64 -lxnet"
and call configure as:
./configure --without-gcc
then *unset* the environment variables again before running
make. (At least one of these flags causes the build to fail
if it remains set.) You still have to edit the Makefile and
remove -O from the OPT line.
HP PA-RISC 2.0: A recent bug report (http://www.python.org/sf/546117)
suggests that the C compiler in this 64-bit system has bugs
in the optimizer that break Python. Compiling without
optimization solves the problems.
SCO: The following apply to SCO 3 only; Python builds out of the box
on SCO 5 (or so we've heard).
1) Everything works much better if you add -U__STDC__ to the
defs. This is because all the SCO header files are broken.
Anything that isn't mentioned in the C standard is
conditionally excluded when __STDC__ is defined.
2) Due to the U.S. export restrictions, SCO broke the crypt
stuff out into a separate library, libcrypt_i.a so the LIBS
needed be set to:
LIBS=' -lsocket -lcrypt_i'
UnixWare: There are known bugs in the math library of the system, as well as
problems in the handling of threads (calling fork in one
thread may interrupt system calls in others). Therefore, test_math and
tests involving threads will fail until those problems are fixed.
QNX: Chris Herborth ([email protected]) writes:
configure works best if you use GNU bash; a port is available on
ftp.qnx.com in /usr/free. I used the following process to build,
test and install Python 1.5.x under QNX:
1) CONFIG_SHELL=/usr/local/bin/bash CC=cc RANLIB=: \
./configure --verbose --without-gcc --with-libm=""
2) edit Modules/Setup to activate everything that makes sense for
your system... tested here at QNX with the following modules:
array, audioop, binascii, cPickle, cStringIO, cmath,
crypt, curses, errno, fcntl, gdbm, grp, imageop,
_locale, math, md5, new, operator, parser, pcre,
select, signal, socket, soundex, strop, struct,
syslog, termios, time, timing, zlib, audioop, imageop
3) make SHELL=/usr/local/bin/bash
or, if you feel the need for speed:
make SHELL=/usr/local/bin/bash OPT="-5 -Oil+nrt"
4) make SHELL=/usr/local/bin/bash test
Using GNU readline 2.2 seems to behave strangely, but I
think that's a problem with my readline 2.2 port. :-\
5) make SHELL=/usr/local/bin/bash install
If you get SIGSEGVs while running Python (I haven't yet, but
I've only run small programs and the test cases), you're
probably running out of stack; the default 32k could be a
little tight. To increase the stack size, edit the Makefile
to read: LDFLAGS = -N 48k
BeOS: See Misc/BeOS-NOTES for notes about compiling/installing
Python on BeOS R3 or later. Note that only the PowerPC
platform is supported for R3; both PowerPC and x86 are
supported for R4.
Cray T3E: Python can be built satisfactorily on a Cray T3E but based on
my experience with the NIWA T3E (2002-05-22, version 2.2.1)
there are a few bugs and gotchas. For more information see a
thread on comp.lang.python in May 2002 entitled "Building
Python on Cray T3E".
1) Use Cray's cc and not gcc. The latter was reported not to
work by Konrad Hinsen. It may work now, but it may not.
2) To set sys.platform to something sensible, pass the
following environment variable to the configure script:
MACHDEP=unicosmk
3) Run configure with option "--enable-unicode=ucs4".
4) The Cray T3E does not support dynamic linking, so extension
modules have to be built by adding (or uncommenting) lines
in Modules/Setup. The minimum set of modules is
posix, new, _sre, unicodedata
On NIWA's vanilla T3E system the following have also been
included successfully:
_codecs, _locale, _socket, _symtable, _testcapi, _weakref
array, binascii, cmath, cPickle, crypt, cStringIO, dbm
errno, fcntl, grp, math, md5, operator, parser, pcre, pwd
regex, rotor, select, struct, strop, syslog, termios
5) Once the python executable and library have been built, make
will execute setup.py, which will attempt to build remaining
extensions and link them dynamically. Each of these attempts
will fail but should not halt the make process. This is
normal.
6) Running "make test" uses a lot of resources and causes
problems on our system. You might want to try running tests
singly or in small groups.
SGI: SGI's standard "make" utility (/bin/make or /usr/bin/make)
does not check whether a command actually changed the file it
is supposed to build. This means that whenever you say "make"
it will redo the link step. The remedy is to use SGI's much
smarter "smake" utility (/usr/sbin/smake), or GNU make. If
you set the first line of the Makefile to #!/usr/sbin/smake
smake will be invoked by make (likewise for GNU make).
WARNING: There are bugs in the optimizer of some versions of
SGI's compilers that can cause bus errors or other strange
behavior, especially on numerical operations. To avoid this,
try building with "make OPT=".
OS/2: If you are running Warp3 or Warp4 and have IBM's VisualAge C/C++
compiler installed, just change into the pc\os2vacpp directory
and type NMAKE. Threading and sockets are supported by default
in the resulting binaries of PYTHON15.DLL and PYTHON.EXE.
Monterey (64-bit AIX): The current Monterey C compiler (Visual Age)
uses the OBJECT_MODE={32|64} environment variable to set the
compilation mode to either 32-bit or 64-bit (32-bit mode is
the default). Presumably you want 64-bit compilation mode for
this 64-bit OS. As a result you must first set OBJECT_MODE=64
in your environment before configuring (./configure) or
building (make) Python on Monterey.
Reliant UNIX: The thread support does not compile on Reliant UNIX, and
there is a (minor) problem in the configure script for that
platform as well. This should be resolved in time for a
future release.
MacOSX: The tests will crash on both 10.1 and 10.2 with SEGV in
test_re and test_sre due to the small default stack size. If
you set the stack size to 2048 before doing a "make test" the
failure can be avoided. If you're using the tcsh or csh shells,
use "limit stacksize 2048" and for the bash shell (the default
as of OSX 10.3), use "ulimit -s 2048".
On naked Darwin you may want to add the configure option
"--disable-toolbox-glue" to disable the glue code for the Carbon
interface modules. The modules themselves are currently only built
if you add the --enable-framework option, see below.
On a clean OSX /usr/local does not exist. Do a
"sudo mkdir -m 775 /usr/local"
before you do a make install. It is probably not a good idea to
do "sudo make install" which installs everything as superuser,
as this may later cause problems when installing distutils-based
additions.
Some people have reported problems building Python after using "fink"
to install additional unix software. Disabling fink (remove all
references to it from your shell startup files) should fix the problem.
You may want to try the configure option "--enable-framework"
which installs Python as a framework. The location can be set
as argument to the --enable-framework option (default
/Library/Frameworks). A framework install is probably needed if you
want to use any Aqua-based GUI toolkit (whether Tkinter, wxPython,
Carbon, Cocoa or anything else).
You may also want to try the configure option "--enable-universalsdk"
which builds Python as a universal binary with support for the
i386 and PPC architectures. This requires Xcode 2.1 or later to build.
See Mac/README for more information on universal builds.
Cygwin: With recent (relative to the time of writing, 2001-12-19)
Cygwin installations, there are problems with the interaction
of dynamic linking and fork(). This manifests itself in build
failures during the execution of setup.py.
There are two workarounds that both enable Python (albeit
without threading support) to build and pass all tests on
NT/2000 (and most likely XP as well, though reports of testing
on XP would be appreciated).
The workarounds:
(a) the band-aid fix is to link the _socket module statically
rather than dynamically (which is the default).
To do this, run "./configure --with-threads=no" including any
other options you need (--prefix, etc.). Then in Modules/Setup
uncomment the lines:
#SSL=/usr/local/ssl
#_socket socketmodule.c \
# -DUSE_SSL -I$(SSL)/include -I$(SSL)/include/openssl \
# -L$(SSL)/lib -lssl -lcrypto

and remove "local/" from the SSL variable. Finally, just run "make"!

(b) The "proper" fix is to rebase the Cygwin DLLs to prevent base
address conflicts. Details on how to do this can be found in the
following mail:

    http://sources.redhat.com/ml/cygwin/2001-12/msg00894.html

It is hoped that a version of this solution will be incorporated into
the Cygwin distribution fairly soon.

Two additional problems:

(1) Threading support should still be disabled due to a known bug in
Cygwin pthreads that causes test_threadedtempfile to hang.

(2) The _curses module does not build. This is a known Cygwin ncurses
problem that should be resolved the next time that this package is
released.

On older versions of Cygwin, test_poll may hang and test_strftime may
fail.

The situation on 9X/Me is not accurately known at present. Some time
ago, there were reports that the following regression tests failed:

    test_pwd
    test_select (hang)
    test_socket

Due to the test_select hang on 9X/Me, one should run the regression
test using the following:

    make TESTOPTS='-l -x test_select' test

News regarding these platforms with more recent Cygwin versions would
be appreciated!

Windows: When executing Python scripts on the command line using file
    type associations (i.e. starting "script.py" instead of "python
    script.py"), redirects may not work unless you set a specific
    registry key. See the Knowledge Base article
    <http://support.microsoft.com/kb/321788>.


Configuring the bsddb and dbm modules
-------------------------------------

Beginning with Python version 2.3, the PyBsddb package
<http://pybsddb.sf.net/> was adopted into Python as the bsddb package,
exposing a set of package-level functions which provide
backwards-compatible behavior. Only versions 3.3 through 4.4 of
Sleepycat's libraries provide the necessary API, so older versions
aren't supported through this interface. The old bsddb module has been
retained as bsddb185, though it is not built by default. Users wishing
to use it will have to tweak Modules/Setup to build it. The dbm module
will still be built against the Sleepycat libraries if other preferred
alternatives (ndbm, gdbm) are not found.


Building the sqlite3 module
---------------------------

To build the sqlite3 module, you'll need the sqlite3 or libsqlite3
packages installed, including the header files. Many modern operating
systems distribute the headers in a separate package to the library -
often it will be the same name as the main package, but with a -dev or
-devel suffix. The version of pysqlite2 that's included in Python needs
sqlite3 3.0.8 or later. setup.py attempts to check that it can find a
correct version.


Configuring threads
-------------------

As of Python 2.0, threads are enabled by default. If you wish to
compile without threads, or if your thread support is broken, pass the
--with-threads=no switch to configure. Unfortunately, on some
platforms, additional compiler and/or linker options are required for
threads to work properly. Below is a table of those options, collected
by Bill Janssen. We would love to automate this process more, but the
information below is not enough to write a patch for the configure.in
file, so manual intervention is required. If you patch the
configure.in file and are confident that the patch works, please send
in the patch. (Don't bother patching the configure script itself -- it
is regenerated each time the configure.in file changes.)

Compiler switches for threads
.............................
The definition of _REENTRANT should be configured automatically. If
that does not work on your system, or if _REENTRANT is defined
incorrectly, please report that as a bug.

    OS/Compiler/threads                        Switches for use with threads
    (POSIX is draft 10, DCE is draft 4)        compile & link

    SunOS 5.{1-5}/{gcc,SunPro cc}/solaris      -mt
    SunOS 5.5/{gcc,SunPro cc}/POSIX            (nothing)
    DEC OSF/1 3.x/cc/DCE                       -threads   ([email protected])
    Digital UNIX 4.x/cc/DCE                    -threads   ([email protected])
    Digital UNIX 4.x/cc/POSIX                  -pthread   ([email protected])
    AIX 4.1.4/cc_r/d7                          (nothing)  ([email protected])
    AIX 4.1.4/cc_r4/DCE                        (nothing)  ([email protected])
    IRIX 6.2/cc/POSIX                          (nothing)  ([email protected])

Linker (ld) libraries and flags for threads
...........................................

    OS/threads                  Libraries/switches for use with threads

    SunOS 5.{1-5}/solaris       -lthread
    SunOS 5.5/POSIX             -lpthread
    DEC OSF/1 3.x/DCE           -lpthreads -lmach -lc_r -lc   ([email protected])
    Digital UNIX 4.x/DCE        -lpthreads -lpthread -lmach -lexc -lc   ([email protected])
    Digital UNIX 4.x/POSIX      -lpthread -lmach -lexc -lc    ([email protected])
    AIX 4.1.4/{draft7,DCE}      (nothing)                     ([email protected])
    IRIX 6.2/POSIX              -lpthread                     ([email protected])


Building a shared libpython
---------------------------

Starting with Python 2.3, the majority of the interpreter can be built
into a shared library, which can then be used by the interpreter
executable, and by applications embedding Python. To enable this
feature, configure with --enable-shared.

If you enable this feature, the same object files will be used to
create a static library. In particular, the static library will
contain object files using position-independent code (PIC) on
platforms where PIC flags are needed for the shared library.


Configuring additional built-in modules
---------------------------------------

Starting with Python 2.1, the setup.py script at the top of the source
distribution attempts to detect which modules can be built and
automatically compiles them. Autodetection doesn't always work, so you
can still customize the configuration by editing the Modules/Setup
file; but this should be considered a last resort. The rest of this
section only applies if you decide to edit the Modules/Setup file. You
also need this to enable static linking of certain modules (which is
needed to enable profiling on some systems).

This file is initially copied from Setup.dist by the configure script;
if it does not exist yet, create it by copying Modules/Setup.dist
yourself (configure will never overwrite it). Never edit Setup.dist --
always edit Setup or Setup.local (see below). Read the comments in the
file for information on what kind of edits are allowed. When you have
edited Setup in the Modules directory, the interpreter will
automatically be rebuilt the next time you run make (in the toplevel
directory).

Many useful modules can be built on any Unix system, but some optional
modules can't be reliably autodetected. Often the quickest way to
determine whether a particular module works or not is to see if it
will build: enable it in Setup, then if you get compilation or link
errors, disable it -- you're either missing support or need to adjust
the compilation and linking parameters for that module.

On SGI IRIX, there are modules that interface to many SGI specific
system libraries, e.g. the GL library and the audio hardware. These
modules will not be built by the setup.py script.

In addition to the file Setup, you can also edit the file Setup.local
(the makesetup script processes both).
You may find it more convenient to edit Setup.local and leave Setup
alone. Then, when installing a new Python version, you can copy your
old Setup.local file.


Setting the optimization/debugging options
------------------------------------------

If you want or need to change the optimization/debugging options for
the C compiler, assign to the OPT variable on the toplevel make
command; e.g. "make OPT=-g" will build a debugging version of Python
on most platforms. The default is OPT=-O; a value for OPT in the
environment when the configure script is run overrides this default
(likewise for CC; and the initial value for LIBS is used as the base
set of libraries to link with).

When compiling with GCC, the default value of OPT will also include
the -Wall and -Wstrict-prototypes options.

Additional debugging code to help debug memory management problems can
be enabled by using the --with-pydebug option to the configure script.

For flags that change binary compatibility, use the EXTRA_CFLAGS
variable.


Profiling
---------

If you want C profiling turned on, the easiest way is to run configure
with the CC environment variable to the necessary compiler invocation.
For example, on Linux, this works for profiling using gprof(1):

    CC="gcc -pg" ./configure

Note that on Linux, gprof apparently does not work for shared
libraries. The Makefile/Setup mechanism can be used to compile and
link most extension modules statically.


Coverage checking
-----------------

For C coverage checking using gcov, run "make coverage". This will
build a Python binary with profiling activated, and a ".gcno" and
".gcda" file for every source file compiled with that option. With
the built binary, now run the code whose coverage you want to check.
Then, you can see coverage statistics for each individual source file
by running gcov, e.g.

    gcov -o Modules zlibmodule

This will create a "zlibmodule.c.gcov" file in the current directory
containing coverage info for that source file. This works only for
source files statically compiled into the executable; use the
Makefile/Setup mechanism to compile and link extension modules you
want to coverage-check statically.


Testing
-------

To test the interpreter, type "make test" in the top-level directory.
This runs the test set twice (once with no compiled files, once with
the compiled files left by the previous test run). The test set
produces some output. You can generally ignore the messages about
skipped tests due to optional features which can't be imported. If a
message is printed about a failed test or a traceback or core dump is
produced, something is wrong.

On some Linux systems (those that are not yet using glibc 6),
test_strftime fails due to a non-standard implementation of strftime()
in the C library. Please ignore this, or upgrade to glibc version 6.

IMPORTANT: If the tests fail and you decide to mail a bug report,
*don't* include the output of "make test". It is useless. Run the
failing test manually, as follows:

    ./python ./Lib/test/test_whatever.py

(substituting the top of the source tree for '.' if you built in a
different directory). This runs the test in verbose mode.


Installing
----------

To install the Python binary, library modules, shared library modules
(see below), include files, configuration files, and the manual page,
just type

    make install

This will install all platform-independent files in subdirectories of
the directory given with the --prefix option to configure or to the
`prefix' Make variable (default /usr/local).
All binary and other platform-specific files will be installed in
subdirectories of the directory given by --exec-prefix or the
`exec_prefix' Make variable (defaults to the --prefix directory).

If DESTDIR is set, it will be taken as the root directory of the
installation, and files will be installed into $(DESTDIR)$(prefix),
$(DESTDIR)$(exec_prefix), etc.

All subdirectories created will have Python's version number in their
name, e.g. the library modules are installed in
"/usr/local/lib/python<version>/" by default, where <version> is the
<major>.<minor> release number (e.g. "2.1"). The Python binary is
installed as "python<version>" and a hard link named "python" is
created. The only file not installed with a version number in its
name is the manual page, installed as "/usr/local/man/man1/python.1"
by default.

If you want to install multiple versions of Python see the section
below entitled "Installing multiple versions".

The only thing you may have to install manually is the Python mode for
Emacs found in Misc/python-mode.el. (But then again, more recent
versions of Emacs may already have it.) Follow the instructions that
came with Emacs for installation of site-specific files.

On Mac OS X, if you have configured Python with --enable-framework,
you should use "make frameworkinstall" to do the installation. Note
that this installs the Python executable in a place that is not
normally on your PATH, you may want to set up a symlink in
/usr/local/bin.


Installing multiple versions
----------------------------

On Unix and Mac systems if you intend to install multiple versions of
Python using the same installation prefix (--prefix argument to the
configure script) you must take care that your primary python
executable is not overwritten by the installation of a different
version. All files and directories installed using "make altinstall"
contain the major and minor version and can thus live side-by-side.
"make install" also creates ${prefix}/bin/python which refers to
${prefix}/bin/pythonX.Y. If you intend to install multiple versions
using the same prefix you must decide which version (if any) is your
"primary" version. Install that version using "make install". Install
all other versions using "make altinstall".

For example, if you want to install Python 2.5, 2.6 and 3.0 with 2.6
being the primary version, you would execute "make install" in your
2.6 build directory and "make altinstall" in the others.


Configuration options and variables
-----------------------------------

Some special cases are handled by passing options to the configure
script.

WARNING: if you rerun the configure script with different options, you
must run "make clean" before rebuilding. Exceptions to this rule:
after changing --prefix or --exec-prefix, all you need to do is remove
Modules/getpath.o.

--with(out)-gcc: The configure script uses gcc (the GNU C compiler) if
    it finds it. If you don't want this, or if this compiler is
    installed but broken on your platform, pass the option
    --without-gcc. You can also pass "CC=cc" (or whatever the name of
    the proper C compiler is) in the environment, but the advantage of
    using --without-gcc is that this option is remembered by the
    config.status script for its --recheck option.

--prefix, --exec-prefix: If you want to install the binaries and the
    Python library somewhere else than in /usr/local/{bin,lib}, you
    can pass the option --prefix=DIRECTORY; the interpreter binary
    will be installed as DIRECTORY/bin/python and the library files as
    DIRECTORY/lib/python/*.
    If you pass --exec-prefix=DIRECTORY (as well) this overrides the
    installation prefix for architecture-dependent files (like the
    interpreter binary). Note that --prefix=DIRECTORY also affects the
    default module search path (sys.path), when Modules/config.c is
    compiled. Passing make the option prefix=DIRECTORY (and/or
    exec_prefix=DIRECTORY) overrides the prefix set at configuration
    time; this may be more convenient than re-running the configure
    script if you change your mind about the install prefix.

--with-readline: This option is no longer supported. GNU readline is
    automatically enabled by setup.py when present.

--with-threads: On most Unix systems, you can now use multiple
    threads, and support for this is enabled by default. To disable
    this, pass --with-threads=no. If the library required for threads
    lives in a peculiar place, you can use --with-thread=DIRECTORY.
    IMPORTANT: run "make clean" after changing (either enabling or
    disabling) this option, or you will get link errors! Note: for
    DEC Unix use --with-dec-threads instead.

--with-sgi-dl: On SGI IRIX 4, dynamic loading of extension modules is
    supported by the "dl" library by Jack Jansen, which is ftp'able
    from ftp://ftp.cwi.nl/pub/dynload/dl-1.6.tar.Z. This is enabled
    (after you've ftp'ed and compiled the dl library) by passing
    --with-sgi-dl=DIRECTORY where DIRECTORY is the absolute pathname
    of the dl library. (Don't bother on IRIX 5, it already has dynamic
    linking using SunOS style shared libraries.) THIS OPTION IS
    UNSUPPORTED.

--with-dl-dld: Dynamic loading of modules is rumored to be supported
    on some other systems: VAX (Ultrix), Sun3 (SunOS 3.4), Sequent
    Symmetry (Dynix), and Atari ST. This is done using a combination
    of the GNU dynamic loading package
    (ftp://ftp.cwi.nl/pub/dynload/dl-dld-1.1.tar.Z) and an emulation
    of the SGI dl library mentioned above (the emulation can be found
    at ftp://ftp.cwi.nl/pub/dynload/dld-3.2.3.tar.Z). To enable this,
    ftp and compile both libraries, then call configure, passing it
    the option --with-dl-dld=DL_DIRECTORY,DLD_DIRECTORY where
    DL_DIRECTORY is the absolute pathname of the dl emulation library
    and DLD_DIRECTORY is the absolute pathname of the GNU dld library.
    (Don't bother on SunOS 4 or 5, they already have dynamic linking
    using shared libraries.) THIS OPTION IS UNSUPPORTED.

--with-libm, --with-libc: It is possible to specify alternative
    versions for the Math library (default -lm) and the C library
    (default the empty string) using the options --with-libm=STRING
    and --with-libc=STRING, respectively. For example, if your system
    requires that you pass -lc_s to the C compiler to use the shared C
    library, you can pass --with-libc=-lc_s. These libraries are
    passed after all other libraries, the C library last.

--with-libs='libs': Add 'libs' to the LIBS that the python interpreter
    is linked against.

--with-cxx-main=<compiler>: If you plan to use C++ extension modules,
    then -- on some platforms -- you need to compile python's main()
    function with the C++ compiler. With this option, make will use
    <compiler> to compile main() *and* to link the python executable.
    It is likely that the resulting executable depends on the C++
    runtime library of <compiler>. (The default is
    --without-cxx-main.)

    There are platforms that do not require you to build Python with a
    C++ compiler in order to use C++ extension modules. E.g., x86
    Linux with ELF shared binaries and GCC 3.x, 4.x is such a
    platform.
    We recommend that you configure Python --without-cxx-main on those
    platforms because a mismatch between the C++ compiler version used
    to build Python and to build a C++ extension module is likely to
    cause a crash at runtime.

    The Python installation also stores the variable CXX that
    determines, e.g., the C++ compiler distutils calls by default to
    build C++ extensions. If you set CXX on the configure command line
    to any string of non-zero length, then configure won't change CXX.
    If you do not preset CXX but pass --with-cxx-main=<compiler>, then
    configure sets CXX=<compiler>. In all other cases, configure looks
    for a C++ compiler by some common names (c++, g++, gcc, CC, cxx,
    cc++, cl) and sets CXX to the first compiler it finds. If it does
    not find any C++ compiler, then it sets CXX="".

    Similarly, if you want to change the command used to link the
    python executable, then set LINKCC on the configure command line.

--with-pydebug: Enable additional debugging code to help track down
    memory management problems. This allows printing a list of all
    live objects when the interpreter terminates.

--with(out)-universal-newlines: enable reading of text files with
    foreign newline convention (default: enabled). In other words, any
    of \r, \n or \r\n is acceptable as end-of-line character. If
    enabled import and execfile will automatically accept any newline
    in files. Python code can open a file with open(file, 'U') to read
    it in universal newline mode. THIS OPTION IS UNSUPPORTED.

--with-tsc: Profile using the Pentium timestamping counter (TSC).

--with-system-ffi: Build the _ctypes extension module using an ffi
    library installed on the system.


Building for multiple architectures (using the VPATH feature)
-------------------------------------------------------------

If your file system is shared between multiple architectures, it
usually is not necessary to make copies of the sources for each
architecture you want to support. If the make program supports the
VPATH feature, you can create an empty build directory for each
architecture, and in each directory run the configure script (on the
appropriate machine with the appropriate options). This creates the
necessary subdirectories and the Makefiles therein. The Makefiles
contain a line VPATH=... which points to a directory containing the
actual sources. (On SGI systems, use "smake -J1" instead of "make" if
you use VPATH -- don't try gnumake.)

For example, the following is all you need to build a minimal Python
in /usr/tmp/python (assuming ~guido/src/python is the toplevel
directory and you want to build in /usr/tmp/python):

    $ mkdir /usr/tmp/python
    $ cd /usr/tmp/python
    $ ~guido/src/python/configure
    [...]
    $ make
    [...]
    $
Note that configure copies the original Setup file to the build
directory if it finds no Setup file there. This means that you can
edit the Setup file for each architecture independently. For this
reason, subsequent changes to the original Setup file are not tracked
automatically, as they might overwrite local changes. To force a copy
of a changed original Setup file, delete the target Setup file. (The
makesetup script supports multiple input files, so if you want to be
fancy you can change the rules to create an empty Setup.local if it
doesn't exist and run it with arguments \$(srcdir)/Setup Setup.local;
however this assumes that you only need to add modules.)
Also note that you can't use a workspace for VPATH and non VPATH builds. The
object files left behind by one version confuses the other.
Building on non-UNIX systems
----------------------------
For Windows (2000/NT/ME/98/95), assuming you have MS VC++ 7.1, the
project files are in PCbuild, the workspace is pcbuild.dsw. See
For other non-Unix Windows compilers, in particular MS VC++ 6.0 and
For the Mac, a separate source distribution will be made available,
for use with the CodeWarrior compiler. If you are interested in Mac
development, join the PythonMac Special Interest Group
(http://www.python.org/sigs/pythonmac-sig/, or send email to
[email protected]).
Of course, there are also binary distributions available for these
platforms -- see http://www.python.org/.
To port Python to a new non-UNIX system, you will have to fake the
effect of running the configure script manually (for Mac and PC, this
has already been done for you). A good start is to copy the file
pyconfig.h.in to pyconfig.h and edit the latter to reflect the actual
configuration of your system. Most symbols must simply be defined as
1 only if the corresponding feature is present and can be left alone
otherwise; however the *_t type symbols must be defined as some
variant of int if they need to be defined at all.
For all platforms, it's important that the build arrange to define the
preprocessor symbol NDEBUG on the compiler command line in a release
build of Python (else assert() calls remain in the code, hurting
release-build performance). The Unix, Windows and Mac builds already
do this.
Miscellaneous issues
====================
Emacs mode
----------
There's an excellent Emacs editing mode for Python code; see the file
Misc/python-mode.el. Originally written by the famous Tim Peters, it
is now maintained by the equally famous Barry Warsaw (it's no
coincidence that they now both work on the same team). The latest
version, along with various other contributed Python-related Emacs
goodies, is online at http://www.python.org/emacs/python-mode. And
if you are planning to edit the Python C code, please pick up the
contains a "python" style used throughout most of the Python C source
files. (Newer versions of Emacs or XEmacs may already come with the
Tkinter
-------
The setup.py script automatically configures this when it detects a
usable Tcl/Tk installation. This requires Tcl/Tk version 8.0 or
higher.
For more Tkinter information, see the Tkinter Resource page:
http://www.python.org/topics/tkinter/
There are demos in the Demo/tkinter directory.
Note that there's a Python module called "Tkinter" (capital T) which
lives in Lib/lib-tk/Tkinter.py, and a C module called "_tkinter"
(lower case t and leading underscore) which lives in
Modules/_tkinter.c. Demos and normal Tk applications import only the
Python Tkinter module -- only the latter imports the C _tkinter
module. In order to find the C _tkinter module, it must be compiled
and linked into the Python interpreter -- the setup.py script does
this. In order to find the Python Tkinter module, sys.path must be
set correctly -- normal installation takes care of this.
Distribution structure
----------------------
Most subdirectories have their own README files. Most files have
Demo/ Demonstration scripts, modules and programs
Doc/ Documentation sources (reStructuredText)
Grammar/ Input for the parser generator
Lib/ Python library modules
Mac/ Macintosh specific resources
Makefile.pre.in Source from which config.status creates the Makefile.pre
Misc/ Miscellaneous useful files
Modules/ Implementation of most built-in modules
Objects/ Implementation of most built-in object types
PC/ Files specific to PC ports (DOS, Windows, OS/2)
PCbuild/ Build directory for Microsoft Visual C++
Parser/ The parser and tokenizer and their input handling
Python/ The byte-compiler and interpreter
RISCOS/ Files specific to RISC OS port
Tools/ Some useful programs written in Python
pyconfig.h.in Source from which pyconfig.h is created (GNU autoheader output)
configure Configuration shell script (GNU autoconf output)
configure.in Configuration specification (input for GNU autoconf)
install-sh Shell script used to install files
setup.py Python script used to build extension modules
The following files will (may) be created in the toplevel directory by
the configuration and build processes:
Makefile Build rules
Makefile.pre Build rules before running Modules/makesetup
buildno Keeps track of the build number
config.cache Cache of configuration variables
config.log Log from last configure run
config.status Status from last run of the configure script
getbuildinfo.o Object file from Modules/getbuildinfo.c
libpython<version>.a The library archive
python The executable interpreter
reflog.txt Output from running the regression suite with the -R flag
tags, TAGS Tags files for vi and Emacs
That's all, folks!
------------------
## Project details
Uploaded source` | 2022-08-13 10:34:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29064446687698364, "perplexity": 13352.819523262751}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571911.5/warc/CC-MAIN-20220813081639-20220813111639-00199.warc.gz"} |
https://brilliant.org/problems/combinatorial-sum-2/ | # Combinatorial Sum
Discrete Mathematics Level 4
$\sum_{r=1}^{n}r^3 {n \choose r} = 2^{n-3}\left(n^3+an^2+bn+c\right)$
Find the value of $$a^2+b^2+c^2$$ if the above equation is true for all integers $$r$$ and $$n$$.
× | 2016-10-22 11:44:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6465564966201782, "perplexity": 605.0220828496406}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718957.31/warc/CC-MAIN-20161020183838-00308-ip-10-171-6-4.ec2.internal.warc.gz"} |
https://tex.stackexchange.com/questions/433436/using-line-inside-of-title | # Using \line Inside of \title
I am currently working on formatting a title page for my MSc thesis and am having a hard time getting some lines over and under my title to line up.
Here is what I have so far:
\linethickness{0.05cm}
\title{\line(1,0){450} \\ MSc Thesis \line(1,0){450}}
\author{\Large John Smith \\ \normalsize Supervisor: Dr. John Smith}
\maketitle
As you can see from the picture, the bottom line is further from the title than the top line and I cant figure out how to fix it. Does anyone have any suggestions?
Thanks!
The titling package has several tools to customise the layout of title pages:
\documentclass[a4paper, twoside, 11pt]{article}
\usepackage[utf8]{inputenc}
\usepackage{titling}
\makeatletter
\def\supervisor#1{\gdef\@supervisor{#1}}
\supervisor{John Smith, Esq.}
%
\setlength{\droptitle}{2cm}
\maketitlehooka{\thispagestyle{empty}}
\pretitle{\noindent\rule{\textwidth}{0.5mm }\vspace*{3ex}\par\centering\bfseries\LARGE}
\title{MSc Thesis}
\posttitle{\vspace{2ex}\par\rule{\textwidth}{0.5mm}}
\preauthor{\bigskip\begin{center}\Large}
\author{Jack Smith}
\postauthor{\medskip\par\normalsize Supervisor: \@supervisor\par\end{center}}
\predate{\bigskip\begin{center}}
\postdate{\end{center}\cleardoublepage}
\makeatother
\begin{document}
\maketitle
Blablabla
\end{document}
I think that it's better to redefine the environment. In this way you can define a new command for supervisor and don't use author for both you and the supervisor. Also you have more control over the page.
You could create commands like \title, \author, etc... with:
\makeatletter
\newcommand{\@supervisor}{}
\makeatother
You can set the supervisor with \supervisor{Dr. John Smith} and you can use its value with \@supervisor just like \title, \author and similar.
The original definition of the title page is through the titlepage environment.
\begin{titlepage}%
\let\footnotesize\small
\let\footnoterule\relax
\let \footnote \thanks
\null\vfil
\vskip 60\p@
\begin{center}%
{\LARGE \@title \par}%
\vskip 3em%
{\large
\lineskip .75em%
\begin{tabular}[t]{c}%
\@author
\end{tabular}\par}%
\vskip 1.5em%
{\large \@date \par}% % Set date in \large size.
\end{center}\par
\@thanks
\vfil\null
\end{titlepage}%
You could modify it adding lines and supervisor. In the following the complete solution with the result.
\documentclass{article}
\makeatletter
\newcommand{\@supervisor}{}
\makeatother
\title{MSc Thesis}
\author{John Smith}
\supervisor{Dr. John Smith}
\begin{document}
\makeatletter
\begin{titlepage}%
\let\footnotesize\small
\let\footnoterule\relax
\let \footnote \thanks
\null\vfil
\vskip 60\p@
\begin{center}%
\hrule height .5mm
\vspace{25pt}
{\LARGE \@title \par}%
\vspace{25pt}
\hrule height .5mm
\vskip 3em%
{\large
\lineskip .75em%
\begin{tabular}[t]{c}%
{\Large\@author} \\
{\normalsize\@supervisor}
\end{tabular}\par}%
\vskip 1.5em%
{\large \@date \par}% % Set date in \large size.
\end{center}\par
\@thanks
\vfil\null
\end{titlepage}%
\makeatother
\end{document}
Of course another valid solution is to redefine \maketitle command. However in my humble opinion there is no need for that if you're not going to create a class.
It is probably better to use \hrule in this case,
\documentclass[a4paper,12pt]{article}
\usepackage[english]{babel}
\title{
\hrule height 1mm
\vspace{25pt}
MSc Thesis
\vspace{25pt}
\hrule height .25mm
}
\author{\Large John Smith \\ \normalsize Supervisor: Dr. John Smith}
\begin{document}
\maketitle
\end{document}
You can tune the height of each \hrule by passing the right argument, also the width can be adjusted.
Hope that helps.
Romain
• Glad it was what you were looking for, you can accept the answer if you are happy with it :-) – RockyRock May 25 '18 at 22:59
• Using commands like \hrule and \vspace inside the argument of \title is not recommendable: rather, have recourse to the titlepage environment, that lets you format the title page as you wish. @CameronF.: This remarks applies to the original question too. – GuM May 25 '18 at 23:06 | 2019-08-20 17:30:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9678031206130981, "perplexity": 3655.7905493205003}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315551.61/warc/CC-MAIN-20190820154633-20190820180633-00197.warc.gz"} |
http://chokedflow.blogspot.co.uk/2012/10/pressure-vessel-burst-example.html | ## Saturday, 20 October 2012
### Pressure Vessel Burst Example
I have been doing some research lately into pressure vessel bursts in order to get an idea of the lethal distance in the event of a burst/explosion. I thought I would share an example as it might be useful to others.
In a pressure vessel burst there are two things that can hurt you, the pressure/shock wave and fragments of the vessel. In this post I will go through calculating the "overpressure" - the pressure above atmospheric and the impulse - The force due to the overpressure/ fast moving air.
What does that mean? Well, in a explosion, the pressure time relationship at a point some distance away from the blast will look like this:
The overpressure is the highest pressure experienced at the point and the impulse being the integral of the overpressure experienced as a force or wind. What does it mean? Well 10PSI or 0.069MPa will result in death of most people.
Internal Pressure:$P_1=3.5MPa$
Ambient Pressure:$P_0=0.1MPa$
Tank Volume: $V=9L$
Height of tank (mid point) $H_s = 1m$
Tank Diameter: $D=0.24m$
Tank Length: $L=0.5m$
Gamma: $\gamma=1.4$ (nitrogen)
Distance at which pressure damage is calculated: $D=5m$
Ambient speed of sound $a_0 = 340m/s$
Assumptions: Cylindrical pressure vessel at ground level with vertical orientation. All energy gets released from the vessel. In reality as much as %30-%40 would get transferred into fragments depending on the material and failure conditions.
## Energy stored in vessel:
$E=\frac{(P_1-P_0)*2*V_1}{\gamma-1}$
$=\frac{(3.5-0.1)*2*9e-3}{\1.4-1}$
$=0.1125MJ$
## Burst Pressure Ratio:
$=P_1/P_0$
$=3.5/0.1$
$=35$
## Scaled Standoff Distance:
$\bar{D}=D(\frac{P_0}{E})^\frac{1}{3}$
$=D(\frac{0.1e6}{0.1125e6})^\frac{1}{3}$
$=4.8$
## Scaled Side-On peak overpressure:
The above plot shows scales standoff distance vs scaled side-on peak overpressure for a variety of burst pressure ratios. So our scaled side-on peak overpressure is around 0.04
$\bar{P_s}=0.04$
## Scaled Side-On Impulse:
Above is scaled standoff distance vs scaled side-on impulse. Scaled side-on impulse is around 0.03
$\bar{i_s}=0.03$
## Correct for tank geometry:
The above plots are only true for a spherical pressure vessel in free air. In reality the ground reflects the shock wave generated when the vessel bursts and increases the pressure. Also a cylindrical pressure vessel can result in a higher overpressure depending on its orientation.
It is generally accepted that the ground doubles the effective length of the vessel for a upright cylinder
$L'/D = 2*L/D$
$=4$
Interestingly, for a horizontal vessel:
$L'/D = L/D^\frac{1}{2}$
The ratio of vessel height to diameter:
$H/R = H/D/2$
$=8$
The above are plots of Scaled standoff radius vs overpressure and impulse ratios (correction factors) for L/D and also H/R we can see we need that we need to multiply our scaled side on peak overpressure by 1.6 and 1.3. Also we need to multiply the impulse by 1.2 twice to account for the height and geometry.
$\bar{P_s}=0.04*1.3*1.6$
$=0.0768$
$\bar{i_s}=0.03*1.2*1.2$
$=0.0468$
## Side-on peak overpressure:
So the side on pressure is simply the scales value multiplied by the atmospheric pressure.
$P_s=P_0*\bar{P_s}$
$=.0768MPa$
And the side on impulse is given by:
$I_s=\frac{\bar{i_s}*P_0^\frac{2}{3}*E^\frac{1}{3}}{a_0}$
$I_s=14.31Pa-s$
In the next post I will go into calculating the distance fragments from the explosion could travel. Fragments are what you would really want to worry about for a small, thin walled pressure vessel. They are much more dificult to account for because the fragmentation depends much more on the
vessel and failure conditions. | 2018-03-18 11:57:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6304861307144165, "perplexity": 2127.2956977192816}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645613.9/warc/CC-MAIN-20180318110736-20180318130736-00189.warc.gz"} |
https://tarek.org/wiki/index.php?title=Pharmacokinetics | Pharmacokinetics
Error creating thumbnail: File missing
Models of drug distribution and elimination
Pharmacokinetics is the study of drug movement in the body (the dose-concentration part, whereas pharmacodynamics deals with the concentration-effect part), and involves the studies of aborption, distribution and excretion. Knowledge of pharmacokinetics (aka drug kinetics) is a prerequisite for rational use of any drug used in therapy, and a must for effective therapeutic monitoring of drugs with narrow therapeutic ranges. It is also useful in the prediction of drug-drug and disease-drug interactions.
Absorption
For a drug to be effective, it must first be introduced into the body, which can happen via intravenous injection, the oral route, inhalation, subcutaneously, intramuscularly, rectally and transdermally. The proportion of drug that reaches the systemic circulation in an unaltered form is the bioavailability of the drug (F).
Usually, drug absorption is a first-order (linear) process, with the rate represented by the constant Ka (total drug aborbed per unit time from the site of administration). It can be affected by the rate of diffusion from the gastrointestinal tract, which is in turn affected by the partition coefficient (solubility), surface area, and difference in concentration. Absorption can also be altered by food. For example, tetracycline taken while fasting is much more effective than when it is taken with milk. Also, grapefruit juice can affect the rate of absorption of drugs [1], such as its effect in increasing the efficacy of felodipine.
Distribution
Distribution of a drug is measured by the apparent volume of distribution $\left ( V_d = \frac{dose}{C_{plasma}} \right )$. This measure is the volume into which the drug appears to distribute, though it is obviously not a real volume when compared to the physiological volume capacities involved. Variation in Vd is due to the degree of heterogeneity with which the drug is stored or processed across various tissues of the body. The volume of distribution is necessary for calculating loading doses and for estimating the dose needed to achieve a therapeutic level.
Elimination
Major contributors to the elimination of drugs include the liver (hepatic metabolism), gut metabolism and transporters.
The half-life of a drug is the amount of time required to eliminate half of the drug. Usually, drugs display first-order elimination properties, meaning that their half-life can be plotted on an exponential curve that can then be straightened by logarithmic manipulation.
Clearance $\left ( {CL} = \frac{volume}{time} \right )$ is the volume of circulating blood from which all drug is removed in a unit time. The rate of clearance affects the half-life, and is usually the largest factor in any change in half-life.
Drug Accumulation
When drugs are repeatedly administered, drug accumulation must be taken into effect. This is because it takes an infinite time (in theory) to eliminate all of a given dose. In practical terms, if the dosing interval is shorter than four half-lives, accumulation will be detectable, since it takes 4-5 half-lives for the drug concentration reaches the plateau, also known as the steady-state. Steady-state is the concentration achieved when the rate of drug input into the body is equal to the rate of output.
Usually, a loading dose is first administered to get the body into the therapeutic range, followed by a maintenance dose.
Models
There are two relevant pharmacokinetic models: Single and two compartments. | 2019-09-18 05:58:52 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7808853387832642, "perplexity": 1722.99107351229}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573184.25/warc/CC-MAIN-20190918044831-20190918070831-00444.warc.gz"} |
https://physics.stackexchange.com/questions/243293/evaluating-the-components-of-maxwells-stress-tensor | # Evaluating the components of Maxwell's stress tensor
I was going through the Maxwell's stress tensor section of Introduction to Electrodynamics by Griffiths. In the example 8.2(screenshot below),
I fail to understand how the equation 8.23 (in the image) gives, $$\frac{\epsilon_{0}}{2}\big(\frac{Q}{4\pi{\epsilon_0} r}\big)^2 \sin\theta \cos\theta \,d{\theta}\,d{\phi}.$$ If we use the equation $$\hat{r}=\sin{\theta} \cos{\phi} \hat{x}+ \sin\theta \sin\phi \hat{y} + \cos\theta \hat{z}$$ in $$da=R^2 \sin\theta \,d\theta \,d\phi \hat{r},$$ and substitute it in $\big(T\cdot da \big)_z$ then it gives a different answer. Can somebody elaborate on the steps linking $\big(T\cdot da \big)_z$ and $$\frac{\epsilon_{0}}{2}\big(\frac{Q}{4\pi{\epsilon_0} r}\big)^2 \sin\theta \cos\theta \,d{\theta}\,d{\phi}~?$$
PS: I really apologize if the question seems too silly but I have spent many hours wondering about this. I also do not have a great background in Tensors.
• If you want your work to be checked for a clumsy mistake, write it out; it's impossible to tell where you might have went wrong. – knzhou Mar 13 '16 at 23:43
$$(\mathbf T \cdot \mathbf{da})_z = \epsilon_0 \left( \frac{Q}{4\pi \epsilon_0 R^2} \right)^2 R^2 \sin \theta \,d \theta d \phi \left[ \sin^2 \theta \cos^2 \phi \cos \theta + \sin^2 \theta \sin^2 \phi \cos \theta + \frac{\cos \theta}{2} (\cos^2 \theta - \sin^2 \theta) \right] = \epsilon_0 \left( \frac{Q}{4\pi \epsilon_0 R^2} \right)^2 R^2 \sin \theta \,d \theta d \phi \left[ \sin^2 \theta \cos \theta + \frac{\cos \theta}{2} (\cos^2 \theta - \sin^2 \theta) \right]$$ I'm sure you can conclude from here | 2020-02-27 09:06:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.869962215423584, "perplexity": 210.50209818593981}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146665.7/warc/CC-MAIN-20200227063824-20200227093824-00270.warc.gz"} |
http://nlassignmentmvre.taxiservicecharleston.us/functions-defined-by-integrals-ws.html | # Functions defined by integrals ws
Resulting in the answer for the integral: # remark: maple worksheet output is in eps (encapsulated postscript) # remark: output is left in int(exp(-xx),x=0infinity) # resulting in the answer for the integral: # remark: pi = 4atan(1) integration using a maple user defined function. 0 ws ds in general, integrating an adapted function (ito or riemann integral) gives an- other adapted function options that depends on such integrals are asian op- tions in each case, the value ft is determined by w[0,t] the ito integral (3) is defined as a limit of ito-riemann sums much in the way the riemann integral is. Worksheet # 25: definite integrals and the fundamental theorem of calculus • worksheet # 26: net change and the substitution method • worksheet # 27: transcendental functions and other integrals • worksheet # 28: (a) define what it means for f(x) to be continuous at the point x = a what does it. The int(expression, x) calling sequence computes an indefinite integral of the expression with respect to the variable x note: no constant of integration appears in the if maple cannot find a closed form expression for the integral (or the floating-point value for definite integrals with float limits), the function call is returned.
This unit deals with the definite integral it explains how it is defined, how it is calculated and some of the ways in which it is used we shall assume that you are already familiar with the process of finding indefinite inte- grals or primitive functions (sometimes called anti-differentiation) and are able to 'anti- differentiate' a. Properties of definite integrals, 32c2 definite integrals of functions with discontinuities, 32c3 improper integrals (bc), 32d1 32d2 analyze functions defined by an integral, 33a1 33a2 33a3 second fundamental theorem of calculus, 33b1 first fundamental theorem of calculus, 33b2 indefinite integrals, 33. I use worksheet 2 after introducing the first fundamental theorem of calculus in order to explore the second fundamental theorem of calculus • i use worksheet 3 as a review of graphical analysis using the first and second derivatives of functions defined by integrals worksheets and ap examination questions each of. Define a function g by g(x) = 3 for all real x use the interpretation of the definite integral as a signed area to find a formula for g(x) = ∫ x 4 g(t) dt which is valid for all real x (including x ≤ 4) how is g (x) related to g(x) what happens if we forget what ∫ x 4 g(t) dt is supposed to be when x 4 8.
Integration of functions of a single variable 87 chapter 13 the riemann integral 89 131 background 89 132 exercises 90 133 problems 93 134 answers to odd-numbered exercises 95 chapter 14 the fundamental theorem of calculus 97 141 background 97. Since the definite integral of a positive function f represents area under the graph of f, we may think of f(x) as the area under the graph of f(t) between t = 0 and t = x calculate the approximate derivative of f (difference quotient) and the corresponding value of f at each of the x values specified in your worksheet fill in the.
Free calculus worksheets with questions and problems and detailed solutions to download.
The following problems involve the limit definition of the definite integral of a continuous function of one variable on a closed, bounded interval begin with a for \$ i = 1, 2, 3 , n \$ and define \$ mesh = \displaystyle{ \max_{1 \le i \le n the definite integral of \$ f \$ on the interval \$ [a, b] \$ is most generally defined to be. 34 the graph of consists of line segments and a semicircle, as shown evaluate each definite integral by using geometric formulas (a) (b) (c) (d) (e) (f) 35 consider the function that is continuous on the interval and for which (a) (b) (c) (d) (if f is even) (if f is odd) 36 a function is defined below use geometric formulas. In mathematics, functions on functions are called operators (linear operators in cases like integral ) however, once you accept that functions aren't special, then the word operator can be replaced by function in haskell, there is nothing special about operators they are simply functions, and you define.
## Functions defined by integrals ws
The itô integral in ordinary calculus, the (riemann) integral is defined by a limiting procedure one first defines the integral of a step function, in such a way that ws dws as n → ∞ the second sum is the same sum that occurs in the quadratic variation formula (lecture 5), and so converges, as n → ∞, to 1 therefore. The definition of the definite integral as a limit of riemann sums is given in section 52 the concept of area vs signed area is introduced many properties of integrals are given using area-based geometric arguments worksheet: in integral calculus, we study functions that are defined as the area under the graph of. Functions defined by integrals switched interval video img credit : khanacademy org worksheet on functions defined by integrals answers second fundamental theorem of calculus pbworks second fundamental theorem of calculus justify your answers c worksheet 2 on functions defined by integrals 1 yx22.
Improper integrals like the ones we have been considering in class have many applications, for example in thermodynamics have read to the exercises, start up maple, load the worksheet probability startmws, and go through it carefully by for example, the general exponential probability density function is defined as. Improper integrals 1 infinite limits of integration 2 integrals with vertical asymptotes ie with infinite discontinuity ryan blair (u penn) math 104: improper integrals tuesday march 12 each integral on the previous page is defined as a limit if the limit is finite is 0 when x = 2, so the function is not even. Sal evaluates a function defined by the integral of a graphed function in order to evaluate he must switch the sides of the interval practice this lesson y.
Check your understanding of integration in calculus problems with this interactive quiz and printable worksheet these practice assets will help. Subtraction “cancels out” that constant such integrals are called definite integrals because we are substituting definite values of x worksheet 1 definite integrals 1 evaluate the following functions defined as definite integrals sometimes functions are defined in terms of another integral for example, (. Ws(x)u) б m-zia) for 0 g x g 1 if mx = l, then 1 1 2 = 1 1 and (11) coincides with the cauchy-riemann equations 2 differentiation 2-monogenic functions let/ = «+' 1944] functions defined by partial differential equations 75 in other words: (45a) /•x l r r r dxn — i r2 • • • i cr2 | -in integrals, n odd).
Functions defined by integrals ws
Rated 4/5 based on 19 review | 2018-10-23 10:26:47 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9162731766700745, "perplexity": 632.746155595238}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583516123.97/warc/CC-MAIN-20181023090235-20181023111735-00298.warc.gz"} |
http://mathhelpforum.com/geometry/173751-triangles.html | # Math Help - Triangles
1. ## Triangles
Hello,
I need the solution of the following question:
Consider a triangle $\triangle ABC$ and let $X$ be a point on the side $AB$ such that $AX=\frac{1}{3}AB$ and let $Y$ be a point on the side $AC$ such that $CY=\frac{1}{3}AC$. Prove that the area of the triangle $\triangle AXC$ equals the area of the triangle $\triangle BYC$.
Best Regards.
2. Hello, raed!
$\text{Consider }\Delta ABC\text{; let }X\text{ be a point on side }AB\text{ such that: }AX\,=\,\frac{1}{3}AB$
$\text{and let }Y\text{ be a point on the side }AC\text{ such that }CY\,=\,\frac{1}{3}AC$.
$\text{Prove that the area of }\Delta AXC\text{ equals the area of }\Delta BYC.$
Side $\,AB$ is divided in the ratio $1:2$
Compare $\Delta AXC$ and $\Delta ABC.$
They have the same height $\,h.$
Code:
C
o
**|*
* * | *
* * | *
* * |h *
* * | *
* * | *
* * | *
A o * * * o * * * * * * * o B
: - 1 - X - - - 2 - - - :
The base of $\Delta AXC$ is one-third the base of $\Delta ABC.$
Hence: . $(\text{area }\Delta AXC) \;=\;\frac{1}{3}(\text{area }\Delta ABC)$
In a similar fashion, we prove that: . $\text{(Area }\Delta BYC)} \;=\;\frac{1}{3}(\text{area }\Delta ABC})$
And we're done!
3. Originally Posted by Soroban
Hello, raed!
Side $\,AB$ is divided in the ratio $1:2$
Compare $\Delta AXC$ and $\Delta ABC.$
They have the same height $\,h.$
Code:
C
o
**|*
* * | *
* * | *
* * |h *
* * | *
* * | *
* * | *
A o * * * o * * * * * * * o B
: - 1 - X - - - 2 - - - :
The base of $\Delta AXC$ is one-third the base of $\Delta ABC.$
Hence: . $(\text{area }\Delta AXC) \;=\;\frac{1}{3}(\text{area }\Delta ABC)$
In a similar fashion, we prove that: . $\text{(Area }\Delta BYC)} \;=\;\frac{1}{3}(\text{area }\Delta ABC})$
And we're done!
Thank you very much for your reply.
Best Regards, | 2015-07-29 20:55:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 30, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7676385641098022, "perplexity": 522.2277260583515}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042986625.58/warc/CC-MAIN-20150728002306-00203-ip-10-236-191-2.ec2.internal.warc.gz"} |
http://mathoverflow.net/questions/167736/analogy-between-integers-and-permutations | # Analogy between Integers and Permutations
I am reading Andrew Granville's Anatomy of Integers and Permutations where it is argued the factorization of a permutation into disjoint cycles is analogous to the factorization of a number into prime factors.
In the blog-sphere you can find these two ways of defining partitions of unity:
• $m = p_1\cdots{p_k} \in \mathbb{Z}$ and $\{ \frac{\log p_1}{\log m}, \dots, \frac{\log p_k}{\log m}\}$.
• $\sigma = C_1\dots C_k \in S_n$ and $\{ \frac{|C_1|}{n}, \dots, \frac{|C_n|}{n} \}$
One can prove both of these converge to the Poisson-Dirichlet process. It looks like $n \approx \log m$ in this analogy and $\mathbb{Z} \simeq S_n$.
This is a correspondence between partitions, but what could be permuted in the $\mathbb{Z}$ side?
It seems necessary to clarify, that the analogy between $\mathbb{Z}, \mathbb{F}_q[t],\mathbb{C}(z)$ has gotten attention recently and I have not read them closely enough:
I have found many individual parts of these papers difficult to grasp - and they are put together - and I maybe I can ask more questions on these topics later?
Today, my question may have to do with the last link... suppose we do have this machine comparing statistics on the function field $\mathbb{F}_q[t]$ to statistics of $S_n$ as Qiaochu say. How do we "dequantize" to get a result in $\mathbb{Z}$? The implication is there is some kind of permutation group action on the integers and I was wondering what it could be.
Maybe it's $q \to 1$ limit?
-
Do you think it's more than "if you compute certain statistics for (a) prime factorization; and (b) cycle decomposition of permutations, then for large $n$, the distributions are close to the same thing"? – Anthony Quas May 20 '14 at 23:44
I think the objects are $\mathbb{Z}/p\mathbb{Z}$ actions on the permutation groups $S_n$. – john mangual May 20 '14 at 23:53
Having a 1 min glance at the Granville article, it seems to me that the objects are integers and permutation. He factorizes integers into primes and gets a "partition of unity" from that (i.e. $(\log p_1/\log n,\ldots,\log p_d/\log n)$). He factorizes permutations into cycles and gets a partition of unity from that $(\ell_1(\sigma),\ldots,\ell_d(\sigma))$. So I think the analogy is 1) pick a prime of size roughly $e^N$ and find its partition of unity; pick a permutation on roughly $N$ symbols and find its partition of unity. Lo and Behold! they have (roughly) the same distribution! – Anthony Quas May 21 '14 at 5:20
Probably I'm telling you what you saw already. Apologies if this is the case. – Anthony Quas May 21 '14 at 5:21
The essential common feature that insures convergence to a Poisson Dirichlet distribution is explained in the book "Logarithmic Combinatorial Structures" by Arratia Barbour and Tavare. They do a great job, I think.
-
it's possible to extend the analogy to the factorization of polynomials over finite fields $\mathbb{F}_q$ (see this blog post for details; one needs to take $q \to \infty$ for the statistics to match, and in the post I only verify convergence in the sense of moments and am a bit sloppy). In this setting the permutation is Frobenius. But I don't think there's an analogous statement on the number field side.
I think results like this should be thought of as central limit-type theorems more than anything else; there are certain kinds of statistics that occur universally in certain general situations which otherwise don't necessarily have much in common.
-
I think I understand the Frobenius map really well $x \mapsto x^q$ but I certainly do not get Étale cohomology. – john mangual May 21 '14 at 0:14
What does that comment have to do with my answer? – Qiaochu Yuan May 21 '14 at 0:16
I am not asking about function fields, I am asking about number fields and $\mathbb{Z}$ – john mangual May 21 '14 at 0:18
I find that comment strange. If you're willing to take an analogy between prime factorization and cycle decomposition seriously I don't see how it could hurt to know that it factors through an analogy to factorization of polynomials over finite fields, which on the one hand is part of a well-established analogy between number fields and function fields (which requires no knowledge of etale cohomology to appreciate) and where on the other hand I can explicitly point to a permutation that determines the factorization. – Qiaochu Yuan May 21 '14 at 0:19
I"m claiming that there isn't anything being permuted if you only look at $\mathbb{Z}$ (obviously the logarithms of primes are not cycle lengths, e.g. they aren't commensurate), but also that there's no reason to restrict your attention to $\mathbb{Z}$. I'm reasonably confident that you get the same kind of statistics for any Dedekind domain $D$ such that $D/P$ is always finite for all prime ideals $P$. – Qiaochu Yuan May 21 '14 at 0:27
If I remember well, there is also a correspondence between the degrees of factors of polynomials in $\mathbb{F}_q[X]$, the size of cycles of riffle-shuffle permutations (a brand of non-uniform random permutations ), and the size of factors in the Lyndon decomposition of random words on an alphabet with $q$ letters, with the Poisson Dirichlet distribution as an asymptotic again. I think I found this in the following paper : Diaconis, M.J. McGrath, and J. Pitman, Riffle shuffles, cycles, and descents, Combinatorica, 1995, in which a correspondence by Ira Gessel was mentioned.
- | 2016-05-30 16:35:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8104169368743896, "perplexity": 321.30221872758324}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464051036499.80/warc/CC-MAIN-20160524005036-00180-ip-10-185-217-139.ec2.internal.warc.gz"} |
https://www.philippyfred.com/post/nn_by_hand/ | # Constructing a neural network by hand
Image credit: www.pixabay.com
# Data simulation
We have the following data simulation procedure:
We start by choosing the number of points for each class ($$N=100$$) as well as the number of classes ($$K=2$$). So, we have $$N * K = 200$$ points. We also choose the data dimension $$p$$. For our example we choose $$p=2$$, which means that the points are in $$\mathbb{R}^2$$ and that the coordinates of such a point are represented by a vector of length 2.
N = 100 # number of points by class
p = 2 # data dimension
K = 2 # number of classes
We define the following random variables, where $$r>0$$ is a fixed radius:
\begin{align*} X^{(0)}_1 &\sim 0.3 - 2 \times r \times \cos(\theta) + \mathcal{N}(0,0.02) \\ X^{(0)}_2 &\sim r \times \sin(\theta) + \mathcal{N}(0,0.02) \\ X^{(1)}_1 &\sim -0.3 + 2 \times r \times \cos(\theta) + \mathcal{N}(0,0.02) \\ X^{(1)}_2 &\sim -0.5 + r \times \sin(\theta) + \mathcal{N}(0,0.02) \end{align*}
where $$\theta \sim \mathcal{U} \left( -\frac{\pi}{2},\frac{\pi}{2} \right)$$
Let $$x^{(i)}=\left(x^{(i)}_1,x^{(i)}_2\right)$$ be a point in $$\mathbb{R}^2$$ for $$i = 1,\ldots,N*K$$. This is repeated for our $$N \times K$$ points and each point is associated with $$y^{(i)} \in \{0,1\}$$, which indicates the class in which $$x^{(i)}$$ is located. We then have $$x = \begin{pmatrix} x^{(1)} \\ \vdots \\ x^{(N*K)} \end{pmatrix} = \begin{pmatrix} x_1^{(1)} & x_2^{(1)} \\ \vdots & \vdots \\ x_1^{(N*K)} & x_2^{(N*K)} \end{pmatrix} \;$$ and
$$\; y^{(i)} = \begin{cases} 0 &\text{ if } x^{(i)} \in \mathcal{C}_0 \\ 1 &\text{ if } x^{(i)} \in \mathcal{C}_1 \end{cases} \quad , \; i=1,\ldots,2N$$.
r = 0.5 # radius of the half-moons (assumed value; not specified in the original text)
X = matrix(0,N*K,p) # matrix containing the coordinates of the N*K points
t = runif(N,-pi/2,pi/2) # theta
X[1:N,1] = 0.3-2*r*cos(t)+ rnorm(N,0,0.02)
X[1:N,2] = r*sin(t) + rnorm(N,0,0.02)
t = runif(N,-pi/2,pi/2) # theta
X[N+1:N,1] = -.3+2*r*cos(t)+ rnorm(N,0,0.02) # N+1:N selects rows (N+1):(2N), since ":" binds tighter than "+"
X[N+1:N,2] = -0.5+r*sin(t) + rnorm(N,0,0.02)
y = c(rep(0,N),rep(1,N)) # class labels
dataclassif = data.frame(x1=X[,1],x2=X[,2],y=y)
# Transformation of the vector y for further steps
y = as.numeric(unlist(dataclassif[3]))
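Before plotting, a quick sanity check of the simulated data (a small illustrative snippet, not part of the original workflow):
# Dimensions and class balance of the simulated data
dim(X)    # 200 x 2
table(y)  # 100 points in each class
head(dataclassif, 3)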
# Plot data
library(ggplot2)
ggplot(data=dataclassif, mapping=aes(x=x1,y=x2, color=factor(y))) +
geom_point() +
scale_colour_discrete("Class")
## Complete network function $$\hat{y}(x,\theta)$$
We have $$W^{[1]} = \begin{bmatrix} w_{1 1}^{[1]} & w_{1 2}^{[1]}\\ \vdots & \vdots\\ w_{d 1}^{[1]} & w_{d 2}^{[1]} \end{bmatrix} \quad ; \quad$$ $$b^{[1]} = \begin{bmatrix} b_1^{[1]}\\ \vdots\\ b_d^{[1]} \end{bmatrix} \quad ; \quad$$ $$W^{[2]} = \begin{bmatrix} w_1^{[2]} & \dots & w_d^{[2]} \end{bmatrix} \quad ; \quad$$ $$b^{[2]} \in \mathbb{R}$$
We denote $$\theta = \left(W^{[1]}, b^{[1]}, W^{[2]}, b^{[2]} \right)$$.
We have $$x=(x_1,x_2) \in \mathbb{R}^2$$.
We therefore find
\begin{align*} \hat{y}(x,\theta) &= \sigma\left(Z^{[2]} \right) \\ &= \sigma\left(W^{[2]} \times A^{[1]} + b^{[2]} \right) \\ &= \sigma\left(W^{[2]} \times g\left(Z^{[1]} \right) + b^{[2]} \right) \\ &= \sigma\left(W^{[2]} \times g\left(W^{[1]}x + b^{[1]} \right) + b^{[2]} \right) \\ &= \sigma \left( \sum_{i=1}^{d}w_i^{[2]}g\left(w_{i 1}^{[1]}x_1 + w_{i 2}^{[1]}x_2 + b_i^{[1]}\right) + b^{[2]}\right) \end{align*}
# Partial derivatives of $$\mathcal{L}(\cdot)$$
We look for the parameters $$\theta=\left(W^{[1]}, b^{[1]}, W^{[2]}, b^{[2]}\right)$$ which minimize the cross-entropy loss function
$\mathcal{L}(\theta)=-\sum_{m=1}^{M} \left[y^{(m)} \log \left(\hat{y}\left(x^{(m)}, \theta\right) \right) + \left(1-y^{(m)}\right) \log \left(1-\hat{y}\left(x^{(m)}, \theta\right)\right) \right]$
However, as we minimize the loss function by gradient descent, we first need to compute the gradients.
Remark (Derivative of sigma). We have $$\sigma(z) = \frac{1}{1+e^{-z}}$$. We therefore find that \begin{align*} \frac{\partial \sigma}{\partial z}(z) &= \frac{e^{-z}}{(1+e^{-z})^2} \\ &= \frac{1}{1+e^{-z}} \times \frac{e^{-z}}{1+e^{-z}} \\ &= \frac{1}{1+e^{-z}} \times \left(1-\frac{1}{1+e^{-z}} \right) \\ &= \sigma(z) \times (1-\sigma(z)) \end{align*}
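This identity can be verified numerically with centered finite differences; a small self-contained sketch (the helper sig and the test points are arbitrary illustrations):
# Numerical check of sigma'(z) = sigma(z) * (1 - sigma(z)) at a few points
sig = function(z) 1/(1+exp(-z))
z0 = c(-2, 0, 1.5)
h = 1e-6
num_deriv = (sig(z0+h) - sig(z0-h)) / (2*h) # centered finite differences
analytic  = sig(z0) * (1 - sig(z0))
round(cbind(num_deriv, analytic), 8)        # the two columns agree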
a) We denote $$\hat{y}[m]=\hat{y}\left(x^{(m)}, W^{[1]}, b^{[1]}, W^{[2]}, b^{[2]}\right)$$
\begin{align*} \mathcal{L}(\theta) &=-\sum_{m=1}^{M} \left[y^{(m)} \log \left(\hat{y}[m] \right) + \left(1-y^{(m)}\right) \log \left(1-\hat{y}[m]\right) \right] \\ \bullet \; \hat{y}[m] &=\sigma\left(Z_m^{[2]}\right) = \frac{1}{1 + e^{-Z_m^{[2]}}} \\ \bullet \;\; Z_m^{[2]} &=\sum_{i=1}^{d}w_i^{[2]} A_{i m}^{[1]} + b^{[2]} \\ \bullet \; A_{i m}^{[1]} &=g \left(Z_{i m}^{[1]}\right) \\ \bullet \; Z_{i m}^{[1]} &=w_{i 1}^{[1]} x_1^{(m)} + w_{i 2}^{[1]} x_2^{(m)} + b_{i}^{[1]} \end{align*}
b) \begin{align*} \frac{\partial \mathcal{L}}{\partial \hat{y}[m]} &= \frac{1-y^{(m)}}{1-\hat{y}[m]} - \frac{y^{(m)}}{\hat{y}[m]} \\& = \frac{\hat{y}[m] - y^{(m)}}{\hat{y}[m](1-\hat{y}[m])} \end{align*}
Accordingly, the gradient with respect to $$\hat{y}$$ is given, in vectorized form, by $$\boxed{\nabla_{y}\mathcal{L} = \frac{y_{pred} - y}{y_{pred}(1-y_{pred})}}$$ where $$y_{pred}$$ is the vector of the $$\hat{y}[m]$$ and the operations are elementwise.
c)
\begin{align*} \frac{\partial \mathcal{L}}{\partial Z_m^{[2]}} &= \frac{\partial \mathcal{L}}{\partial \hat{y}[m]} \frac{\partial \hat{y}[m]}{\partial Z_m^{[2]}} \\& = \frac{\hat{y}[m] - y^{(m)}}{\hat{y}[m](1-\hat{y}[m])} \times \sigma\left(Z_m^{[2]}\right) \times \left(1-\sigma\left(Z_m^{[2]}\right)\right) \end{align*}
We have
$$\boxed{\nabla_{Z^{[2]}} \mathcal{L} = \nabla_{y} \mathcal{L} * \sigma\left(Z^{[2]}\right)*\left(\mathbf{1}-\sigma\left(Z^{[2]}\right)\right)}$$
where $$\mathbf{1}$$ is a matrix of the same size with all coefficients equal to 1, and $$*$$ denotes the elementwise product. Since $$\sigma\left(Z^{[2]}\right)=y_{pred}$$, this expression simplifies to $$\nabla_{Z^{[2]}} \mathcal{L} = y_{pred} - y$$, which is used in the implementation below.
d) \begin{align*} \frac{\partial \mathcal{L}}{\partial W_i^{[2]}} &= \sum_{m=1}^{M} \frac{\partial \mathcal{L}}{\partial Z_m^{[2]}} \frac{\partial Z_m^{[2]}}{\partial W_i^{[2]}} \\ &= \sum_{m=1}^{M} \frac{\hat{y}[m] - y^{(m)}}{\hat{y}[m](1-\hat{y}[m])} \times \sigma\left(Z_m^{[2]}\right) \times \left(1-\sigma\left(Z_m^{[2]}\right)\right) \times A_{i m}^{[1]} \end{align*}
We have $$\boxed{\nabla_{W^{[2]}} \mathcal{L} = \nabla_{Z^{[2]}} \mathcal{L} \times \left(A^{[1]}\right)'}$$
e)
\begin{align*} \frac{\partial \mathcal{L}}{\partial b^{[2]}} &= \sum_{m=1}^{M} \frac{\partial \mathcal{L}}{\partial Z_m^{[2]}} \frac{\partial Z_m^{[2]}}{\partial b^{[2]}} \\ &= \sum_{m=1}^{M} \frac{\hat{y}[m] - y^{(m)}}{\hat{y}[m](1-\hat{y}[m])} \times \sigma\left(Z_m^{[2]}\right) \times \left(1-\sigma\left(Z_m^{[2]}\right)\right) \end{align*}

In vectorized form: $$\boxed{\nabla_{b^{[2]}} \mathcal{L} = \sum_{m=1}^{M} \left(\nabla_{Z^{[2]}} \mathcal{L}\right)_m}$$
f)
\begin{align*} \frac{\partial \mathcal{L}}{\partial A_{i m}^{[1]}} &= \frac{\partial \mathcal{L}}{\partial Z_m^{[2]}} \frac{\partial Z_m^{[2]}}{\partial A_{i m}^{[1]}} \\ &= \frac{\hat{y}[m] - y^{(m)}}{\hat{y}[m](1-\hat{y}[m])} \times \sigma\left(Z_m^{[2]}\right) \times \left(1-\sigma\left(Z_m^{[2]}\right)\right) \times W_i^{[2]} \end{align*}
We find $$\boxed{\nabla_{A^{[1]}} \mathcal{L} = \left(W^{[2]}\right)' \nabla_{Z^{[2]}} \mathcal{L}}$$
g)
\begin{align*} \frac{\partial \mathcal{L}}{\partial Z_{i m}^{[1]}} &= \frac{\partial \mathcal{L}}{\partial A_{i m}^{[1]}} \frac{\partial A_{i m}^{[1]}}{\partial Z_{i m}^{[1]}} \\ &= \frac{\hat{y}[m] - y^{(m)}}{\hat{y}[m](1-\hat{y}[m])} \times \sigma\left(Z_m^{[2]}\right) \times \left(1-\sigma\left(Z_m^{[2]}\right)\right) \times W_i^{[2]} \times g'(Z_{i m}^{[1]}) \end{align*}
We have
$$\boxed{\nabla_{Z^{[1]}} \mathcal{L} = \begin{cases} \nabla_{A^{[1]}} \mathcal{L} * \left(\mathbf{1}-\left(A^{[1]}\right)^2\right) & \text{ if } g=\tanh \\ \nabla_{A^{[1]}} \mathcal{L} * \mathbb{1}\left(Z^{[1]}>0\right) & \text{ if } g=ReLU \end{cases}}$$
(elementwise products), since $$g'(z)=1-\tanh^2(z)$$ for $$g=\tanh$$ and $$g'(z)=\mathbb{1}(z>0)$$ for $$g=ReLU$$.
h) \begin{align*} \frac{\partial \mathcal{L}}{\partial W_{i j}^{[1]}} &= \sum_{m=1}^{M} \frac{\partial \mathcal{L}}{\partial Z_{i m}^{[1]}} \frac{\partial Z_{i m}^{[1]}}{\partial W_{i j}^{[1]}} \\ &= \sum_{m=1}^{M} \frac{\hat{y}[m] - y^{(m)}}{\hat{y}[m](1-\hat{y}[m])} \times \sigma\left(Z_m^{[2]}\right) \times \left(1-\sigma\left(Z_m^{[2]}\right)\right) \times W_i^{[2]} \times g'(Z_{i m}^{[1]}) \times x_j^{(m)} \end{align*}
We find that $$\boxed{\nabla_{W^{[1]}} \mathcal{L} = \nabla_{Z^{[1]}} \mathcal{L} \times x}$$ where $$x$$ here denotes the $$M \times 2$$ data matrix, so the product has the same $$d \times 2$$ shape as $$W^{[1]}$$.
i)
\begin{align*} \frac{\partial \mathcal{L}}{\partial b_i^{[1]}} &= \sum_{m=1}^{M} \frac{\partial \mathcal{L}}{\partial Z_{i m}^{[1]}} \frac{\partial Z_{i m}^{[1]}}{\partial b_i^{[1]}} \\ &= \sum_{m=1}^{M} \frac{\hat{y}[m] - y^{(m)}}{\hat{y}[m](1-\hat{y}[m])} \times \sigma\left(Z_m^{[2]}\right) \times \left(1-\sigma\left(Z_m^{[2]}\right)\right) \times W_i^{[2]} \times g'(Z_{i m}^{[1]}) \end{align*}

In vectorized form, $$\nabla_{b^{[1]}} \mathcal{L}$$ is the row-wise sum of $$\nabla_{Z^{[1]}} \mathcal{L}$$: $$\boxed{\nabla_{b^{[1]}} \mathcal{L} = \nabla_{Z^{[1]}} \mathcal{L} \times \mathbf{1}_M}$$ where $$\mathbf{1}_M$$ is the all-ones column vector of length $$M$$.
# Training the neural network
We initialize the algorithm by choosing starting weights and offset vectors, i.e. choose the starting value of $$\theta = \left(W^{[1]}, b^{[1]}, W^{[2]}, b^{[2]} \right)$$. We then use $$\theta$$ and the $$x$$ matrix to make a first prediction (which will probably be very bad) using the $$\hat{y}(x,\theta)$$ function previously defined. These predictions then allow us to successively compute all the gradients of the previous section. Following the principle of gradient descent, the weights and offset vectors are updated as follows (for a step size $$\alpha > 0$$):
\begin{align*} W^{[1]} &\leftarrow W^{[1]} - \alpha \, \nabla_{W^{[1]}} \mathcal{L} & b^{[1]} &\leftarrow b^{[1]} - \alpha \, \nabla_{b^{[1]}} \mathcal{L} \\ W^{[2]} &\leftarrow W^{[2]} - \alpha \, \nabla_{W^{[2]}} \mathcal{L} & b^{[2]} &\leftarrow b^{[2]} - \alpha \, \nabla_{b^{[2]}} \mathcal{L} \end{align*}
We repeat this procedure with the new value of $$\theta$$ by making a new prediction of $$y$$ (which will be more accurate for each new iteration). In order to be able to implement the algorithm in R we start by defining the functions necessary for the training part of the network as well as the loss function.
# tanh activation function (possible choice for g)
tanh <- function(z){
return((exp(z)-exp(-z))/(exp(z)+exp(-z)))
}
# ReLU activation function (possible choice for g)
ReLU = function(z){
return(z*(z>0))
}
# sigmoid activation function
sigmoid = function(z){
return(1/(1+exp(-z)))
}
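As a quick visual check of the three activation functions just defined, one can plot them on a common grid (a small illustrative sketch, assuming ggplot2 is loaded as above):
# Plot tanh, ReLU and sigmoid on [-3,3]
z = seq(-3, 3, by = 0.01)
act = data.frame(z = rep(z, 3),
                 value = c(tanh(z), ReLU(z), sigmoid(z)),
                 fn = rep(c("tanh","ReLU","sigmoid"), each = length(z)))
ggplot(act, aes(x = z, y = value, color = fn)) + geom_line() + labs(colour = "g")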
# Prediction function (depends on the choice of g)
prediction <- function(x,w1,b1,w2,b2,g){
Z <- w1 %*% t(x) + b1 %*% matrix(1,1,dim(x)[1]) # Z^{[1]}, a d x M matrix
if (g == "tanh"){
return(as.vector(sigmoid(w2 %*% tanh(Z) + b2 * matrix(1,1,dim(x)[1]))))
}
if (g == "ReLU"){
return(as.vector(sigmoid(w2 %*% ReLU(Z) + b2 * matrix(1,1,dim(x)[1]))))
}
}
# Cross entropy loss function
loss <- function(y_pred,y){
return(sum(-y*log(y_pred)-(1-y)*log(1-y_pred)))
}
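Before training, we can verify the gradient formulas numerically. The sketch below (the name d_chk and the test weights are arbitrary choices, not from the original post) compares the vectorized formula $$\nabla_{W^{[2]}} \mathcal{L} = \nabla_{Z^{[2]}} \mathcal{L} \times \left(A^{[1]}\right)'$$, with $$\nabla_{Z^{[2]}} \mathcal{L} = y_{pred} - y$$, against a centered finite-difference approximation of the loss:
# Finite-difference check of the gradient in W^{[2]} (illustrative; uses g = tanh)
set.seed(0)
d_chk = 3
w1 = matrix(runif(2*d_chk,-1,1),d_chk,2); b1 = matrix(runif(d_chk,-1,1),d_chk,1)
w2 = matrix(runif(d_chk,-1,1),1,d_chk);   b2 = runif(1,-1,1)
A1 = tanh(w1 %*% t(X) + b1 %*% matrix(1,1,nrow(X)))
y_pred = prediction(X,w1,b1,w2,b2,"tanh")
analytic = matrix(y_pred - y, 1, nrow(X)) %*% t(A1) # nabla_{Z^[2]} L %*% t(A^[1])
eps = 1e-6
numeric = w2 * 0
for(j in 1:d_chk){
  w2p = w2; w2p[j] = w2p[j] + eps
  w2m = w2; w2m[j] = w2m[j] - eps
  numeric[j] = (loss(prediction(X,w1,b1,w2p,b2,"tanh"),y) -
                loss(prediction(X,w1,b1,w2m,b2,"tanh"),y)) / (2*eps)
}
max(abs(analytic - numeric)) # should be close to 0 (around 1e-7 or smaller)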
We continue by coding the network training algorithm. We store this algorithm in a function named NNet which takes as arguments the $$x^{(m)}$$, the associated classes $$y^{(m)}$$, the gradient descent step size $$\alpha$$, the number of iterations niter, the number d of neurons on the hidden layer, the activation function g and a positive number k defining the interval $$[-k;k]$$ from which the initial weights are drawn uniformly at random.
NNet = function(X, y, rate, niter, d, g, k){
  # Simulation of starting weights and offset vectors
  w1 = matrix(runif(n=2*d,-k,k),d,2)
  b1 = matrix(runif(n=d,-k,k),d,1)
  w2 = matrix(runif(n=d,-k,k),1,d)
  b2 = runif(n=1,-k,k)
  M = dim(X)[1]
  for(iter in 1:niter){
    # Forward pass
    y_pred = prediction(X,w1,b1,w2,b2,g)
    Z1 <- w1 %*% t(X) + b1 %*% matrix(1,1,M)
    if (g=="tanh") {A1 <- tanh(Z1)}
    if (g=="ReLU") {A1 <- ReLU(Z1)}
    Z2 <- w2 %*% A1 + b2 * matrix(1,1,M)
    # Backward pass, using the gradients derived above
    # (nabla_y L * y_pred * (1-y_pred) simplifies to y_pred - y)
    dZ2 = matrix(y_pred - y, 1, M)   # nabla_{Z^[2]} L
    dW2 = dZ2 %*% t(A1)              # nabla_{W^[2]} L
    db2 = sum(dZ2)                   # nabla_{b^[2]} L
    dA1 = t(w2) %*% dZ2              # nabla_{A^[1]} L
    if(g=="tanh"){ dZ1 = dA1 * (1 - A1^2) } # tanh'(z) = 1 - tanh(z)^2
    if(g=="ReLU"){ dZ1 = dA1 * (Z1 > 0) }   # ReLU'(z) = 1(z > 0)
    dW1 = dZ1 %*% X                  # nabla_{W^[1]} L
    db1 = matrix(rowSums(dZ1), d, 1) # nabla_{b^[1]} L
    # Updating the parameters
    w1 = w1 - rate * dW1
    b1 = b1 - rate * db1
    w2 = w2 - rate * dW2
    b2 = b2 - rate * db2
  }
  return(list(y=y_pred,w1=w1,b1=b1,w2=w2,b2=b2))
}
In order to test the algorithm we arbitrarily choose d, g, $$\alpha$$ and niter and use the data generated at the beginning of the exercise to train the network. The algorithm is initialized with random initial weights in $$[-1;1]$$.
# Number of neurons on the hidden layer
d = 5
# Activation function g
g = "tanh"
# Model training
NNet_model = NNet(X=X, y=y, rate=0.01, niter=10000, d, g, k=1)
The values of $$W^{[1]}, b^{[1]}, W^{[2]}, b^{[2]}$$ are then stored in NNet_model.
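To quantify the fit, we can inspect the final value of the loss and the training accuracy (the 0.5 decision threshold below is the usual convention, not something fixed in the original post):
# Final training loss and training accuracy with a 0.5 threshold
loss(NNet_model$y, y)
mean((NNet_model$y > 0.5) == y)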
# Plane segmentation
We define a segmentation function that uses the weights $$W^{[1]}$$ and $$W^{[2]}$$ as well as the offset vectors $$b^{[1]}$$ and $$b^{[2]}$$ obtained by training the neural network to graphically present the plane segmentation. To do this, the segmentation function computes, for a large grid of points in the plane, the probability of belonging to the class $$\mathcal{C}_1$$ predicted by the network with the previously trained weights, and then returns the plane colored according to these probabilities.
library(latex2exp) # for TeX() in the legend title
segmentation = function(w1, b1, w2, b2){
# Setting up the grid
x1grid <- seq(min(dataclassif$x1)-.5,max(dataclassif$x1)+.5,by=.02)
x2grid <- seq(min(dataclassif$x2)-.5,max(dataclassif$x2)+.5,by=.02)
xgrid <- expand.grid(x1grid,x2grid)
# Computation of y_pred for every grid point
ygrid <- prediction(xgrid,w1,b1,w2,b2,g)
datapred <- data.frame(x1=xgrid[,1],x2=xgrid[,2],y=ygrid)
# Plane segmentation graphic
predviz <- ggplot() + geom_point(datapred,mapping=aes(x=x1, y=x2,color=y)) +
geom_point(dataclassif,mapping=aes(x=x1,y=x2,shape=factor(y))) +
labs(shape="y",colour=TeX("\\widehat{y}"))
return(predviz)
}
The parameters of the previously created NNet_model are used to plot a first plane segmentation.
segmentation(NNet_model$w1, NNet_model$b1, NNet_model$w2, NNet_model$b2)
# Experiments
## Model training based on different initialisations
The network will be trained repeatedly under different initialisation settings: different starting weights, different values of d, and both activation functions g (tanh and ReLU). The resulting loss function values are then compared, and the corresponding plane segmentations are displayed in the same order as the entries of the loss table. The influence of the number of iterations will be examined in the next section.
We start by choosing small weights in $$[-0.1 ; 0.1]$$, with $$d \in \{2,5,10,50\}$$ and $$g \in \{tanh, ReLU\}$$.
set.seed(1)
library(knitr)     # kable(), for the loss tables
library(gridExtra) # grid.arrange(), for the grids of segmentation plots
# Values for d
d_values = c(2,5,10,50)
# Interval for initial weights --> [-0.1 ; 0.1]
k = 0.1
loss_results_1 = matrix(NA, nrow=2, ncol=length(d_values), dimnames=list(c("tanh","ReLU")))
plots_1 = list() ; i = 1
for(g in c("tanh","ReLU")){
  for (d in d_values){
    NNet_model = NNet(X=X, y=y, rate=0.01, niter=10000, d=d, g, k)
    loss_results_1[which(c("tanh","ReLU")==g), which(d_values==d)] = loss(NNet_model$y, y)
    plots_1[[i]] = segmentation(NNet_model$w1, NNet_model$b1, NNet_model$w2, NNet_model$b2) +
      theme_void() + theme(legend.position = "none")
    i = i + 1
  }
}
Loss function values
# Table with loss results
kable(loss_results_1, col.names=c("d=2","d=5","d=10","d=50"))

|      |      d=2 |        d=5 |       d=10 |      d=50 |
|------|---------:|-----------:|-----------:|----------:|
| tanh | 37.28769 |  0.0387096 |  0.0395267 | 0.0524006 |
| ReLU | 48.73767 | 48.7376746 | 39.3000640 | 0.0266989 |

Plane segmentations
# Plane segmentations
grid.arrange(grobs=as.list(plots_1), ncol=4)
For $$g=tanh$$ with $$d=2$$, and for $$g=ReLU$$ with $$d \in \{2;5;10\}$$, the neural network provides results that are not very satisfactory. For the other initialisations, however, the loss of the network is very low. Graphically, one can also observe the different plane segmentations for the different initialisations of the network. For the networks with large loss function values, the plane segmentation seems to be almost linear, which is not very consistent with the data. We would now like to determine whether a different initialisation of the weights affects these results. We repeat this procedure, but this time we choose larger weights, more precisely in $$[-2;2]$$.
set.seed(2)
# Values for d
d_values = c(2,5,10,50)
# Interval for initial weights --> [-2 ; 2]
k = 2
loss_results_2 = matrix(NA, nrow=2, ncol=length(d_values), dimnames=list(c("tanh","ReLU")))
plots_2 = list() ; i = 1
for(g in c("tanh","ReLU")){
  for (d in d_values){
    NNet_model = NNet(X=X, y=y, rate=0.01, niter=10000, d=d, g, k)
    loss_results_2[which(c("tanh","ReLU")==g), which(d_values==d)] = loss(NNet_model$y, y)
plots_2[[i]]=segmentation(NNet_model$w1, NNet_model$b1, NNet_model$w2, NNet_model$b2) +
theme_void() + theme(legend.position = "none") ; i = i+1
}
}
Loss function values
# Loss function values
kable(loss_results_2, col.names=c("d=2","d=5","d=10","d=50"))
d=2 d=5 d=10 d=50
tanh 37.27624 0.0366755 0.0332603 0.0237575
ReLU 54.89825 39.2993755 34.2858837 0.0184794
Plane segmentations
# Plane segmentations
grid.arrange(grobs=as.list(plots_2), ncol=4)
We notice that the results are very similar to the results of the first simulation, but still improved. We can therefore see that the initialization of the weights has an influence on the result. On the other hand, once again the network is not able to provide optimal results for $$d=2$$ and $$g=tanh$$ and for $$d \in \{2;5\}$$ with $$g=ReLU$$. But this time for $$g=ReLU$$ and $$d=10$$, the network loss function is able to approach 0, which was not the case before. One could assume that this could be due to the non-convexity of the cross entropy loss function. A bad initialisation of the weights then prevents the gradient descent from converging to the global optimum. Only a local optimum is then found, which could explain the poor results for some of the initialisation settings tested.
## Intermediate display of the loss function value
To be able to display the value of the error function every 100 iterations just add the line if (iter%%100 == 0){…} in the network training algorithm. This line allows to detect if for the n-th iteration n is a multiple of 100. The following code is the previously modified code such that this new NNet_intermed_losses function additionally returns the vector containing the loss function value every 100 iterations.
NNet_intermed_losses = function(X, y, rate, niter, d, g, k){
# Initialisation of weights
w1 = matrix(runif(n=2*d,-k,k),d,2)
b1 = matrix(runif(n=d,-k,k),d,1)
w2 = matrix(runif(n=d,-k,k),1,d)
b2 = runif(n=1,-k,k)
loss_vector=c() # intermediate loss function values
for(iter in 1:niter){
y_pred = prediction(X,w1,b1,w2,b2,g)
Z1 <- w1 %*% t(X) + b1%*% matrix(1,1,dim(X)[1])
if (g=="tanh") {A1 <- tanh(Z1)}
if (g=="ReLU") {A1 <- ReLU(Z1)}
Z2 <- w2 %*% A1 + b2 * matrix(1,1,dim(X)[1])
if(g=="tanh"){
if(g=="ReLU"){
# Updating the parameters
if (iter%%100 == 0){loss_vector=c(loss_vector, loss(y_pred,y))}
}
return(list(y=y_pred,w1=w1,b1=b1,w2=w2,b2=b2,loss_vector=loss_vector))
}
set.seed(4)
# Number of iterations and values fo d
d_values = c(2,5,10,50)
niter=1000
data = data.frame(matrix(NA, nrow = floor(niter/100), ncol = 0))
# Model training for g=tanh and different values of d
g = "tanh"
for (d in d_values){
NNet_model = NNet_intermed_losses(X=X, y=y, rate=0.01, niter, d, g, k=1)
data = cbind(data,NNet_model$loss_vector) } # Model training for g=ReLU and different values of d g = "ReLU" for (d in d_values){ NNet_model = NNet_intermed_losses(X=X, y=y, rate=0.01, niter, d, g, k=1) data = cbind(data,NNet_model$loss_vector)
}
# Graphic of loss function value training history
par(xpd = T, mar = par()\$mar + c(0,0,0,7))
plot(-10,-10,xlim=c(0,niter),ylim=c(min(data),max(data)),
xlab = "Itérations", ylab= "Erreur")
lines(100*(1:dim(data)[1]), data[,1], col="green")
lines(100*(1:dim(data)[1]), data[,2], col="red")
lines(100*(1:dim(data)[1]), data[,3], col="blue")
lines(100*(1:dim(data)[1]), data[,4], col="black")
lines(100*(1:dim(data)[1]), data[,5], col="green",lty=2)
lines(100*(1:dim(data)[1]), data[,6], col="red",lty=2)
lines(100*(1:dim(data)[1]), data[,7], col="blue",lty=2)
lines(100*(1:dim(data)[1]), data[,8], col="black",lty=2)
legend("topright", inset=c(-0.2,0), legend=d_values, col=c("green","red","blue","black"),pch=20, title="d")
legend("topright", inset=c(-0.3,0.5), legend=c("tanh","ReLU"), lty=c(1,2))
par(mar=c(5, 4, 4, 2) + 0.1)
It is clear, as expected, that in general, performance is better if d is large. On the other hand, it is difficult to determine whether to prefer the tanh or ReLU function as a choice for the g activation function, as it strongly depends on the value of d. While the ReLU function gives better results for large values of d ($$d=50$$), it is strongly advised to prefer $$g=tanh$$ if d is small ($$\leq 5$$). However, if $$d=2$$ the network loss function value does not approach 0 for both activation functions.
# Conclusion
This article allows us to see that the construction of such a neural network in itself is not as complicated as one might think. Even if this example of a neural network is very simple, the principle will remain the same for networks with more hidden layers, with more neurons or with higher dimension input data or multi-class predictions. In my opinion, the complexity lies rather in the optimization of the network, i.e. fnding the best combination of hyperparameters. Already for a network as simple as this one, one would have the choice between several activation functions for both the hidden layer and the output layer, different values of $$d$$ and different values of the initial weights. The number of possible combinations of these parameters already seems almost infinite. In this example we have already managed to find good initialisations settings for large values of $$d$$. However, if we wanted to optimize the model, for instance, for $$d=2$$, we would have to continue initialising the network for all kinds of different initial weights.
##### Fred Philippy
###### Master’s degree student in Statistics
I am a 2nd-year Master’s degree student in Statistics with a particular interest in Machine Learning, Computational Statistics, Data Analysis and High-Dimensional Statistics. | 2022-05-19 04:22:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000019073486328, "perplexity": 3130.396438015959}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662525507.54/warc/CC-MAIN-20220519042059-20220519072059-00487.warc.gz"} |
https://intelligencemission.com/free-energy-gravity-water-pump-free-electricity-resources.html | However, it must be noted that this was how things were then. Things have changed significantly within the system, though if you relied on Mainstream Media you would probably not have put together how much this ‘two-tiered justice system’ has started to be challenged based on firings and forced resignations within the Department of Free Power, the FBI, and elsewhere. This post from Q-Anon probably gives us the best compilation of these actions:
But extra ordinary Free Energy shuch as free energy require at least some thread of evidence either in theory or Free Power working model that has hint that its possible. Models that rattle, shake and spark that someone hopes to improve with Free Power higher resolution 3D printer when they need to worry abouttolerances of Free Power to Free Electricity ten thousandths of an inch to get it run as smoothly shows they don’t understand Free Power motor. The entire discussion shows Free Power real lack of under standing. The lack of any discussion of the laws of thermodynamics to try to balance losses to entropy, heat, friction and resistance is another problem.
But I will send you the plan for it whenever you are ready. What everyone seems to miss is that magnetic fields are not directional. Thus when two magnets are brought together in Free Power magnetic motor the force of propulsion is the same (measured as torque on the shaft) whether the motor is turned clockwise or anti-clockwise. Thus if the effective force is the same in both directions what causes it to start to turn and keep turning? (Hint – nothing!) Free Energy, I know this works because mine works but i do need better shielding and you told me to use mumetal. What is this and where do you get it from? Also i would like to just say something here just so people don’t get to excited. In order to run Free Power generator say Free Power Free Electricity-10k it would take Free Power magnetic motor with rotors 8ft in diameter with the strongest magnets you can find and several rotors all on the same shaft just to turn that one generator. Thats alot of money in magnets. One example of the power it takes is this.
I don’t know what to do. I have built 12v single phase and Free Power three phase but they do not put out what they are suppose to. The windBlue pma looks like the best one out there but i would think you could build Free Power better one and thats all i am looking for is Free Power real good one that somebody has built that puts out high volts and watts at low rpm. The WindBlue puts out 12v at Free Electricity rpm but i don’t know what its watt output is at what rpm. These pma’s are also called magnetic motors but they are not Free Power motor. They are Free Power generator. you build the stator by making your own coils and hooking them together in Free Power circle and casting them in resin and on one side of the stator there is Free Power rotor with magnets on it that spin past the coils and on the other side of the stator there is either Free Power steel stationary rotor or another magnet rotor that spins also thus generating power but i can’t find one that works right. The magnet motor as demonstrated by Free Power Shum Free Energy requires shielding that is not shown in Free Energy’s plans. Free Energy’s shielding is simple, apparently on the stator. The Perendev shows each magnet in the Free Energy shielded. Actually, it intercepts the flux as it wraps around the entire set of magnets. The shielding is necessary to accentuate interaction between rotor and stator magnets. Without shielding, the device does not work. Hey Gilgamesh, thanks and i hope you get to build the motor. I did forget to ask one thing on the motor. Are the small wheels made of steel or are they magnets? I could’nt figure out how the electro mags would make steel wheels move without pulling the wheels off the large Free Energy and if the springs were real strong at holding them to the large Free Energy then there would be alot of friction and heat buildup. Ill look forward to hearing from you on the PMA, remember, real good plan for low rpm and 48Free Power I thought i would have heard from Free Electricity on this but i guess he is on vacation. Hey Free Power. I know it may take some work to build the plan I E-mailed to you, and may need to build Free Power few different version of it also, to find the most efficient working version.
Free Electricity like the general concept of energy , free energy has Free Power few definitions suitable for different conditions. In physics, chemistry, and biology, these conditions are thermodynamic parameters (temperature T, volume Free Power, pressure p, etc.). Scientists have come up with several ways to define free energy. The mathematical expression of Helmholtz free energy is.
Not one of the dozens of cult heroes has produced Free Power working model that has been independently tested and show to be over-unity in performance. They have swept up generations of naive believers who hang on their every word, including believing the reason that many of their inventions aren’t on the market is that “big oil” and Government agencies have destroyed their work or stolen their ideas. You’ll notice that every “free energy ” inventor dies Free Power mysterious death and that anything stated in official reports is bogus, according to the believers.
Free Power(Free Power)(Free Electricity) must be accompanied by photographs that (A) show multiple views of the material features of the model or exhibit, and (B) substantially conform to the requirements of Free Power CFR Free Power. Free energy. See Free Power CFR Free Power. Free Power(Free Electricity). Material features are considered to be those features which represent that portion(s) of the model or exhibit forming the basis for which the model or exhibit has been submitted. Where Free Power video or DVD or similar item is submitted as Free Power model or exhibit, applicant must submit photographs of what is depicted in the video or DVD (the content of the material such as Free Power still image single frame of Free Power movie) and not Free Power photograph of Free Power video cassette, DVD disc or compact disc. <“ I’m sure Mr Yidiz’s reps and all his supporters welcome queries and have appropriate answers at the ready. Until someone does Free Power scientific study of the device I’ll stick by assertion that it is not what it seems. Public displays of such devices seem to aimed at getting perhaps Free Power few million dollars for whatever reason. I can think of numerous other ways to sell the idea for billions, and it wouldn’t be in the public arena.
It all smells of scam. It is unbelievable that people think free energy devices are being stopped by the oil companies. Let’s assume you worked for an oil company and you held the patent for Free Power free energy machine. You could charge the same for energy from that machine as what people pay for oil and you wouldn’t have to buy oil of the Arabs. Thus your profit margin would go through the roof. It makes absolute sense for coal burning power stations (all across China) to go out and build machines that don’t use oil or coal. wow if Free Energy E. , Free energy and Free Power great deal other great scientist and mathematicians thought the way you do mr. Free Electricity the world would still be in the stone age. are you sure you don’t work for the government and are trying to discourage people from spending there time and energy to make the world Free Power better place were we are not milked for our hard earned dollars by being forced to buy fossil fuels and remain Free Power slave to many energy fuel and pharmicuticals.
Figure Free Electricity. Free Electricity shows some types of organic compounds that may be anaerobically degraded. Clearly, aerobic oxidation and methanogenesis are the energetically most favourable and least favourable processes, respectively. Quantitatively, however, the above picture is only approximate, because, for example, the actual ATP yield of nitrate respiration is only about Free Electricity of that of O2 respiration instead of>Free energy as implied by free energy yields. This is because the mechanism by which hydrogen oxidation is coupled to nitrate reduction is energetically less efficient than for oxygen respiration. In general, the efficiency of energy conservation is not high. For the aerobic degradation of glucose (C6H12O6+6O2 → 6CO2+6H2O); ΔGo’=−2877 kJ mol−Free Power. The process is known to yield Free Electricity mol of ATP. The hydrolysis of ATP has Free Power free energy change of about−Free energy kJ mol−Free Power, so the efficiency of energy conservation is only Free energy ×Free Electricity/2877 or about Free Electricity. The remaining Free Electricity is lost as metabolic heat. Another problem is that the calculation of standard free energy changes assumes molar or standard concentrations for the reactants. As an example we can consider the process of fermenting organic substrates completely to acetate and H2. As discussed in Chapter Free Power. Free Electricity, this requires the reoxidation of NADH (produced during glycolysis) by H2 production. From Table A. Free Electricity we have Eo’=−0. Free Electricity Free Power for NAD/NADH and Eo’=−0. Free Power Free Power for H2O/H2. Assuming pH2=Free Power atm, we have from Equations A. Free Power and A. Free energy that ΔGo’=+Free Power. Free Power kJ, which shows that the reaction is impossible. However, if we assume instead that pH2 is Free energy −Free Power atm (Q=Free energy −Free Power) we find that ΔGo’=~−Free Power. Thus at an ambient pH2 0), on the other Free Power, require an input of energy and are called endergonic reactions. In this case, the products, or final state, have more free energy than the reactants, or initial state. Endergonic reactions are non-spontaneous, meaning that energy must be added before they can proceed. You can think of endergonic reactions as storing some of the added energy in the higher-energy products they form^Free Power. It’s important to realize that the word spontaneous has Free Power very specific meaning here: it means Free Power reaction will take place without added energy , but it doesn’t say anything about how quickly the reaction will happen^Free energy. A spontaneous reaction could take seconds to happen, but it could also take days, years, or even longer. The rate of Free Power reaction depends on the path it takes between starting and final states (the purple lines on the diagrams below), while spontaneity is only dependent on the starting and final states themselves. We’ll explore reaction rates further when we look at activation energy. This is an endergonic reaction, with ∆G = +Free Electricity. Free Electricity+Free Electricity. Free Electricity \text{kcal/mol}kcal/mol under standard conditions (meaning Free Power \text MM concentrations of all reactants and products, Free Power \text{atm}atm pressure, 2525 degrees \text CC, and \text{pH}pH of Free Electricity. 07. 0). 
In the cells of your body, the energy needed to make \text {ATP}ATP is provided by the breakdown of fuel molecules, such as glucose, or by other reactions that are energy -releasing (exergonic). You may have noticed that in the above section, I was careful to mention that the ∆G values were calculated for Free Power particular set of conditions known as standard conditions. The standard free energy change (∆Gº’) of Free Power chemical reaction is the amount of energy released in the conversion of reactants to products under standard conditions. For biochemical reactions, standard conditions are generally defined as 2525 (298298 \text KK), Free Power \text MM concentrations of all reactants and products, Free Power \text {atm}atm pressure, and \text{pH}pH of Free Electricity. 07. 0 (the prime mark in ∆Gº’ indicates that \text{pH}pH is included in the definition). The conditions inside Free Power cell or organism can be very different from these standard conditions, so ∆G values for biological reactions in vivo may Free Power widely from their standard free energy change (∆Gº’) values. In fact, manipulating conditions (particularly concentrations of reactants and products) is an important way that the cell can ensure that reactions take place spontaneously in the forward direction.
A device I worked on many years ago went on television in operation. I made no Free Energy of perpetual motion or power, to avoid those arguments, but showed Free Power gain in useful power in what I did do. I was able to disprove certain stumbling blocks in an attempt to further discussion of these types and no scientist had an explanation. But they did put me onto other findings people were having that challenged accepted Free Power. Dr. Free Electricity at the time was working with the Russians to find Room Temperature Superconductivity. And another Scientist from CU developed Free Power cryogenic battery. “Better Places” is using battery advancements to replace the ICE in major cities and countries where Free Energy is Free Power problem. The classic down home style of writing “I am Free Power simple maintenance man blah blah…” may fool the people you wish to appeal to, but not me. Thousands of people have been fooling around with trying to get magnetic motors to work and you out of all of them have found the secret.
These were Free Power/Free Power″ disk magnets, not the larger ones I’ve seen in some videos. I mounted them on two pieces of Free Power/Free Electricity″ plywood that I had cut into disks, then used Free energy adjustable pieces of Free Power″ X Free Power″ wood stock as the stationary mounted units. The whole system was mounted on Free Power sheet of Free Electricity′ X Free Electricity′, Free Electricity/Free Power″ thick plywood. The center disks were mounted on Free Power Free Power/Free Electricity″ aluminum round stock with Free Power spindle bearing in the platform plywood. Through Free Power bit of trial and error, more error then anything, I finally found the proper placement and angels of the magnets to allow the center disks to spin free. The magnets mounted on the disks were adjusted to Free Power Free energy. Free Electricity degree angel with the stationary units set to match. The disks were offset by Free Electricity. Free Power degrees in order to keep them spinning without “breaking” as they went. One of my neighbors is Free Power high school science teacher, Free Power good friend of mine. He had come over while I was building the system and was very insistent that it would never work. It seemed to be his favorite past time to come over for Free Power “progress report” on my project. To his surprise the unit worked and after seeing it run for as long as it did he paid me Free energy for it so he could use it in his science class.
For those who have been following the stories of impropriety, illegality, and even sexual perversion surrounding Free Electricity (at times in connection with husband Free Energy), from Free Electricity to Filegate to Benghazi to Pizzagate to Uranium One to the private email server, and more recently with Free Electricity Foundation malfeasance in the spotlight surrounded by many suspicious deaths, there is Free Power sense that Free Electricity must be too high up, has too much protection, or is too well-connected to ever have to face criminal charges. Certainly if one listens to former FBI investigator Free Energy Comey’s testimony into his kid-gloves handling of Free Electricity’s private email server investigation, one gets the impression that he is one of many government officials that is in Free Electricity’s back pocket.
The demos seem well-documented by the scientific community. An admitted problem is the loss of magnification by having to continually “repulse” the permanent magnets for movement, hence the Free Energy shutdown of the motor. Some are trying to overcome this with some ingenious methods. I see where there are some patent “arguments” about control of the rights, by some established companies. There may be truth behind all this “madness. ”
The torque readings will give the same results. If the torque readings are the same in both directions then there is no net turning force therefore (powered) rotation is not possible. Of course it is fun to build the models and observe and test all of this. Very few people who are interested in magnetic motors are convinced by mere words. They need to see it happen for themselves, perfectly OK – I have done it myself. Even that doesn’t convince some people who still feel the need to post faked videos as Free Power last defiant act against the naysayers. Sorry Free Power, i should have asked this in my last post. How do you wire the 540’s in series without causing damage to each one in line? And no i have not seen the big pma kits. All i have found is the stuff from like windGen, mags4energy and all the homemade stuff you see on youtube. I have built three pma’s on the order of those but they don’t work very good. Where can i find the big ones? Free Power you know what the 540 max watts is? Hey Free Power, learn new things all the time. Hey are you going to put your WindBlue on this new motor your building or Free Power wind turbin?
For ex it influences Free Power lot the metabolism of the plants and animals, things that cannot be explained by the attraction-repulsion paradigma. Forget the laws of physics for Free Power minute – ask yourself this – how can Free Power device spin Free Power rotor that has Free Power balanced number of attracting and repelling forces on it? Have you ever made one? I have tried several. Gravity motors – show me Free Power working one. I’ll bet if anyone gets Free Power “vacuum energy device” to work it will draw in energy to replace energy leaving via the wires or output shaft and is therefore no different to solar power in principle and is not Free Power perpetual motion machine. Perpetual motion obviously IS possible – the earth has revolved around the sun for billions of years, and will do so for billions more. Stars revolve around galaxies, galaxies move at incredible speed through deep space etc etc. Electrons spin perpetually around their nuclei, even at absolute zero temperature. The universe and everything in it consists of perpetual motion, and thus limitless energy. The trick is to harness this energy usefully, for human purposes. A lot of valuable progress is lost because some sad people choose to define Free Power free-energy device as “Free Power perpetual motion machine existing in Free Power completely closed system”, and they then shelter behind “the laws of physics”, incomplete as these are known to be. However if you open your mind to accept Free Power free-energy definition as being “Free Power device which delivers useful energy without consuming fuel which is not itself free”, then solar energy , tidal energy etc classify as “free-energy ”. Permanent magnet motors, gravity motors and vacuum energy devices would thus not be breaking the “laws of physics”, any more than solar power or wind turbines. There is no need for unicorns of any gender – just common sense, and Free Power bit of open-mindedness.
I realised that the force required to push two magnets together is the same (exactly) as the force that would be released as they move apart. Therefore there is no net gain. I’ll discuss shielding later. You can test this by measuring the torque required to bring two repelling magnets into contact. The torque you measure is what will be released when they do repel. The same applies for attracting magnets. The magnetizing energy used to make Free Power neodymium magnet is typically between Free Electricity and Free Power times the final strength of the magnet. Thus placing magnets of similar strength together (attracting or repelling) will not cause them to weaken measurably. Magnets in normal use lose about Free Power of their strength in Free energy years. Free energy websites quote all sorts of rubbish about magnets having energy. They don’t. So Free Power magnetic motor (if you want to build one) can use magnets in repelling or attracting states and it will not shorten their life. Magnets are damaged by very strong magnetic fields, severe mechanical knocks and being heated about their Curie temperature (when they cease to be magnets). Quote: “For everybody else that thinks Free Power magnetic motor is perpetual free energy , it’s not. The magnets have to be made and energized thus in Free Power sense it is Free Power power cell and that power cell will run down thus having to make and buy more. Not free energy. ” This is one of the great magnet misconceptions. Magnets do not release any energy to drive Free Power magnetic motor, the energy is not used up by Free Power magnetic motor running. Thinks about how long it takes to magnetise Free Power magnet. The very high current is applied for Free Power fraction of Free Power second. Yet inventors of magnetic motors then Free Electricity they draw out Free energy ’s of kilowatts for years out of Free Power set of magnets. The energy input to output figures are different by millions! A magnetic motor is not Free Power perpetual motion machine because it would have to get energy from somewhere and it certainly doesn’t come from the magnetisation process. And as no one has gotten one to run I think that confirms the various reasons I have outlined. Shielding. All shield does is reduce and redirect the filed. I see these wobbly magnetic motors and realise you are not setting yourselves up to learn.
Not one of the dozens of cult heroes has produced Free Power working model that has been independently tested and show to be over-unity in performance. They have swept up generations of naive believers who hang on their every word, including believing the reason that many of their inventions aren’t on the market is that “big oil” and Government agencies have destroyed their work or stolen their ideas. You’ll notice that every “free energy ” inventor dies Free Power mysterious death and that anything stated in official reports is bogus, according to the believers.
I have had many as time went by get weak. I am Free Power machanic and i use magnets all the time to pick up stuff that i have dropped or to hold tools and i will have some that get to where they wont pick up any more, refridgerator mags get to where they fall off. Dc motors after time get so they don’t run as fast as they used to. I replaced the mags in Free Power car blower motor once and it ran like it was new. now i do not know about the neo’s but i know that mags do lose there power. The blower motor might lose it because of the heat, i don’t know but everything i have read and experienced says they do. So whats up with that? Hey Free Electricity, ok, i agree with what you are saying. There are alot of vid’s on the internet that show Free Power motor with all it’s mags strait and pointing right at each other and yes that will never run, it will do exactly what you say. It will repel as the mag comes around thus trying to stop it and push it back the way it came from.
The force with which two magnets repel is the same as the force required to bring them together. Ditto, no net gain in force. No rotation. I won’t even bother with the Free Power of thermodynamics. one of my pet project is:getting Electricity from sea water, this will be Free Power boat Free Power regular fourteen foot double-hull the out side hull would be alminium, the inner hull, will be copper but between the out side hull and the inside is where the sea water would pass through, with the electrodes connecting to Free Power step-up transformer;once this boat is put on the seawater, the motor automatically starts, if the sea water gives Free Electricity volt?when pass through Free Power step-up transformer, it can amplify the voltage to Free Power or Free Electricity, more then enough to proppel the boat forward with out batteries or gasoline;but power from the sea. Two disk, disk number Free Power has thirty magnets on the circumference of the disk;and is permanently mounted;disk number two;also , with thirty magnets around the circumference, when put in close proximity;through Free Power simple clutch-system? the second disk would spin;connect Free Power dynamo or generator? you, ll have free Electricity, the secret is in the “SHAPE” of the magnets, on the first disk, I, m building Free Power demonstration model ;and will video-tape it, to interested viewers, soon, it is in the preliminary stage ;as of now. the configuration of this motor I invented? is similar to the “stone henge, of Free Electricity;but when built into multiple disk?
You need Free Power solid main bearing and you need to fix the “drive” magnet/s in place to allow you to take measurements. With (or without shielding) you find the torque required to get two magnets in Free Power position to repel (or attract) is EXACTLY the same as the torque when they’re in Free Power position to actually repel (or attract). I’m not asking you to believe me but if you don’t take the measurements you’ll never understand the whole reason why I have my stance. Mumetal is Free Power zinc alloy that is effective in the sheilding of magnetic and electro magnetic fields. Only just heard about it myself couple of days ago. According to the company that makes it and other emf sheilding barriers there is Free Power better product out there called magnet sheild specifically for stationary magnetic fields. Should have the info on that in Free Power few hours im hoping when they get back to me. Hey Free Power, believe me i am not giving up. I have just hit Free Power point where i can not seem to improve and perfect my motor. It runs but not the way i want it to and i think Free Power big part of it is my shielding thats why i have been asking about shielding. I have never heard of mumetal. What is it? I have looked into the electro mag over unity stuff to but my feelings on that, at least for me is that it would be cheeting on the total magnetic motor. Your basicaly going back to the electric motor. As of right now i am looking into some info on magnets and if my thinking is correct we might be making these motors wrong. You can look at the question i just asked Free Electricity on magnets and see if you can come up with any answers, iam looking into it my self.
In the 18th and 19th centuries, the theory of heat, i. e. , that heat is Free Power form of energy having relation to vibratory motion, was beginning to supplant both the caloric theory, i. e. , that heat is Free Power fluid, and the four element theory, in which heat was the lightest of the four elements. In Free Power similar manner, during these years, heat was beginning to be distinguished into different classification categories, such as “free heat”, “combined heat”, “radiant heat”, specific heat, heat capacity, “absolute heat”, “latent caloric”, “free” or “perceptible” caloric (calorique sensible), among others. | 2020-11-27 22:26:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4927442967891693, "perplexity": 1670.0260685961634}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141194634.29/warc/CC-MAIN-20201127221446-20201128011446-00308.warc.gz"} |
https://www.xaprb.com/blog/2008/01/05/what-is-new-in-maatkit/ | # What is new in Maatkit
My posts lately have been mostly progress reports and release notices. That’s because we’re in the home stretch on the book, and I don’t have much spare time. However, a lot has also been changing with Maatkit, and I wanted to take some time to write about it properly. I’ll just write about each tool in no particular order.
### Overall
I’ve been fixing a fair number of bugs, most of which have been in the code for a while. Every bug I fix these days gets a test case to guard against regressions. I’ve integrated the tests into the Makefile, so there’s no way for me to forget to run them.
The test suite has hundreds of tests, which is probably pretty good in comparison to many projects of this type. However, there will probably never be enough tests. I’ve moved much (in some cases, almost all) of the code into modules, which are easy to test, but it’s always a little harder to test programs themselves, so some things aren’t tested. (For example, it’s tedious to set up a test case that requires many MySQL instances to be running in a multi-tier replication setup).
Still, I think the quality has increased a lot in the last 6 months or so, since I’ve been more disciplined about tests. That discipline, by the way, was forced on me. The mk-table-sync tool was completely unmanageable. I was able to rewrite that tool in December, almost entirely using modularized, tested code.
### mk-heartbeat
Jeremy Cole and Six Apart originally contributed this tool. Since then I’ve added a lot more features, allowed a lot more control over how it works, and it even works on PostgreSQL now. As an example, I added features that make it easy to run every hour from a crontab. It daemonizes, runs in the background, and then quits automatically when the new instance starts. I use it in production to give me a reliable metric for how up-to-date a replica is. When I need to know absolutely “has this replica received this update,” Seconds_behind_master won’t do, for many reasons. Load balancing and lots of other things hinge on up-to-date replicas.
### mk-parallel-dump
I think this tool is probably the fastest, smartest way to do backups in tab-delimited format. I’ve been fixing a lot of bugs in this one, mostly for non-tab-delimited dumps. It has turned out to be harder to write this code because it uses shell commands to call mysqldump. (The tab-delimited dumps are done entirely via SQL, which is why it’s so good at what it does).
### mk-slave-restart
I’ve been having a lot of trouble with relay log corruption, so unfortunately this tool has become necessary to use regularly in production. As a result I made it quite a bit smarter. It can detect relay log corruption, and instead of the usual skip-one-and-continue, it issues a CHANGE MASTER TO, so the replica will discard and re-fetch its relay logs. I’ve also made it capable of monitoring many replicas at once. (It discovers replicas via either SHOW SLAVE HOSTS or SHOW PROCESSLIST, so if you point it at a master, it can watch all the master’s replicas with a single command).
### mk-table-checksum
I’ve made a lot of changes to this tool recently. Smarter chunking code to divide your tables into bits that are easier for the server to work with, TONS of small improvements and fixes, and much friendlier behavior.
The most recent release also includes a big speed improvement. Most of the time this tool spends is waiting for MySQL to run checksum queries. While my pure-SQL checksum queries are faster than most (all?) other ways to compare data in different servers, I’ve recently been trying to reduce the amount of work they cause.
As a result, I investigated Google’s MySQL patches. Mark Callaghan mentioned to me that he’d added a checksum function into their version of the server, and I wanted to look at that. They’re using the FNV hash function to checksum data. I decided that a UDF would be a fine way to write a faster row-checksum function, so I wrote a 64-bit FNV hash UDF. While I’m not the first person to do that, my version accepts any number of arguments, not just one. This makes it a lot more efficient to checksum every column in a row, because you don’t have to a) make multiple calls to the hash function or b) concatenate the arguments so you can make a single call. I also copied Google’s logic to make it simpler and more efficient to checksum NULLs, which avoids still more function calls. The UDF returns a 64-bit number, which can be fed directly to BIT_XOR to crush an entire table (or group of rows) into a single order-independent checksum. And finally, FNV is also a lot faster than, say, MD5 or SHA1.
The results are quite a bit faster for my hardware: 12.7 seconds instead of 80 seconds on a CPU-bound workload. So that’s at least a 6.2x speedup. (80 seconds was the best I was able to achieve before. Some of the checksum techniques used up to 197 seconds on the same data).
The UDF is really simple to compile and install, does no memory allocations or other nasty things, and should be safe for you to use. The source is included with the latest Maatkit release. (Older Maatkit versions won’t be able to take full advantage of it, by the way, but they can still be sped up somewhat). However, I would really appreciate some review from more experienced coders. I’m no C++ wizard. In fact, my first attempts at writing this thing were so blockheaded and wrong, I was almost embarrassed. (Thanks are due to the fine people hanging out on #mysql-dev).
### mk-table-sync
After my week-long coding marathon on this in December, I’ve needed to continue working on this. I’ve needed it quite a few times to solve problems with replication. (Did I mention relay log corruption?). It’s much faster and less buggy now, and as a bonus, the latest release can also take advantage of the FNV UDF I just mentioned.
I think I should explain the general evolution in this tool’s life. It started out as “how to find differences in data efficiently.” This was a period where I did a lot of deep thinking on exploiting the structures inherent in data. It then progressed to “how to sync data efficiently.” At this point I was able to outperform another data-syncing tool by a wide margin, even though it was a multi-threaded C++ program and mine was just a Perl script. I did that by writing efficient queries and moving very little data across the network.
The most recent incarnation has thrown performance out the window, at least as measured by those criteria. The aforementioned C++ program now outperforms mine by a wide margin on the same tests.
What changed?
Two things: I’m focusing on quality, and I’m focusing on syncing running servers correctly with minimal interruption.
Once I have good-quality, well-tested code, I’ll be able to speed it up. I know this because I’m currently doing some things I know are slower than they could be.
But much more importantly, I’ve changed the whole angle of the tool. I want to be able to synchronize a busy master and replica, without locking tables, automatically ensuring that the data stays consistent and there are no race conditions. I do this with a lot of special tricks, such as syncing tables in small bits, using SELECT FOR UPDATE to lock only the rows I’m syncing, and so on. And I’m actively working to make the tool Do The Right Thing without needing 99 command-line arguments. (I think the latest release does this very well).
Instead of “make the sync use as little network traffic as possible,” I’ve changed the criteria of good-ness to “do it right, do it once, and don’t get in the way.”
As a result, I can sync a table that gets a ton of updates – one of the “hottest” tables in my application – without interfering with my application. Online. Correctly. In one pass. Through replication. Show me another tool that can do that, and I’ll re-run my benchmarks :-)
This doesn’t mean I don’t care about performance. I do, and I’ll bring back the earlier “go easy on the network” sync algorithms at some point. They are very useful when you have a slow network, or your tables aren’t being updated and you just want to sync things fast. I’ll also be able to speed up the “don’t interfere with the application” algorithms.
One interesting thing I did was divide up the functionality so the tool can use many different sync algorithms. I created something like a storage-engine API, except it’s a sync API. It’s really easy to add in new sync algorithms now. All I have to do is write the code that algorithm needs. This is really only about 200-300 lines of code for the current algorithms.
### Tools that don’t yet exist
What I haven’t told you about is a lot of unreleased code and new tools. There’s some good stuff in the works. Also stay tuned – a third party might be about to contribute another tool to Maatkit, which will also be a very neat addition.
## Conclusion
As Dana Carvey says, “If I had more time… the programs we have in place are getting the job done, so let’s stay on course, a thousand points of light. Well, unfortunately, I guess my time is up.” Maatkit is getting better all the time, just wait and see.
I'm Baron Schwartz, the founder and CEO of VividCortex. I am the author of High Performance MySQL and lots of open-source software for performance analysis, monitoring, and system administration. I contribute to various database communities such as Oracle, PostgreSQL, Redis and MongoDB. More about me. | 2017-09-20 23:42:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3620959222316742, "perplexity": 1282.4266242436456}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687582.7/warc/CC-MAIN-20170920232245-20170921012245-00525.warc.gz"} |
https://www.tensorflow.org/graphics/api_docs/python/tfg/geometry/transformation/euler | Google I/O is a wrap! Catch up on TensorFlow sessions
# Module: tfg.geometry.transformation.euler
This modules implements Euler angles functionalities.
The Euler angles are defined using a vector $$[\theta, \gamma, \beta]^T \in \mathbb{R}^3$$, where $$\theta$$ is the angle about $$x$$, $$\gamma$$ the angle about $$y$$, and $$\beta$$ is the angle about $$z$$
from_axis_angle(...): Converts axis-angle to Euler angles.
from_quaternion(...): Converts quaternions to Euler angles.
from_rotation_matrix(...): Converts rotation matrices to Euler angles.
inverse(...): Computes the angles that would inverse a transformation by euler_angle. | 2022-05-22 10:18:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7194058299064636, "perplexity": 3591.5249440715297}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662545326.51/warc/CC-MAIN-20220522094818-20220522124818-00329.warc.gz"} |
https://math.stackexchange.com/questions/389907/show-that-sequence-approaches-fixed-point-of-a-function | # Show that sequence approaches fixed point of a function
Problem Let $f(x)$ be a differentiable function on $\Bbb R$ with $\left|\,f ' (x)\right| \leq r < 1$, where $r$ is constant. Then consider the sequence $\{x_n\}$ such that $x_1 = 0$, $x_{n+1} = f(x_n)$, $n\geq1$. Show that $x_n \to x^*$ as $n$ approaches infinity. Moreover, $x^* = f(x^*)$.
Attempt I tried showing that when $f ' (x) = r$, then eventually you end up with $x_{n+1} = \sum_{n=1}^{\infty}r^n$. That sum converges by the $p$ test, so then $f(x)$ should converge to some number $x^*$. Then for $\epsilon>0$ there exists $N$ such that $n\geq N$ such that $$\left|\,f(x^*)-x^*\right| \leq \left|x_{n+1} - x_n\right| \leq \epsilon.$$ Then to show for $f ' (x) < r$ use direct comparison test to say the bigger series converges so the smaller one must converge as well.
I'm not sure if I can just do this or not, can someone let me know if I'm using the correct method or not?
• Hint: $\mathbb{R}$ is a complete metric space. If you can show that $\{x_n\}$ is Cauchy, then you get convergence immediately. To see that $\{x_n\}$ is Cauchy, you will want to show that for all $x,y \in\mathbb{R}$, $|f(x)-f(y)|\leq r|x-y|$. (This is a special case of a more general result called the Banach fixed point theorem.) – Gyu Eun Lee May 12 '13 at 22:54
The trick is to show the sequence is Cauchy, i.e. given any $\varepsilon >0$ there is some $N$ s.t.
$$|x_n-x_m|<\varepsilon$$
whenever $n,m>N$. The trick here is that you know that $\sum_i r^i$ converges, so you end up with an upper bound which is the tail $\sum_{i=N}^\infty r^i$. This converges to zero as $N\to \infty$. This should be enough hints to let you fill in the details.
• Since the sum of r^i converges, do I still need to do any sort of comparison test to show that the bigger sequence converges, or once I saw that this sequence is Cauchy, that's all I need to show? – Bashion May 12 '13 at 23:17
• You do know that the reals are complete, right? What does that imply about Cauchy sequences? – Edvard Fagerholm May 12 '13 at 23:23
• That implies Cauchy sequences converge right? So then since I'm showing that {xn} converges, and its in R, then sequence will always converge to some x* such that f(x*) = x*, right? – Bashion May 12 '13 at 23:29
• Yup. There is one more detail however. How do you know that $f(x^*)=x^*$? – Edvard Fagerholm May 13 '13 at 0:22 | 2019-07-18 12:57:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9649888277053833, "perplexity": 131.731787188269}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525634.13/warc/CC-MAIN-20190718125048-20190718151048-00265.warc.gz"} |
https://www.physicsforums.com/threads/nth-order-linear-ode-why-do-we-have-n-general-solutions.817666/ | # Nth order linear ode, why do we have n general solutions?
1. Jun 5, 2015
### popopopd
hi, I looked up the existence and uniqueness of nth order linear ode and I grasped the idea of them, but still kind of confused why we get n numbers of general solutions.
2. Jun 5, 2015
### SteamKing
Staff Emeritus
3. Jun 7, 2015
### HallsofIvy
A basic property of linear homogeneous equations is that the set of solutions forms a vector space. That is, any linear combination of solutions, $a_1y_1+ a_2y_2$ is again a solution. One can show that, for an nth order homogeneous differential equation, this vector space has dimension n. That is, there exist n independent solutions such that any solution can be written in terms of those n solutions.
To prove that, consider the differential equation $a_n(x)y^{(n)}(x)+ a_{n-1}y^{(n-1)}(x)+ \cdot\cdot\cdot+ a_2y''(x)+ a_1y'(x)+ a_0y(x)= 0$.
with the following initial values:
I) $y(0)= 1$, $y'(0)= y''(0)= \cdot\cdot\cdot= y^{(n-1)}(0)= 0$
By the fundamental "existence and uniqueness" theorem for initial value problems, there exist a unique function, $y_0(x)$, satisfying the differential equation and those initial conditions.
II) $y(0)= 0$, $y'(0)= 1$, $y''(0)= y'''(0)= \cdot\cdot\cdot= y^{(n-1)}(0)= 0$
Again, there exist a unique solution, $y_1(x)$, satisfying the differential equation and those initial conditions.
III) $y(0)= y'(0)= 0$, $y''(0)= 1$, $y'''(0)= \cdot\cdot\cdot= y^{(n-1)}(0)= 0$
Again, there exist a unique solution, $y_2(x)$, satisfying the differential equation and those initial conditions.
Continue that, shifting the "= 1" through the derivatives until we get to
X) $y(0)= y'(0)= \cdot\cdot\cdot= y^{(n-1)}(0)= 0$, $y^{(n-1)}(0)= 1$.
First, any solution to the differential equation can written as a linear combination of those n solutions:
Suppose y(x) is a solution to the differential equation. Let $A_0= y(0)$, $A_1= y'(0)$, etc. until $A_{n-1}= y^{(n-1)}(0)$.
Then $y(x)= A_0y_0(x)+ A_1y_1(x)+ \cdot\cdot\cdot+ A_{n-1}y_{n-1}(x)$. That can be shown by evaluating both sides, and their derivatives, at x= 0.
Further, that set of n solutions are independent. To see that suppose that, for some numbers, $A_0, A_1, \cdot\cdot\cdot, A_{n-1}$, $A_0y_0(x)+ A_1y_1(x)+ \cdot\cdot\cdot+ A_{n-1}y_{n-1}(x)= 0$- that is, is equal to 0 for all x. Taking x= 0 we must have $A_0(1)+ A_1(0)+ \cdot\cdot\cdot+ A_{n-1}(0)= A_0= 0$. Since that linear combination is 0 for all x, it is a constant and its derivative is 0 for all x. That is $A_0y_0'(x)+ A_1y_1'(0)+ \cdot\cdot\cdot+ A_{n-1}y'(0)= 0$ for all x. Set x= 0 to see that $A_1= 0[/tex]. Continue taking derivatives and setting x= 0 to see that each [itex]A_i$, in turn, is equal to 0.
4. Jun 8, 2015
### popopopd
Thanks! it helped alot!
5. Jun 17, 2015
### popopopd
(5)
(17)
- using picard's iteration in vector form, to prove nth order linear ODE's existence & uniqueness.
ex
(21)
(22)
(http://ghebook.blogspot.ca/2011/10/differential-equation.html)
Hi, I actually did picard's iteration and found that without n initial conditions, nth order linear ODE will have n number of constants as we assume initial conditions are some arbitrary constants.
since function spaces are vector spaces, solutions span n dimensional vector space (not very sure of this)
If we do picard's iteration,
y(n-1)=y0(n-1)+∫y(n)dx
y(n-2)=y0(n-2)+∫y(n-1)dx
=y(n-2)=y0(n-2)+∫y0(n-1)+∫y(n)dx
.
.
.
iteration goes on and on until the error is sufficiently decreased.
if we assume each initial conditions are some constants, we will eventually sort out the solution function y w.r.t constants after sufficient number of iteration is done. Then it will look like,
y (c0 c1 c2 - - - - - - )[y1 y2 y3 y4 y5 - - - - - - ] <-- (should be vertical)
which is in the form of y = c1y1+c2y2+c3y3 . . .
and so on
is this correct?
If not, how can we show that solution space has n number of basis?
---------------------------------------------------------------------------------------------------------------------------
Also, I have two questions about Strum Liouville 2nd order ODE.
1. if we look at the Strum-Liouville 2nd order ODEs, there is an eigenvalue term within the equation. it seems like we are adding one more constant in the equation, which imposes a restriction to find a solution (n+1 constants with n conditions).
$[m(x)y']'+[\lambda r(x)-q(x)]y$
$=m(x)[y''+P(x)y'+Q(x)y]$
$=m(x)[y''+P(x)y'+(\lambda r(x)-q(x))y]$
$=0$
$\therefore\ y''+P(x)y'+(\lambda r(x)-q(x))y=0$
if we do Picard's iteration, then we have one more constant λ along with the n constants..
2. I don't understand how the eigenvalue directly influences the solutions of a 2nd order ODE
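Regarding question 2, a standard textbook special case (not from this thread) shows the eigenvalue's role explicitly: it selects which solutions can satisfy the boundary conditions.

```
% Simplest Sturm-Liouville case: m(x)=r(x)=1, q(x)=0 on [0,L],
% with boundary conditions y(0)=y(L)=0.
\[ y'' + \lambda y = 0, \qquad y(0)=y(L)=0 . \]
% For \lambda > 0 the general solution is
\[ y(x) = A\sin(\sqrt{\lambda}\,x) + B\cos(\sqrt{\lambda}\,x), \]
% and the boundary conditions force B=0 and \sin(\sqrt{\lambda}\,L)=0,
% so nontrivial solutions exist only at the eigenvalues
\[ \lambda_n = \Bigl(\frac{n\pi}{L}\Bigr)^{2}, \qquad
   y_n(x)=\sin\frac{n\pi x}{L}, \qquad n=1,2,3,\dots \]
```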
Thanks as always, your answers help me a lot!
Last edited: Jun 17, 2015 | 2018-05-26 18:04:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9198445081710815, "perplexity": 462.86270771962916}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867841.63/warc/CC-MAIN-20180526170654-20180526190654-00455.warc.gz"} |
https://handwiki.org/wiki/Astronomy:Type_Ib_and_Ic_supernovae | # Astronomy:Type Ib and Ic supernovae
The Type Ib supernova SN 2008D[1][2] in galaxy NGC 2770, shown in X-ray (left) and visible light (right), at the corresponding positions of the images. (NASA image.)[3]
Type Ib and Type Ic supernovae are categories of supernovae that are caused by the stellar core collapse of massive stars. These stars have shed or been stripped of their outer envelope of hydrogen, and, when compared to the spectrum of Type Ia supernovae, they lack the absorption line of silicon. Compared to Type Ib, Type Ic supernovae are hypothesized to have lost more of their initial envelope, including most of their helium. The two types are usually referred to as stripped core-collapse supernovae.
## Spectra
When a supernova is observed, it can be categorized in the Minkowski–Zwicky supernova classification scheme based upon the absorption lines that appear in its spectrum.[4] A supernova is first categorized as either a Type I or Type II, then subcategorized based on more specific traits. Supernovae belonging to the general category Type I lack hydrogen lines in their spectra; in contrast to Type II supernovae which do display lines of hydrogen. The Type I category is subdivided into Type Ia, Type Ib and Type Ic.[5]
Type Ib/Ic supernovae are distinguished from Type Ia by the lack of an absorption line of singly ionized silicon at a wavelength of 635.5 nanometres.[6] As Type Ib and Ic supernovae age, they also display lines from elements such as oxygen, calcium and magnesium. In contrast, Type Ia spectra become dominated by lines of iron.[7] Type Ic supernovae are distinguished from Type Ib in that the former also lack lines of helium at 587.6 nm.[7]
## Formation
The onion-like layers of an evolved, massive star (not to scale).
Prior to becoming a supernova, an evolved massive star is organized like an onion, with layers of different elements undergoing fusion. The outermost layer consists of hydrogen, followed by helium, carbon, oxygen, and so forth. Thus when the outer envelope of hydrogen is shed, this exposes the next layer that consists primarily of helium (mixed with other elements). This can occur when a very hot, massive star reaches a point in its evolution when significant mass loss is occurring from its stellar wind. Highly massive stars (with 25 or more times the mass of the Sun) can lose up to 10⁻⁵ solar masses (M☉) each year, the equivalent of 1 M☉ every 100,000 years.[8]
Type Ib and Ic supernovae are hypothesized to have been produced by core collapse of massive stars that have lost their outer layer of hydrogen and helium, either via winds or mass transfer to a companion.[6] The progenitors of Types Ib and Ic have lost most of their outer envelopes due to strong stellar winds or else from interaction with a close companion of about 3–4 M☉.[9][10] Rapid mass loss can occur in the case of a Wolf–Rayet star, and these massive objects show a spectrum that is lacking in hydrogen. Type Ib progenitors have ejected most of the hydrogen in their outer atmospheres, while Type Ic progenitors have lost both the hydrogen and helium shells; in other words, Type Ic have lost more of their envelope (i.e., much of the helium layer) than the progenitors of Type Ib.[6] In other respects, however, the underlying mechanism behind Type Ib and Ic supernovae is similar to that of a Type II supernova, thus placing Types Ib and Ic between Type Ia and Type II.[6] Because of their similarity, Type Ib and Ic supernovae are sometimes collectively called Type Ibc supernovae.[11]
There is some evidence that a small fraction of the Type Ic supernovae may be the progenitors of gamma ray bursts (GRBs); in particular, type Ic supernovae that have broad spectral lines corresponding to high-velocity outflows are thought to be strongly associated with GRBs. However, it is also hypothesized that any hydrogen-stripped Type Ib or Ic supernova could be a GRB, dependent upon the geometry of the explosion.[12] In any case, astronomers believe that most Type Ib, and probably Type Ic as well, result from core collapse in stripped, massive stars, rather than from the thermonuclear runaway of white dwarfs.[6]
As they are formed from rare, very massive stars, the rate of Type Ib and Ic supernova occurrence is much lower than the corresponding rate for Type II supernovae.[13] They normally occur in regions of new star formation, and are extremely rare in elliptical galaxies.[14] Because they share a similar operating mechanism, Type Ibc and the various Type II supernovae are collectively called core-collapse supernovae. In particular, Type Ibc may be referred to as stripped core-collapse supernovae.[6]
## Light curves
The light curves (a plot of luminosity versus time) of Type Ib supernovae vary in form, but in some cases can be nearly identical to those of Type Ia supernovae. However, Type Ib light curves may peak at lower luminosity and may be redder. In the infrared portion of the spectrum, the light curve of a Type Ib supernova is similar to a Type II-L light curve.[15] Type Ib supernovae usually have slower decline rates for the spectral curves than Ic.[6]
Type Ia supernovae light curves are useful for measuring distances on a cosmological scale. That is, they serve as standard candles. However, due to the similarity of the spectra of Type Ib and Ic supernovae, the latter can form a source of contamination of supernova surveys and must be carefully removed from the observed samples before making distance estimates.[16]
## See also
• Type Ia supernova
• Type II supernova
## References
1. Malesani, D. (2008). "Early spectroscopic identification of SN 2008D". Astrophysical Journal 692 (2): L84–L87. doi:10.1088/0004-637X/692/2/L84. Bibcode2009ApJ...692L..84M.
2. Soderberg, A. M. (2008). "An extremely luminous X-ray outburst at the birth of a supernova". Nature 453 (7194): 469–474. doi:10.1038/nature06997. PMID 18497815. Bibcode2008Natur.453..469S.
3. Naeye, R.; Gutro, R. (21 May 2008). "NASA's Swift Satellite Catches First Supernova in the Act of Exploding". NASA/GSFC.
4. da Silva, L. A. L. (1993). "The Classification of Supernovae". Astrophysics and Space Science 202 (2): 215–236. doi:10.1007/BF00626878. Bibcode1993Ap&SS.202..215D.
5. Montes, M. (12 February 2002). "Supernova Taxonomy". Naval Research Laboratory.
6. Filippenko, A.V. (2004). "Supernovae and Their Massive Star Progenitors". The Fate of the Most Massive Stars 332: 34. Bibcode2005ASPC..332...33F.
7. "Type Ib Supernova Spectra". COSMOS – The SAO Encyclopedia of Astronomy. Swinburne University of Technology. Retrieved 2010-05-05.
8. Dray, L. M.; Tout, C. A.; Karakas, A. I.; Lattanzio, J. C. (2003). "Chemical enrichment by Wolf-Rayet and asymptotic giant branch stars". Monthly Notices of the Royal Astronomical Society 338 (4): 973–989. doi:10.1046/j.1365-8711.2003.06142.x. Bibcode2003MNRAS.338..973D.
9. Pols, O. (26 October – 1 November 1995). "Close Binary Progenitors of Type Ib/Ic and IIb/II-L Supernovae". Chiang Mai, Thailand. pp. 153–158. Bibcode1997ASPC..130..153P.
10. Woosley, S. E.; Eastman, R.G. (June 20–30, 1995). "Type Ib and Ic Supernovae: Models and Spectra". Begur, Girona, Spain: Kluwer Academic Publishers. pp. 821. doi:10.1007/978-94-011-5710-0_51. Bibcode1997ASIC..486..821W.
11. Williams, A. J. (1997). "Initial Statistics from the Perth Automated Supernova Search". Publications of the Astronomical Society of Australia 14 (2): 208–213. doi:10.1071/AS97208. Bibcode1997PASA...14..208W.
12. Ryder, S. D. (2004). "Modulations in the radio light curve of the Type IIb supernova 2001ig: evidence for a Wolf-Rayet binary progenitor?". Monthly Notices of the Royal Astronomical Society 349 (3): 1093–1100. doi:10.1111/j.1365-2966.2004.07589.x. Bibcode2004MNRAS.349.1093R.
13. Sadler, E. M.; Campbell, D. (1997). "A first estimate of the radio supernova rate". Astronomical Society of Australia.
14. Perets, H. B.; Gal-Yam, A.; Mazzali, P. A.; Arnett, D.; Kagan, D.; Filippenko, A. V.; Li, W.; Arcavi, I. et al. (2010). "A faint type of supernova from a white dwarf with a helium-rich companion". Nature 465 (7296): 322–325. doi:10.1038/nature09056. PMID 20485429. Bibcode2010Natur.465..322P.
15. Tsvetkov, D. Yu. (1987). "Light curves of type Ib supernova: SN 1984l in NGC 991". Soviet Astronomy Letters 13: 376–378. Bibcode1987SvAL...13..376T.
16. Homeier, N. L. (2005). "The Effect of Type Ibc Contamination in Cosmological Supernova Samples". The Astrophysical Journal 620 (1): 12–20. doi:10.1086/427060. Bibcode2005ApJ...620...12H. | 2022-05-26 05:13:02 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8777058720588684, "perplexity": 3598.2552941829913}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662601401.72/warc/CC-MAIN-20220526035036-20220526065036-00503.warc.gz"} |
http://libros.duhnnae.com/2017/jun7/149810524428-Holomorphic-geometric-models-for-representations-of-C-algebras-Mathematics-Operator-Algebras.php | # Holomorphic geometric models for representations of $C^*$-algebras - Mathematics > Operator Algebras
Holomorphic geometric models for representations of $C^*$-algebras - Mathematics > Operator Algebras - Download this document as a PDF. PDF documentation available for free download. Also available to read online.
Abstract: Representations of $C^*$-algebras are realized on section spaces of holomorphic homogeneous vector bundles. The corresponding section spaces are investigated by means of a new notion of reproducing kernel, suitable for dealing with involutive diffeomorphisms defined on the base spaces of the bundles. Applications of this technique to dilation theory of completely positive maps are explored and the critical role of complexified homogeneous spaces in connection with the Stinespring dilations is pointed out. The general results are further illustrated by a discussion of several specific topics, including similarity orbits of representations of amenable Banach algebras, similarity orbits of conditional expectations, geometric models of representations of Cuntz algebras, the relationship to endomorphisms of ${\mathcal B}({\mathcal H})$, and non-commutative stochastic analysis.
Author: Daniel Beltita, Jose E. Gale
Fuente: https://arxiv.org/ | 2019-01-24 08:17:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7386907935142517, "perplexity": 4703.917026893398}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547584519757.94/warc/CC-MAIN-20190124080411-20190124102411-00356.warc.gz"} |
https://www.key2physics.org/metric-system | Metric System
A physical quantity is something that is measured, such as mass, length, and time. Each of these quantities is defined by a number, value, or magnitude, and a unit. Each physical quantity has its specific unit, and these units help us analyze what the given or asked quantity is.
These units are identified through different systems. Using different systems in one solution may be confusing; thus, scientists around the world use one common system when dealing with measurements. This system of measurement is the International System of Units (SI).
A list of base quantities with the corresponding units in which they are measured is given in the table below.
| Quantity | Unit | Symbol |
| --- | --- | --- |
| Mass | kilogram | kg |
| Length | meter | m |
| Time | second | s |
| Electric current | ampere | A |
| Temperature | kelvin | K |
| Amount of substance | mole | mol |
| Luminous intensity | candela | cd |
Prefixes are used to define smaller or larger values or measurement of the same quantities. Below is the list of prefixes and their corresponding factor:
| Prefix | Symbol | Multiplying Factor |
| --- | --- | --- |
| pico- | p | $10^{-12}$ |
| nano- | n | $10^{-9}$ |
| micro- | µ | $10^{-6}$ |
| milli- | m | $10^{-3}$ |
| centi- | c | $10^{-2}$ |
| deci- | d | $10^{-1}$ |
| kilo- | k | $10^{3}$ |
| mega- | M | $10^{6}$ |
| giga- | G | $10^{9}$ |
| tera- | T | $10^{12}$ |
To convert units, the following must be followed:
1. Identify the given unit and the unit in which the given unit must be converted.
2. Determine the conversion factor. The unit to be converted must be placed in the position (numerator or denominator) in which it can be cancelled out, so that the only unit left is the desired unit.
Example 1.
The length of the pole is 15 meter. What is its length in decimeters?
Given: 15 meters
Desired unit: decimeter
Conversion Factor: 1 dm = $10^{-1}$ m
Solution: $$15\;m\;\times\;{1\;dm \over 10^{-1}m}=150\;dm$$
Example 2.
The speed of the race car is 140 km/h. Express the race car's speed in m/s.
Given: speed = 140 km/h
Conversion Factors: 1 km = $10^{3}$ m and 1 h = 3,600 s
Solution:
$$140\;\frac{km}{h}\times {10^3\;m \over 1\;km}\times {1\;h \over 3,600\;s}=38.89\;m/s$$
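The same conversion-factor method is easy to mechanize. Here is a small Python sketch (the helper function and its factor table are illustrative, not part of any standard library):

```
# Each unit maps to its value in base units (meters here). Converting
# multiplies by (base per from_unit) / (base per to_unit), exactly like
# cancelling units in the worked solutions above.
TO_METERS = {"m": 1.0, "dm": 1e-1, "cm": 1e-2, "km": 1e3}

def convert_length(value, from_unit, to_unit):
    return value * TO_METERS[from_unit] / TO_METERS[to_unit]

print(convert_length(15, "m", "dm"))  # 150.0 dm  (Example 1)
print(140 * 1e3 / 3600)               # 38.888... m/s from km/h (Example 2)
```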
Derived Units
Derived units, as the term itself suggests, are derived from the base units. A derived unit is a combination of two or more base units. These base units are multiplied or divided by one another, not added or subtracted. The following are some commonly used derived units:
| Quantity | Unit | Derived Unit |
| --- | --- | --- |
| velocity | m/s | m s$^{-1}$ |
| frequency | hertz (Hz) | 1/s or s$^{-1}$ |
| energy | joule (J) | kg m$^2$ s$^{-2}$ |
| force | newton (N) | kg m s$^{-2}$ |
| electric charge | coulomb (C) | A s |
| acceleration | m/s$^2$ | m s$^{-2}$ |
Example 1.
A 5-kg object is pushed and then accelerates at 2 m/s$^2$. How much force is exerted on the object?
Solution:
$$F=ma=(5\;kg)(2\;m/s^2)=10\;kg\;m/s^2=10\;N$$ | 2021-07-26 04:51:02 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.764178454875946, "perplexity": 2251.083140385922}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152000.25/warc/CC-MAIN-20210726031942-20210726061942-00224.warc.gz"} |
https://stats.stackexchange.com/questions/125089/define-prior-probabilities-in-naive-bayes-with-unbalanced-classes-and-asymetric | define prior probabilities in naive Bayes with unbalanced classes and asymmetric cost
I'm trying to apply Naive bayes to the following supervised problem:
• It's a binary classification problem
• The classes are unbalanced. The target class represents 0.004266432 of the total and the majority class 0.995733568.
• There is also an unbalanced cost scheme given by the following formula:
Profit = 5000 * TP - 100 * FP
TP: True Positive - FP: False Positive
The objective is to maximize the Profit function.
I'm using the klaR package in R to fit the model, so it's possible to adjust the priors.
Questions:
1) Is it possible, using the prior probabilities, to improve the model taking into consideration the asymmetric cost scheme and/or the class imbalance?
2) The predict() function outputs a class prediction and a probability. The problem is that the probabilities of the minority class are too small. Is it possible to use the priors or scale the probabilities in a clever way to get a better cutoff point?
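(An aside on question 2: the cost scheme above pins down a profit-maximizing cutoff independently of the classifier. A hypothetical Python/scikit-learn sketch follows; it is an analogue of, not actual, klaR code, and the threshold 100/5100 comes from requiring 5000·p − 100·(1 − p) > 0.)

```
# Hypothetical scikit-learn analogue (not klaR): (a) overriding the class
# priors and (b) choosing a profit-maximizing cutoff instead of 0.5.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
n = 20000
y = (rng.random(n) < 0.004).astype(int)            # ~0.4% positives
X = rng.normal(loc=y[:, None] * 1.5, size=(n, 2))  # toy features

# Priors estimated from training proportions (the usual default) ...
nb = GaussianNB().fit(X, y)
# ... versus explicitly supplied priors:
nb_flat = GaussianNB(priors=[0.5, 0.5]).fit(X, y)

p = nb.predict_proba(X)[:, 1]
# Profit = 5000*TP - 100*FP  =>  predict positive when p > 100/5100,
# rather than at the default 0.5 cutoff.
threshold = 100.0 / 5100.0
pred = (p > threshold).astype(int)
tp = np.sum((pred == 1) & (y == 1))
fp = np.sum((pred == 1) & (y == 0))
print("profit:", 5000 * tp - 100 * fp)
```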
So far, the results I get using Bayes are half as good compared to other methods (random forest, lasso). So I'm pretty sure there is a way to improve the naive Bayes approach.
The NaiveBayes function in klaR already computes class priors from the proportions in the training set. If the numbers you have given are computed from the training set, then there is nothing to gain by specifying priors to the function. The cost scheme is irrelevant when fitting a naïve Bayes model. | 2020-06-06 18:05:02 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.73724764585495, "perplexity": 838.2305990984383}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348517506.81/warc/CC-MAIN-20200606155701-20200606185701-00311.warc.gz"} |
http://tex.stackexchange.com/questions/3490/bad-spacing-of-math-letters-within-italic-text?answertab=active | # Bad spacing of math letters within italic text
I use the standard AMS theorem style, which means that my theorems are set in italic. In combination with math variables, this sometimes gives horrible spacing: The input If $U$ or $V$ \dots yields
I see two spacing problems here: The space between "If" and "U" is too small, and the space between "U" and "or" is too large. Thus, the output would look a lot better if the "U" would be moved a bit to the right. One non-solution is to remove the dollar signs: If U or V \dots yields
Here the spacing is a lot better, but now the problem is that a different font (namely italic) is used for "U" and "V", which is similar but not quite the same. Another non-solution is to use italic correction \/ after "If": This only corrects the first space (and it is not nice if one has to remember typing \/ all the time).
My present "solution" is to apply manual corrections where I find it appropriate, which of course is a real nuisance. Does anyone have a better solution? Do XeTeX or LuaTeX offer something?
(I think I do understand what causes the problem. The idea is to show the bounding boxes of the relevant characters in both examples:
What you see is that the spacing of the bounding boxes is good in both cases. But the italic letters tend to stick out of their boxes to the right, and with "U" (in the right picture) and "o" you see that they have some white space in the left of the box. The math "U" (in the left picture), however, does not have this white space in the left, and it doesn't stick out to the right. As a result, the math "U" sits too far to the left.)
EDIT:
Khaled is quite right, the space between the math "U" and "or" is so large since the math "U" includes an italic correction. This is explicitly described in the infamous Appendix G of the TeXbook, rule 17. So the math "U" doesn't stick out of its box since the box includes the italic correction, and this is quite alright if the math in embedded in roman text. I just have no idea how to get rid of the italic correction if the math is already in some italic text!
-
I don't know enough TeX to have an answer, but you need to prevent the italic correction inside the second formula after the U; the italic correction is the extra white space you see after the U. – Khaled Hosny Sep 26 '10 at 17:58
nobody has mentioned that if a line in an italic paragraph begins with a word like "Very", that line will look indented relative to the lines above and below. so it's not only math that's a problem ... the solution will need to know the shapes of the characters, not just the metrics. – barbara beeton Sep 20 '11 at 20:26
@barbara: Thanks for your comment. I've mentioned (actually rather complained about) this in another question. The solution is using microtype with an improved protrusion table. I've mailed this improved table to Robert Schlicht, but I don't know if it's incorporated into microtype already. – Hendrik Vogt Sep 21 '11 at 7:43
Update: this previous answer to another, related, question already mentioned the \noic macro which is discussed here.
Here is how to suppress the italic correction when exiting math after a letter.
\documentclass[a4paper]{article}
\usepackage[utf8]{inputenc}
\usepackage[vscale=0.82]{geometry}
\begin{document}
\ttfamily
\def\noic{\sb{}\kern-\scriptspace }
\def\mathfont{\usefont{OML}{cmm}{m}{it}}
\mathsurround0pt % is default anyhow
\newbox\letterbox
\newcount\letter
%\the\scriptspace
\begin{verbatim}
\def\noic{\sb{}\kern-\scriptspace }
$<letter>\noic$ gives the same as \usefont{OML}{cmm}{m}{it}<letter>
<letter> <letter>\/ $<letter>$ $<letter>_{}$ $<letter>\noic$
\end{verbatim}
\letter=`a
\noindent\loop
\makebox[.05\linewidth]{$\char\letter$}%
\setbox\letterbox=\hbox{\mathfont\char\letter}%
\makebox[.18\linewidth][r]{\the\wd\letterbox}%
\setbox\letterbox=\hbox{\mathfont\char\letter\/}%
\makebox[.18\linewidth][r]{\the\wd\letterbox}%
\setbox\letterbox=\hbox{$\char\letter$}%
\makebox[.18\linewidth][r]{\the\wd\letterbox}%
\setbox\letterbox=\hbox{$\char\letter_{}$}%
\makebox[.18\linewidth][r]{\the\wd\letterbox}%
\setbox\letterbox=\hbox{$\char\letter\noic$}%
\makebox[.18\linewidth][r]{\the\wd\letterbox}\\
\ifnum\letter<`z \advance\letter 1
\repeat
%\clearpage
\letter=`A
\noindent\loop
\makebox[.05\linewidth]{$\char\letter$}%
\setbox\letterbox=\hbox{\mathfont\char\letter}%
\makebox[.18\linewidth][r]{\the\wd\letterbox}%
\setbox\letterbox=\hbox{\mathfont\char\letter\/}%
\makebox[.18\linewidth][r]{\the\wd\letterbox}%
\setbox\letterbox=\hbox{$\char\letter$}%
\makebox[.18\linewidth][r]{\the\wd\letterbox}%
\setbox\letterbox=\hbox{$\char\letter_{}$}%
\makebox[.18\linewidth][r]{\the\wd\letterbox}%
\setbox\letterbox=\hbox{$\char\letter\noic$}%
\makebox[.18\linewidth][r]{\the\wd\letterbox}\\
\ifnum\letter<`Z \advance\letter 1
\repeat
\end{document}
On popular request,
\clearpage
\newgeometry{hscale=0.9}
\thispagestyle{empty}
\def\original{If $U$ or $V$ and $X$, and $f$ from $j$. Let $T$ be $S$ if $Y$.}
\def\improved{If\/ $U\noic$ or\/ $V\noic$ and\/ $X\noic$, and\/ $f$ from\/ $j\noic$. Let\/ $T\noic$ be\/ $S\noic$ if\/ $Y\noic$.}
\normalfont\itshape
\small
\original
\improved
\medskip
\normalsize
\original
\improved
\medskip
\large
\original
\improved
\medskip
\huge
\original
\improved
\medskip
\bigskip
\bfseries\boldmath
\small
\original
\improved
\medskip
\normalsize
\original
\improved
\medskip
\large
\original
\improved
\medskip
\huge
\original
\improved
-
you've produced a table of the values of the italic correction for cmmi10, but you haven't shown how to use the values in a simple example. i'd like to see a minimal example using the sentence shown in the accepted answer, as \textit{...}, using only the defined macro in the recommended manner. oh, please do it also in several sizes: normal size, \small and \large, without changing the macro definition. – barbara beeton Jan 11 '13 at 19:09
@barbarabeeton what do you mean by 'how to use the values'? the values are not used, they were just displayed to see the effect of the \noic macro I defined. – jfbu Jan 11 '13 at 20:07
i was mistaken about how to use \noic. the use example clarifies that very nicely. thanks. however, the input is really too complicated for most people (and rather error prone for your average typist); i think that using mathtools and \mathtoolsset{mathic=true} is probably the best compromise, even though it requires using \(...\) rather than $...$. – barbara beeton Jan 11 '13 at 22:47
@HendrikVogt yes you are right on it not being a foolproof fix for "post math" and I am sorry for the duplicate _{}\kern-\scriptspace! I did not know about this other answer of yours ... – jfbu Jan 12 '13 at 13:09
@jfbu: No need to be sorry! Your answer still gives a huge improvement over the default spacing, and I already gave it a +1. Thanks again! – Hendrik Vogt Jan 12 '13 at 15:25
This is fixed in the mathtools package (see section 4.1 of the package documentation).
Here is an example. Note that math must be typed using \( and \):
\documentclass{article}
\usepackage{amsmath,mathtools}
\begin{document}
\mathtoolsset{mathic=false}
\textit{If \(U\) or \(V\) \dots.}
\par Good:
\mathtoolsset{mathic=true}
\textit{If \(U\) or \(V\) \dots.}
\end{document}
-
Hmm, no, this only solves half of my problem: It automatically puts the italic correction after "If" that I mentioned in my question (so it saves me typing \/ all the time), but it does not correct the space between "U" and "or". Moreover, I don't like using \( and \). Still, this answer is not bad: The output does look better with mathic=true. – Hendrik Vogt Sep 26 '10 at 16:46
I see, it doesn't fully do what it advertises.... And perhaps some guru can explain why mathtools needs the \(...\) delimiters to work. – Konrad Swanepoel Sep 26 '10 at 18:18
I'm not a guru, but mathtools just redefines '\(' so that it can do the italic correction, but it doesn't alter '\)' – Hendrik Vogt Oct 7 '10 at 16:52
So there's a need for a better package for italic correction in maths than mathtools. – Konrad Swanepoel Oct 12 '10 at 5:29
Has Knuth given any recommendations about how to deal with this problem? I can't find anything explicitly about it in the TeXbook. Interestingly, on page 340 is, effectively, {\sl ... of math is ...}. (It does look odd in typeset form on page 341.) – MSC Mar 10 '11 at 19:35
OK, I produced an absolutely crazy "solution" myself. This is mostly to make clearer what the problems are; I wouldn't suggest using the (very long) code below. This "solution" only provides italic correction for single letters A to Z and a to z, and it works by making $ active. (I could also have used \( and \), but I don't like those.) Moreover, everything is adjusted "by hand" for 10pt CM fonts, so it won't work for other fonts (but should approximately work for other font sizes). Here's the output:
In the 1st line you see the result of If $U$ or $V$ ... without any correction; in the 2nd line my correction is applied, and in the 3rd line the $s are omitted, i.e., the usual italic font is used. I'm not claiming that the 2nd line is good on all counts, e.g. the space between "f" and "from" is rather small. What I wanted to achieve is that the spacing is just as with the "normal" italic font, that is, in the second and third lines the spacing is (almost) the same. (The 2nd line is slightly longer since the math letters are wider.) Note in particular that the spacing before punctuation in the 2nd line is different from the 1st line. (I'm not sure which version is the better one.) Clearly, the positioning of "U", "V" and "Y" in the 1st line is not good (I would say horrible); in the 2nd line it's a lot better. Of course one could change all these numbers in my code to try and further improve the spacing. But I only wanted to point out something else: If you look at the numbers, then you see that it would be very hard indeed to have this correction "automatically" and without changing the font metrics.
\documentclass{article}
\makeatletter
\let\mydollar=$
\catcode`\$=\active
\def\my@testtoken{\my@testtoken}
\def$#1${\ifx\my@testtoken#1\my@testtoken
\mydollar\mydollar
\else
\test@single@character#1\my@testtoken
\fi
}
\def\test@single@character#1#2\my@testtoken{%
\def\math@format##1{\mydollar##1\mydollar}%
\ifx\mytesttoken#2\mytesttoken
\ifcat#1a%
\ifdim\fontdimen\@ne\font>\z@
\def\math@format##1{\mydollar\xdef\currentfont{\the\textfont1}\mydollar
{\corrected{##1}}%\currentfont##1}%
}%
\fi
\fi
\fi
\math@format{#1#2}%
}
\def\corrected#1{\csname @correct@#1\endcsname}
\def\correct#1#2,#3,{\expandafter\def\csname @correct@#1\endcsname{\mydollar\mskip#2mu#1\mskip-#3mu\mydollar}}
\makeatother
\correct A0.15,0, %1st number is the correction before the letter,
\correct B0.3,1.5, %2nd number is (minus) the correction after it.
\correct C1.75,2.2,
\correct D0.25,1.4,
\correct E0.3,1.7,
\correct F0.3,1.95,
\correct G1.8,1.15,
\correct H0.25,2.6,
\correct I0.3,2.6,
\correct J0.1,2.2,
\correct K0.3,2.4,
\correct L0.25,0.6,
\correct M0.3,2.6,
\correct N0.3,2.6,
\correct O1.75,1.3,
\correct P0.2,1.5,
\correct Q1.75,1.3,
\correct R0.2,0.2,
\correct S0.4,1.8,
\correct T2.7,1.9,
\correct U2.4,2.6,
\correct V2.4,2.95,
\correct W2.4,2.9,
\correct X0.4,2.5,
\correct Y2.6,3.1,
\correct Z0.4,2.2,
\correct a1.2,1,
\correct b1.2,0.3,
\correct c1.2,0.2,
\correct d1.2,0.8,
\correct e1.2,0.85,
\correct f-1.5,3.5,
\correct g0.7,1.2,
\correct h0.4,1,
\correct i1,1.4,
\correct j-0.5,2.2,
\correct k0.4,1.5,
\correct l0.9,1.8,
\correct m1,0.95,
\correct n1,0.95,
\correct o1.2,0.3,
\correct p1,0.3,
\correct q1.2,1.2,
\correct r1,2,
\correct s0.5,1,
\correct t1,1.25,
\correct u1,0.95,
\correct v1,1.55,
\correct w1,1.5,
\correct x0.4,1.75,
\correct y1,1.2,
\correct z0.4,1.75,
\newcommand\test[1]{%
{\let$\mydollar #1} \par
#1 \par
\let$\relax #1
}
\begin{document}
\it
\test{If $U$ or $V$ and $X$, and $f$ from $j$. Let $T$ be $S$ if $Y$.}
\end{document}
-
Nice job. Maybe you should submit this patch to mathtools? – Matthew Leingang Nov 4 '10 at 18:27
@Matthew: Glad that you like it, but as I wrote above: It's more a case study since it is tested only with 10pt CM, and works only for single latin letters. I'd need someone professional to make a good patch out of it, I think. – Hendrik Vogt Nov 5 '10 at 9:17 | 2014-09-20 01:53:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9185711741447449, "perplexity": 3702.06534920086}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657132495.49/warc/CC-MAIN-20140914011212-00326-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"} |
https://calculator-online.org/mathlogic/expr/bcf8d3232a4bc7ff27b23d71818a02c5 | Mister Exam
# Expression (p⇒(q⇒r))⇒((p⇒q)⇒(q⇒r))
### The solution
You have entered [src]
(p⇒(q⇒r))⇒((p⇒q)⇒(q⇒r))
$$\left(p \Rightarrow \left(q \Rightarrow r\right)\right) \Rightarrow \left(\left(p \Rightarrow q\right) \Rightarrow \left(q \Rightarrow r\right)\right)$$
Detail solution
$$q \Rightarrow r = r \vee \neg q$$
$$p \Rightarrow \left(q \Rightarrow r\right) = r \vee \neg p \vee \neg q$$
$$p \Rightarrow q = q \vee \neg p$$
$$\left(p \Rightarrow q\right) \Rightarrow \left(q \Rightarrow r\right) = r \vee \neg q$$
$$\left(p \Rightarrow \left(q \Rightarrow r\right)\right) \Rightarrow \left(\left(p \Rightarrow q\right) \Rightarrow \left(q \Rightarrow r\right)\right) = p \vee r \vee \neg q$$
Simplification [src]
$$p \vee r \vee \neg q$$
p∨r∨(¬q)
Truth table
+---+---+---+--------+
| p | q | r | result |
+===+===+===+========+
| 0 | 0 | 0 | 1 |
+---+---+---+--------+
| 0 | 0 | 1 | 1 |
+---+---+---+--------+
| 0 | 1 | 0 | 0 |
+---+---+---+--------+
| 0 | 1 | 1 | 1 |
+---+---+---+--------+
| 1 | 0 | 0 | 1 |
+---+---+---+--------+
| 1 | 0 | 1 | 1 |
+---+---+---+--------+
| 1 | 1 | 0 | 1 |
+---+---+---+--------+
| 1 | 1 | 1 | 1 |
+---+---+---+--------+
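The table can be reproduced with a few lines of standard-library Python (an independent check, not part of the calculator output):

```
# Enumerate all assignments in the same order as the table above and
# evaluate the simplified expression p or r or (not q).
from itertools import product

for p, q, r in product((0, 1), repeat=3):
    result = int(p or r or (not q))
    print(p, q, r, result)
```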
DNF [src]
$$p \vee r \vee \neg q$$
p∨r∨(¬q)
PDNF [src]
$$p \vee r \vee \neg q$$
p∨r∨(¬q)
CNF [src]
$$p \vee r \vee \neg q$$
p∨r∨(¬q)
$$p \vee r \vee \neg q$$
p∨r∨(¬q) | 2022-10-07 21:34:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21283498406410217, "perplexity": 872.4058159364208}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00096.warc.gz"} |
https://www.techwhiff.com/issue/if-y-is-the-principal-square-of-root-5-what-must-be--4172 | If y is the principal square root of 5, what must be true
Question:
if y is the principal square root of 5, what must be true
The Indian leader who sent Buddhist missionaries across his empire to spread Buddhism was... | 2022-08-08 08:25:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3669026792049408, "perplexity": 3746.7989401555797}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570767.11/warc/CC-MAIN-20220808061828-20220808091828-00130.warc.gz"} |
https://www.vanessabenedict.com/1-4-oz-to-grams-gold/ | # How many grams is 1/3 oz of gold?
#### By Vanessa
Jun 9, 2022
Equals: 7.78 grams (g) in gold mass. Calculate grams of gold per 1/4 troy ounce unit. The gold converter.
## How many grams is 1/3 oz of gold
Equivalent to: 10.37 grams (g) in gold mass. Calculate the number of grams of gold in 1/3 troy ounce.
## How much does a 1/4 grain of gold weigh
Equivalent: 0.016 grams (g) in weight of gold. Calculated for a 1/4 grain unit of gold.
## What does 1 oz of gold weigh
The exact weight of the international troy ounce is currently 31.1034768 grams; one troy ounce of the yellow metal is thus equal to 31.1034768 grams. The fluid ounce is also used to measure liquids in bulk.
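A short sketch of where these numbers come from (plain Python; the constants are the standard troy definitions, the variable names are mine):

```
# 1 troy ounce = 31.1034768 g exactly; 480 grains = 1 troy ounce.
TROY_OUNCE_G = 31.1034768
GRAIN_G = TROY_OUNCE_G / 480  # = 0.06479891 g per grain

print(round(TROY_OUNCE_G / 4, 2))  # 7.78  g in 1/4 troy oz
print(round(TROY_OUNCE_G / 3, 2))  # 10.37 g in 1/3 troy oz
print(round(GRAIN_G / 4, 3))       # 0.016 g in 1/4 grain
```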
## How many weights of 200 grams make 1000 grams
Five: 5 × 200 grams = 1000 grams (1 kilogram).
## How many grams of nitrogen are in a diet consisting of 100 grams of protein
Why? When reporting the amount of protein or amino acids in the diet, you can use either of these values to determine the amount of nitrogen in the given amount of protein. Protein contains about 16% nitrogen; converting this to a factor by dividing 100% by 16% gives 6.25.
## Why are grams called grams
By mass, one gram is equal to one thousandth of a liter (one cubic centimeter) of water at a temperature of 4 degrees Celsius. The word "gram" comes from the late Latin "gramma", meaning a small weight, via the French "gramme". The symbol for the gram is g.
## What amount in grams of quicklime can be obtained from 25 grams of CaCO3 on calcination
Full answer: Step by step. Option C is the correct answer: decomposing $\text{25 g}$ of calcium carbonate gives $\text{14 g}$ of calcium oxide, which is also called quicklime.
## How many grams of 80% pure marble stone on calcination can give 14 grams of quicklime
How many grams of marble stone with a purity of 80% give 14 grams of quicklime on calcination? 80 g CaCO3 = 100 g marble stone, and 14 g of quicklime requires 25 g of CaCO3, so $\frac{25}{80}\times 100 = 31.25$ g.
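The arithmetic behind both answers can be laid out explicitly (standard stoichiometry; molar masses CaCO3 ≈ 100 g/mol, CaO ≈ 56 g/mol):

```
\[
  \mathrm{CaCO_3} \;\xrightarrow{\;\Delta\;}\; \mathrm{CaO} + \mathrm{CO_2},
  \qquad
  25\ \mathrm{g\ CaCO_3}\times\frac{56\ \mathrm{g\ CaO}}{100\ \mathrm{g\ CaCO_3}}
  = 14\ \mathrm{g\ CaO},
\]
\[
  \text{marble needed at } 80\%\ \text{purity: }\;
  \frac{25\ \mathrm{g}}{0.80} = 31.25\ \mathrm{g}.
\]
```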
Untitled Document | 2022-12-01 17:08:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6097529530525208, "perplexity": 3821.3827830612695}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710829.5/warc/CC-MAIN-20221201153700-20221201183700-00821.warc.gz"} |
https://myassignments-help.com/2022/09/07/feynman-diagram-dai-xie-phys501/ | # Physics assignment help | Feynman diagram | PHYS501
## Feynman diagram | High density limit
Under what conditions could the Coulomb interaction between electrons be treated as a small perturbation? To answer this question, we define a dimensionless parameter $r_s$ by
$$(4 \pi / 3) r_s^3 a_0^3=V / N ,$$
where $a_0=\hbar^2 /\left(m e^2\right)$ is the Bohr radius (in SI units, $e^2 \rightarrow e^2 / 4 \pi \varepsilon_0$). $r_s a_0$ is the radius of a sphere whose volume is equal to the average volume occupied by one electron. Defining the dimensionless quantities
$$V^{\prime}=V /\left(r_s a_0\right)^3 . \quad \mathbf{K}=r_s a_0 \mathbf{k} . \quad \mathbf{Q}=r_s a_0 \mathbf{q} .$$
we may recast the Hamiltonian in Eq. $(4.11)$ into the following form:
$$H=\frac{e^2}{r_s^2 a_0}\left[\sum_{\mathbf{k} \sigma} K^2 c_{\mathbf{k} \sigma}^{\dagger} c_{\mathbf{k} \sigma}+\frac{r_s}{V^{\prime}} \sum_{\mathbf{Q}}{}^{\prime} \sum_{\mathbf{k} \sigma} \sum_{\mathbf{k}^{\prime} \sigma^{\prime}} \frac{2 \pi}{Q^2} c_{\mathbf{k}+\mathbf{q}\,\sigma}^{\dagger} c_{\mathbf{k}^{\prime}-\mathbf{q}\,\sigma^{\prime}}^{\dagger} c_{\mathbf{k}^{\prime} \sigma^{\prime}} c_{\mathbf{k} \sigma}\right] \tag{4.13}$$
This expression for $H$ is very telling: compared to the kinetic energy of electrons, the Coulomb interaction is negligible in the high density limit, $r_s \rightarrow 0$. This conclusion appears to be counterintuitive, but a moment's reflection reveals its validity. Coulomb repulsion scales as $1 / r_s$, and from Heisenberg's uncertainty principle, the electron's momentum also scales as $1 / r_s$. Therefore, the kinetic energy scales as $1 / r_s^2$. Thus, as $r_s \rightarrow 0$, even though the Coulomb energy grows larger, the kinetic energy of the electrons grows larger at a faster rate. We conclude that in the high-density limit, the Coulomb repulsion is weak in comparison with the kinetic energy, and it is permissible to treat it within the framework of perturbation theory. In real metals, $r_s=2-6$, which is neither too small nor too large. Nevertheless, in most metals, the single-particle approximation explains many of their low energy properties. This is because the Coulomb interaction, even when it is strong, is not very effective at changing the momentum distribution of the electrons: most of the states into which they could scatter are already occupied.
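The scaling argument can be summarized in one line (a standard estimate, not part of this excerpt): taking a typical momentum $k \sim 1/(r_s a_0)$ and using $a_0 = \hbar^2/(me^2)$,

```
\[
  \langle T\rangle \sim \frac{\hbar^2 k^2}{2m}
    \sim \frac{\hbar^2}{2m\,r_s^2 a_0^2}
    = \frac{e^2}{2\,r_s^2 a_0},
  \qquad
  \langle V_C\rangle \sim \frac{e^2}{r_s a_0},
  \qquad
  \frac{\langle V_C\rangle}{\langle T\rangle} \sim r_s \to 0 .
\]
```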
## Feynman diagram | First order perturbation
Treating $V_C$ as a perturbation, the energy per electron in the ground state is written as a perturbation series
$$E / N=E_0 / N+E_1 / N+E_2 / N+\cdots$$
$E_1$ is given by
$$E_1=\frac{1}{2 V} \sum_{\mathbf{q}}{}^{\prime} \sum_{\mathbf{k} \sigma} \sum_{\mathbf{k}^{\prime} \sigma^{\prime}} \frac{4 \pi e^2}{q^2}\left\langle F\left|c_{\mathbf{k}+\mathbf{q}\,\sigma}^{\dagger} c_{\mathbf{k}^{\prime}-\mathbf{q}\,\sigma^{\prime}}^{\dagger} c_{\mathbf{k}^{\prime} \sigma^{\prime}} c_{\mathbf{k} \sigma}\right| F\right\rangle .$$
The action of $c_{\mathbf{k}^{\prime} \sigma^{\prime}} c_{\mathbf{k} \sigma}$ on $|F\rangle$ gives zero for $k, k^{\prime}>k_F$. Hence
$$c_{\mathbf{k} \sigma}^{\dagger} c_{\mathbf{k} \sigma}|F\rangle=\theta\left(k_F-k\right)|F\rangle$$
myassignments-help擅长领域包含但不是全部: | 2023-03-24 03:33:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8878138661384583, "perplexity": 425.5051408186195}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945242.64/warc/CC-MAIN-20230324020038-20230324050038-00129.warc.gz"} |
https://mathoverflow.net/questions/382327/lagrangian-of-reshetikhin-turaev-tfts | # Lagrangian of Reshetikhin-Turaev TFT's
One of the results from the Reshetikhin-Turaev package is that given a modular tensor category $$\mathscr{C}$$ one can construct a TFT $$Z$$. In the case where $$\mathscr{C}$$ is the category of positive energy representations of the loop group, it is accepted that $$Z$$ coincides with the Chern-Simons theory. Now, the latter has a well-known Lagrangian. Is there a way to recover this Lagrangian from the RT construction? More generally, can one always have a Lagrangian for any of these RT theories? How are these constructed?
I don't think there's a way to extract a Lagrangian from the Reshetikhin-Turaev construction. There's certainly not a unique way to do so.
Physicists believe that most QFTs are "non-Lagrangian," meaning that these QFTs cannot be produced by quantizing classical field theories. To make this statement precise, one has to say what exactly is meant by QFT, and this is out of reach for now. But what this means is that if there were some fully general classification of QFTs in a given dimension, people expect that QFTs studied with a Lagrangian and path integral don't give you all of them.
The Reshetikhin-Turaev construction is a very general construction of 3d anomalous TFTs: Bartlett-Douglas-Schommer-Pries-Vicary show that it sees all once-extended 3d TFTs for a particular target 2-category. So I'd expect that unless there's some good reason to believe otherwise, there are 3d TFTs produced by the Reshetikhin-Turaev construction that are not Chern-Simons theories. However, I do not know of an example asserting this, so it's in principle possible that there could be a proof that all RT theories arise from Chern-Simons theories.
One thing which we do have examples of is 3d TFTs admitting multiple inequivalent descriptions as Chern-Simons theories. For example, if $$G = \mathbb Z/2$$, the possible choices of level are given by $$H^4(B\mathbb Z/2;\mathbb Z)\cong\mathbb Z/2$$; let $$Z$$ be the Chern-Simons theory for the nonzero level. We could also consider $$G = \mathrm U(1)\times\mathrm U(1)$$, whose levels are given by $$2\times 2$$ matrices; Chern-Simons theory for the level
$$\begin{pmatrix}2 & 0\\0 & -2\end{pmatrix}$$
is isomorphic to $$Z$$. | 2021-02-26 21:50:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 11, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8177748322486877, "perplexity": 261.6016626893494}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178357984.22/warc/CC-MAIN-20210226205107-20210226235107-00483.warc.gz"} |
https://openmx.ssri.psu.edu/thread/4121 | # Nested models
7 posts / 0 new
Offline
Joined: 10/05/2015 - 18:51
Nested models
A fixed-effects model is a more restricted version of a random-effects model when meta-analyzing correlations using tssem1(), correct? Is there a way to do a nested model test? I know the degrees of freedom for the difference test would be the number of random effects estimated. I'm tripped up because the fixed-effects model gives many fit stats, whereas the random-effects model only gives a -2 log likelihood.
Thank you!
Offline
Joined: 10/08/2009 - 22:37
Hi Katie, The answer is both
Hi Katie,
The answer is both yes and no. There are two versions of fixed-effects models implemented in tssem1(). When you specify the argument method="FEM", it uses the multiple-group SEM approach (Cheung & Chan, 2005). This fixed-effects model is not nested within the random-effects model.
Another fixed-effects model is to fix the variance component of the random effects at zero by specifying both method="REM" and RE.type="Zero". This fixed-effects model is nested within the random-effects model (see the following R code). Please note that this is tested on the boundary of the parameter space. The likelihood ratio test is likely wrong without the adjustment.
Best,
Mike
## Fixed-effects model by using a multiple-group SEM
fixed_no <- tssem1(Digman97$data, Digman97$n, method="FEM")
## Fixed-effects model by fixing the random-effects heterogeneity covariance matrix at zero
fixed_yes <- tssem1(Digman97$data, Digman97$n, method="REM", RE.type="Zero")
## Random-effects model
random <- tssem1(Digman97$data, Digman97$n, method="REM", RE.type="Diag")
## Nested models: testing on the boundary
anova(random, fixed_yes)
Offline
Joined: 10/05/2015 - 18:51
Fit stats
Thanks! I have done what you suggest, and I see in the anova() command that I get negative AIC values. I attach my code (code will also pull down data). In another dataset, in the -2LL column, the model with random effects specified has a negative -2LL and the fixed effects model, specified with method="REM" & RE.type="Zero" has a positive -2LL, which seems strange. Is there an error in my code?
Offline
Joined: 10/08/2009 - 22:37
Different SEM packages produce different AIC (http://openmx.psyc.virginia.edu/thread/89). AIC is usually used for model comparisons. You may choose the model with the smallest AIC as long as you are using the same package.
There is nothing wrong with positive -2LL. For example, http://openmx.psyc.virginia.edu/thread/1831 and http://blog.stata.com/2011/02/16/positive-log-likelihood-values-happen/
Joined: 10/05/2015 - 18:51
Nested models - clusters
OK - just to confirm - a negative AIC value (not when comparing models, just for each model on its own) is OK?
Also - you might have missed my question above (threading is a bit confusing here):
Is it also the case that when you add a cluster variable to tssem1(), and then you sum the chi-square associated with each group, that the total chi-square can be compared to a tssem1() with no cluster as a nested model? The df of the difference here would be the number of parameters estimated in each group.
Is there a way to fix or free individual correlations in tssem1() across subgroups, or must the model be specified in tssem2()?
Thanks again! It's really amazing to be able to get your feedback.
Katie
Joined: 10/08/2009 - 22:37
Yes, the (negative) AIC is not very useful on its own.
Sorry, I missed that part. Yes, the chi-square and its dfs are the sums of the individual chi-squares and dfs. You may compare them against the model without the cluster variable.
I haven't implemented this in metaSEM though it can be done in OpenMx. Suzanne Jak ([email protected]) and I have written a paper on this topic. She is the corresponding author. If you write to her, I am sure that she is willing to share it.
https://gamedev.stackexchange.com/questions/14269/variable-step-update-in-game-loop-is-falling-behind-how-can-i-get-around-this | # Variable-step update() in game loop is falling behind, how can I get around this?
I'm working on a minimal game engine for my next game. I'm using the delta update method as shown:
void update(double delta) {
// Update code that uses delta goes here
}
I have a deep hierarchy of updatable objects: a root updatable contains several updatables, each of which contains more updatables, and so on. Normally I'd just iterate through each of the root's children and update each one, which would then do the same for its children, and so on. However, passing a fixed value of delta to the root means that by the time the leaf updatables are reached, more than delta seconds have actually elapsed. This is causing noticeable desyncing in my game, and time synchronization is very important in my case (I'm working on a rhythm game).
Any ideas on how I should tackle this? I've considered using StopWatches and a global readable timer, but any advice would be helpful. I'm also open to moving to fixed timesteps as opposed to variable.
• Would be handy to know the language and platform here as I don't think this is a purely abstract problem. – Kylotan Jun 28 '11 at 23:18
• Continuous time is sampled once in every iteration: this means that time is "frozen", in the updating sense, for the whole duration of your tree walk. This is proven to work for small intervals, so the entire process has to be short. If you see something going wrong, the updating is probably taking too long. – FxIII Jun 29 '11 at 13:10
It's not clear why this should be a problem. If the delta is the time between updates, then it doesn't matter if it's called 1ms late providing it's consistently called 1ms late, which would appear to be the case. It's also not clear how you are observing that this is a problem given that presumably there is no visual output until the whole lot has returned.
It's also hard to imagine any time-based game that has an update() so slow that it takes even 1ms to update the object. What are you doing in these updates?
I would suspect that your delta calculations are simply wrong in some other way. Could you post a simplified main loop showing how you are obtaining this value?
I guess you have to use a fixed delta time across your whole scene update. It doesn't matter that actual time passes while you are processing earlier nodes; what matters is that all your nodes step forward by the same amount.
In general, you can update your scene using two different methods:
1. One is to have all objects in your scene be in the same time frame at the end of the update call (see the sketch after this list). It doesn't matter whether they are updated in two milliseconds or two hours; the only important thing is to have a snapshot of your game objects at a specific time T. This way you can show your game state, since in every picture all objects are at the same time. For example, you can't look at a picture of a man being chased by a lion and say "hey, that picture is wrong, the man and the lion are 2 seconds out of sync!"
2. You can update objects in your scene asynchronously. I'm not much of an expert in this field myself, but I know there are some simulations based on async updates of different nodes. I've seen this method used once, to optimize simulation speed in Conway's Game of Life; it's called Hashlife. Still, to give the user the simulation results, the application would continue the simulation until all the cells reached some specific generation and then show the results.
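To make method 1 concrete, here is a minimal C++ sketch (the type and function names are assumptions, not taken from the question): the clock is sampled once per frame, and that single delta is handed unchanged to every node, so the whole tree advances to the same snapshot time regardless of how long the walk takes.

#include <chrono>
#include <vector>

struct Updatable {
    std::vector<Updatable*> children;
    virtual void update(double delta) {
        // Every child receives the same delta: the whole tree is
        // advanced to one consistent snapshot time.
        for (Updatable* child : children) child->update(delta);
    }
    virtual ~Updatable() = default;
};

void runLoop(Updatable& root) {
    using clock = std::chrono::steady_clock;
    auto previous = clock::now();
    while (true) {  // exit condition omitted for brevity
        auto now = clock::now();
        double delta = std::chrono::duration<double>(now - previous).count();
        previous = now;
        root.update(delta);  // time is "frozen" for the whole walk
        // rendering / audio sync would go here
    }
}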
Now back to your problem: I assume you have some problem with syncing audio and video in your game. As Kylotan suggested, there may be something wrong with your delta time calculation. Then again, you can roughly predict how long your update call is going to take (for example, if you are running your game at 60 fps, you may assume each game cycle lasts around 0.016 seconds) and just play the music offset by that much from your actual game time.
https://mastodon.utwente.nl/@tdouzon/100746487777441747 | I just received a mail from the SNT : «SNT Abuse received a complaint from ICTS that your computer (mac@) allows database connections (ElasticSearch) from the internet. This functionality can be exploited by malicious people to steal confidential or private information. Would you therefore ensure that this service is configured to only allow connections from the clients that use the database service?
Please reply to us what you have done to solve the problem, so we can close the complaint»
Is there anything special I should do?
@tdouzon Thanks for sharing. I don't know why this happens. Maybe we should add settings to elasticsearch.yml to close connections from outside? c.c. @dolf
@hiemstra @dolf I tried adding those two lines to my .yml:
«xpack.security.transport.filter.allow: localhost
xpack.security.transport.filter.deny: _all»
But then I can't access ES anymore and I need to reboot my computer to make it work properly again.
@tdouzon @dolf As a follow-up: This problem occurs when you do not use a firewall on your laptop, or if you somehow opened up your firewall. I don't think this can be solved with Elasticsearch settings. Please enable the firewall on your system.
https://www.mathportal.org/analytic-geometry/conic-sections/ellipse.php
Analytic Geometry: (lesson 2 of 3)
## Ellipse
### Definitions:
1. An ellipse is the figure consisting of all those points for which the sum of their distances to two fixed points (called the foci) is a constant.
2. An ellipse is the figure consisting of all points in the plane whose Cartesian coordinates satisfy the equation
$\frac{(x - h)^2}{a^2} + \frac{(y - k)^2}{b^2} = 1$
where $h$, $k$, $a$ and $b$ are real numbers, and $a$ and $b$ are positive.
### Formulas:
Equations
An ellipse centered at the point $(h, k)$ and having its major axis parallel to the x-axis (so that $a > b$) is specified by the equation
$\frac{(x - h)^2}{a^2} + \frac{(y - k)^2}{b^2} = 1$
Parametric equations of the ellipse:
\begin{aligned} &x = h + a \cos t \\ &y = k + b \sin t \\ &-\pi \le t < \pi \end{aligned}
Major axis = 2a
Minor axis = 2b
Eccentricity
Define a new constant $0 \le \varepsilon < 1$, called the eccentricity ($\varepsilon = 0$ is the case of a circle). The eccentricity is:
$\varepsilon = \sqrt{1 - \frac{b^2}{a^2}}$.
The greater the eccentricity is, the more elongated is the ellipse.
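For instance, for an ellipse with $a = 3$ and $b = 2$ (the shape that appears in Example 2 below):

$\varepsilon = \sqrt{1 - \frac{2^2}{3^2}} = \frac{\sqrt{5}}{3} \approx 0.745$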
Foci:
If $c = \sqrt{a^2 - b^2}$ equals the distance from the center to either focus, then the foci are
\begin{aligned} &F1: \ \ \left( h - \sqrt{a^2 - b^2}, \ \ k \right) \\ &F2: \ \ \left( h + \sqrt{a^2 - b^2}, \ \ k \right) \end{aligned}
The distance between the foci is $2 a \varepsilon$.
Area
The area enclosed by an ellipse is $\pi a b$; in the case of a circle, where $a = b$, the expression reduces to the familiar $\pi a^2$.
Tangent line
If $D(x_0, y_0)$ is a fixed point of the ellipse, the equation of the tangent line to the ellipse at $D(x_0, y_0)$ is
$\frac{(x_0 - h)(x - h)}{a^2} + \frac{(y_0 - k)(y - k)}{b^2} = 1$
Example 1:
Given the following equation
$9x^2 + 4y^2 = 36$
a) Find the lengths of the major and minor axes.
b) Find the coordinates of the foci.
c) Sketch the graph of the equation.
Solution:
a) First write the given equation in standard form:
\begin{aligned} 9x^2 + 4y^2 &= 36 \ \ / \color{blue}{36} \\ \frac{9x^2}{36} + \frac{4y^2}{36} &= 1 \\ \frac{x^2}{4} + \frac{y^2}{9} &= 1 \\ \frac{x^2}{2^2} + \frac{y^2}{3^2} &= 1 \ \ \to \ \ \color{blue}{a = 2; \ b = 3} \end{aligned}
Since $b > a$, the major axis is vertical and its length is $2b = 6$.
The minor axis length is $2a = 4$.
b)
\begin{aligned} &F1: \ \ \left( h, k - \sqrt{b^2 - a^2} \right) = \left( 0,0 - \sqrt{3^2 - 2^2} \right) = \color{blue}{\left( 0, - \sqrt{5} \right)} \\ &F2: \ \ \left( h, k + \sqrt{b^2 - a^2} \right) = \left( 0,0 + \sqrt{3^2 - 2^2} \right) = \color{blue}{\left( 0, \sqrt{5} \right)} \end{aligned}
c) (graph omitted)
Example 2:
Sketch the graph of the ellipse whose equation is $\frac{(x - 2)^2}{9} + \frac{(y + 1)^2}{4} = 1$.
Solution:
It can be seen that the center of the ellipse is $(h, k) = (2, -1)$. Next, note that $a = 3, b = 2$.
It is known that the endpoints of the major axis are exactly 3 units left and right from the center, which places them at the points $(-1, -1)$ and $(5, -1)$.
It is, also, known that the endpoints of the minor axis are exactly 2 units above and below the center, which places them at the points $(2, 1)$ and $(2, -3)$.
Foci:
\begin{aligned} &F1: \ \ \left( h - \sqrt{a^2 - b^2}, k \right) = \left( 2 - \sqrt{3^2 - 2^2}, -1 \right) = \left( 2 - \sqrt{5}, -1 \right) \\ &F2: \ \ \left( h + \sqrt{a^2 - b^2}, k \right) = \left( 2 + \sqrt{3^2 - 2^2}, -1 \right) = \left( 2 + \sqrt{5}, -1 \right) \end{aligned}
https://www.albertopasca.it/whiletrue/tag/uiimage/ | ## Objc – Draw text along UIImage
The NSTextContainer class defines a region in which text is laid out. An NSLayoutManager object uses one or more NSTextContainer objects to determine where to break lines, lay out portions of text, and so on. An NSTextContainer object defines rectangular regions, and you can define exclusion paths inside the text container's bounding rectangle so that text flows around the exclusion path as it is laid out. You can create subclasses that define regions of nonrectangular shapes, such as circular regions.
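A minimal Swift sketch of the exclusion-path idea (the sizes and the sample string are made-up placeholders; a real implementation would attach the container to a view):

import UIKit

// Build the TextKit stack by hand: storage -> layout manager -> container.
let storage = NSTextStorage(string: "Long text that should wrap around a circular region...")
let layoutManager = NSLayoutManager()
storage.addLayoutManager(layoutManager)

let container = NSTextContainer(size: CGSize(width: 300, height: 400))
// Text laid out in this container will flow around the oval below.
let circle = UIBezierPath(ovalIn: CGRect(x: 100, y: 100, width: 120, height: 120))
container.exclusionPaths = [circle]
layoutManager.addTextContainer(container)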
https://17calculus.com/precalculus/functions/polynomials/factoring/ | ## 17Calculus Precalculus - Factoring Polynomials
Factoring is an important skill going into calculus. You will be doing a LOT of factoring in calculus. Master these techniques now and you will do much better in calculus. Many students struggle with calculus because their algebra skills are lacking, including factoring.
There are lots of factoring techniques and, unfortunately, there is no one technique that works in all cases. So you need to learn them well. You will find several that you will use a lot. Learn those first. Then keep a bookmark for this page and come back here when you need to remind yourself of the other techniques.
The major techniques are labeled as primary. Learn these well. The secondary techniques are usually special cases and you will use them but not as often as the primary techniques.
However, there are a few steps that will help you get started with any polynomial.
Steps
1. Rewrite the polynomial with its terms in order, highest power on the left. Of course, it would still work if you wrote them in reverse order, but most of the time your instructor and textbook will write them left to right, highest power on the left, so we suggest that you do the same.
2. Look for obvious common factors in all terms, like $$x$$ or constants. You do not need to see all of them at the same time. If you see one term, factor it out and then look for more until you think they are all factored out. Here is an example.
Completely factor this polynomial.
$$3x^2 + 6x$$
Problem Statement
Factor $$3x^2 + 6x$$
$$3x^2 + 6x = 3x(x + 2)$$
Solution
First, we will break each term down into its individual factors: $$3x^2 = 3 \cdot x \cdot x$$ and $$6x = 2 \cdot 3 \cdot x$$. Now we can see the common factors in each term. Both terms contain a 3, so let's factor out a 3 first: $$3 \cdot x \cdot x + 2 \cdot 3 \cdot x = 3( x \cdot x + 2 \cdot x)$$. Now look at what is left inside the parentheses. Each of those terms contains an $$x$$, so let's factor that out as well: $$3( x \cdot x + 2 \cdot x) = 3x(x + 2)$$. Looking inside the parentheses once more, the two remaining terms, $$x$$ and $$2$$, have nothing in common, so we are done.
Okay, so that is how we would factor this polynomial. Here is a video showing the GCF method. See which way you think is easier to understand. However, as usual, check with your instructor to see what they require.
### Freshmen Math Doctor - 2542 video solution
Before we get into the details of factoring, let's watch this video clip as an overview. This instructor shows several techniques with a few quick examples. If you don't understand everything in this video, that's okay. We cover most of these techniques on this page and give you a chance to practice them.
### freeCodeCamp.org - College Algebra - Factoring [23min-50secs]
video by freeCodeCamp.org
Factoring By GCF (Greatest Common Factor)
Before we go on, we want to mention this technique. We have recently watched videos with instructors using this technique (including the last video above). Many instructors just seem to wave their hands and come up with the GCF of each term. We recommend instead to use the step-by-step, one factor at a time technique we demonstrated in the previous example. Essentially, it is the same as GCF but you do not need to come up with the one large factor all at once. Of course, make sure you check with your instructor to see what they expect.
Testing Linear Factors
Before we get into factoring techniques, there is one concept that will help you a lot. Let's say you have a polynomial and you suspect that $$(x-1)$$ is a factor. What you can do is set this equal to zero and solve for $$x$$, i.e. $$x-1=0 \to x=1$$ and substitute $$x=1$$ into the polynomial. If the result is zero, then $$(x-1)$$ is a factor of the polynomial. Use long division of polynomials or synthetic division to factor it out. This will reduce the highest power by one and perhaps give you a polynomial that you can then factor using simpler techniques.
So your next question is, why would I think that $$(x-1)$$ might be a factor? Well, one idea is to plot the polynomial on your calculator (if your instructor allows it) and see where it might cross the x-axis, i.e. try to see if you can find any real zeroes/roots. This can reduce the complexity of the polynomial until it is more manageable.
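As a quick sketch of this test, using a cubic that also appears in the practice problems below:

$$p(x) = x^3 - x^2 - 5x + 5 \ \ \to \ \ p(1) = 1 - 1 - 5 + 5 = 0$$

So $$(x-1)$$ is a factor, and dividing it out gives $$p(x) = (x - 1)(x^2 - 5)$$.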
You will see quadratics often in calculus. There are several techniques that you can use on quadratics. Check out the practice problems for examples.
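For example, a standard technique for $$x^2 + bx + c$$ is to look for two numbers whose product is $$c$$ and whose sum is $$b$$ (a quick sketch; this one is not among the practice problems):

$$x^2 + 7x + 12 = (x + 3)(x + 4), \ \ \text{since} \ 3 \cdot 4 = 12 \ \text{and} \ 3 + 4 = 7$$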
Unless otherwise instructed, factor these polynomials. Give your answers in simplified, completely factored form.
$$x^2 - 4x - 5$$
Problem Statement
Unless otherwise instructed, factor the polynomial $$x^2 - 4x - 5$$
Solution
### Freshmen Math Doctor - 2543 video solution
$$x^2 + 8x + 15$$
Problem Statement
Unless otherwise instructed, factor the polynomial $$x^2 + 8x + 15$$
Solution
### Freshmen Math Doctor - 2544 video solution
$$x^2 - 14x + 45$$
Problem Statement
Unless otherwise instructed, factor the polynomial $$x^2 - 14x + 45$$
Solution
### Freshmen Math Doctor - 2540 video solution
Solve $$x^2 = -11x - 10$$ by factoring.
Problem Statement
Solve $$x^2 = -11x - 10$$ by factoring.
Solution
### Freshmen Math Doctor - 2536 video solution
Solve $$x^2 - 13x + 36 = 0$$ by factoring.
Problem Statement
Solve $$x^2 - 13x + 36 = 0$$ by factoring.
Solution
### Freshmen Math Doctor - 2537 video solution
$$6x^4 - 18x^3 + 12x^2$$
Problem Statement
Unless otherwise instructed, factor $$6x^4 - 18x^3 + 12x^2$$
Solution
### Freshmen Math Doctor - 2541 video solution
$$x^2+4x - 12$$
Problem Statement
Unless otherwise instructed, factor the polynomial $$x^2+4x - 12$$
Solution
### 2653 video solution
$$3x^2 + 12x - 36$$
Problem Statement
Unless otherwise instructed, factor the polynomial $$3x^2 + 12x - 36$$
Solution
### 2654 video solution
$$3x^2 + 10x - 8$$
Problem Statement
Unless otherwise instructed, factor the polynomial $$3x^2 + 10x - 8$$
Solution
### 2655 video solution
$$8x^2 + 35x + 12$$
Problem Statement
Unless otherwise instructed, factor the polynomial $$8x^2 + 35x + 12$$
Solution
### 2662 video solution
$$6x^2 - 3x - 45$$
Problem Statement
Unless otherwise instructed, factor the polynomial $$6x^2 - 3x - 45$$
Solution
### 2663 video solution
Factor By Grouping (Secondary)
When it appears that you have groups of similar terms, try pairing them up, factoring each pair, and then checking whether the same factor appears in each group. This technique is best seen by example; see the quick sketch below and the practice problems.
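Here is the quick sketch (this polynomial is not among the practice problems): pair the first two terms and the last two terms, factor each pair, and pull out the common factor.

$$x^3 + 3x^2 + 2x + 6 = x^2(x + 3) + 2(x + 3) = (x + 3)(x^2 + 2)$$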
Unless otherwise instructed, factor these polynomials. Give your answers in simplified, completely factored form.
$$x^3 - x^2 - 5x + 5$$
Problem Statement
Factor $$x^3 - x^2 - 5x + 5$$
Solution
### Freshmen Math Doctor - 2545 video solution
$$x^3 - 3x^2 + 4x - 12$$
Problem Statement
Factor $$x^3 - 3x^2 + 4x - 12$$
Solution
### Freshmen Math Doctor - 2546 video solution
$$x^3 + 2x^2 - 5x - 10$$
Problem Statement
Unless otherwise instructed, factor the polynomial $$x^3 + 2x^2 - 5x - 10$$
Solution
### 2656 video solution
$$4x^3 - 8x^2 + 6x - 12$$
Problem Statement
Unless otherwise instructed, factor the polynomial $$4x^3 - 8x^2 + 6x - 12$$
Solution
### 2657 video solution
$$x^3 + 3x^2 - 4x - 12$$
Problem Statement
Unless otherwise instructed, factor the polynomial $$x^3 + 3x^2 - 4x - 12$$
Solution
### 2658 video solution
$$x^3 - 4x^2 + x + 6$$
Problem Statement
Unless otherwise instructed, factor the polynomial $$x^3 - 4x^2 + x + 6$$
Solution
### 2659 video solution
$$5v^3 - 2v^2 + 25v - 10$$
Problem Statement
Unless otherwise instructed, factor the polynomial $$5v^3 - 2v^2 + 25v - 10$$
Solution
### 2660 video solution
$$5r^4 - 7r^2s - 6s^2$$
Problem Statement
Unless otherwise instructed, factor the polynomial $$5r^4 - 7r^2s - 6s^2$$
Solution
### 2661 video solution
$$6xy - 9y - 10x -15$$
Problem Statement
Unless otherwise instructed, factor $$6xy - 9y - 10x -15$$
Solution
### 2664 video solution
$$15 - 5A^2 - 3B^2 + A^2B^2$$
Problem Statement
Unless otherwise instructed, factor $$15 - 5A^2 - 3B^2 + A^2B^2$$
Solution
### 2665 video solution
https://me.gateoverflow.in/598/gate2016-2-12 | # GATE2016-2-12
For the brake shown in the figure, which one of the following is TRUE?
1. Self energizing for clockwise rotation of the drum
2. Self energizing for anti-clockwise rotation of the drum
3. Self energizing for rotation in either direction of the drum
4. Not of the self energizing type
## Related questions
The forces $F_1$ and $F_2$ in a brake band and the direction of rotation of the drum are as shown in the figure. The coefficient of friction is $0.25$. The angle of wrap is $3\pi /2$ radians. It is given that $R$ = $1$ $m$ and $F_2$ = $1$ $N$. The torque (in $N$-$m$) exerted on the drum is _________ (a worked sketch for this question follows the list below)
A short shoe external drum brake is shown in the figure. The diameter of the brake drum is $500 \: mm$. The dimensions $a=1000 \: mm, \: b=500 \: mm$ and $c=200 \: mm$. The coefficient of friction between the drum and the shoe is $0.35$. The force ... shown in the figure. The drum is rotating anti-clockwise. The braking torque on the drum is ______ $N \cdot m$ (round off to two decimal places).
A disc clutch with a single friction surface has coefficient of friction equal to $0.3$. The maximum pressure which can be imposed on the friction material is $1.5$ $MPa$. The outer diameter of the clutch plate is $200$ $mm$ and its internal diameter is $100$ $mm$. Assuming uniform wear theory for the clutch plate, the maximum torque (in $N.m$) that can be transmitted is _______
The system shown in the figure consists of block A of mass $5$ $kg$ connected to a spring through a massless rope passing over pulley B of radius $r$ and mass $20$ $kg$. The spring constant $k$ is $1500$ $N/m$. If there is no slipping of the rope over the pulley, the natural frequency of the system is_____________ $rad/s$.
The rod $AB$, of length $1$ $m$, shown in the figure is connected to two sliders at each end through pins. The sliders can slide along $QP$ and $QR$. If the velocity $V_A$ of the slider at $A$ is $2$ $m/s$, the velocity of the midpoint of the rod at this instant is ___________ $m/s$.
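A worked sketch for the band-brake torque question above, assuming $F_1$ is the tight-side tension so that the capstan (belt-friction) relation applies:
$F_1 = F_2\, e^{\mu \theta} = 1 \times e^{0.25 \times 3\pi/2} \approx 3.25 \ N$, and the torque on the drum is $T = (F_1 - F_2) R \approx (3.25 - 1) \times 1 \approx 2.25$ $N$-$m$.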
http://math.stackexchange.com/questions/153619/eigenvalues-of-a-quasi-stochastic-matrix | Eigenvalues of a quasi-stochastic matrix
Quasi-stochastic
In order not to make the title too long I used the term quasi-stochastic with this meaning: a quasi-stochastic matrix $Q$ is a square matrix $Q = (q_{i,j}) \in \mathbb{R}^{n \times n}$ where all entries lie in the interval $q_{i,j} \in ]0,1[$ and where every row sums to a number strictly lower than 1:
$$\sum_{j=1}^n q_{i,j} < 1$$
This last condition is what makes the matrix quasi-stochastic.
The question: specific case
Consider a quasi-stochastic matrix $Q$ and its eigenvalues $\lambda_i$. I would like to understand whether the following bound holds:
$$|\lambda_i| < 1, \forall i = 1 \dots n$$
Or, less strictly
$$|\lambda_i| \leq 1, \forall i = 1 \dots n$$
Rationale
There is a reason why I ask. Consider a Markov chain and its transition probability matrix $P$: if the chain is ergodic, then the matrix has all its eigenvalues inside the unit circle, with one eigenvalue on its boundary. If I create a reduced version of this matrix, what happens to the eigenvalues? Using Matlab I ran some experiments and observed that all the eigenvalues lie strictly inside the circle: the eigenvalue that was $\lambda_1 = 1$ falls to $\lambda_1 < 1$. But how can I get mathematical evidence of this?
You may want to enclose the text in ** ** rather than precede it with ##. – user17762 Jun 4 '12 at 4:58
Ah, yeah, thank you :) – Andry Jun 4 '12 at 5:09
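For reference, the strict bound also follows in one line from the Gershgorin circle theorem: every eigenvalue $\lambda$ lies in a disc centered at some diagonal entry $q_{i,i}$ with radius $\sum_{j \ne i} q_{i,j}$, and since all entries are positive, $$\lvert \lambda \rvert \leq q_{i,i} + \sum_{j \ne i} q_{i,j} = \sum_{j=1}^n q_{i,j} < 1$$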
Yes it is indeed true. Note that $$\lVert A \rVert_{\infty} = \text{ the maximum absolute row sum of the matrix}$$ In your case, the maximum absolute row sum of the matrix is the same as the maximum row sum of the matrix, which in turn is strictly less than $1$. More importantly, if $\lambda_k$ is any eigenvalue of the matrix, then we have that $$\dfrac1{\lVert A^{-1} \rVert} \leq \lvert \lambda_k \rvert \leq \lVert A \rVert, \text{ for all eigenvalues } \lambda_k$$ for any valid matrix norm. In particular, choosing the matrix norm as the $\infty$-norm, we get what you want.
https://mathematica.stackexchange.com/questions/191533/communitygraphplot-of-weighted-graph-node-too-far | # CommunityGraphPlot of weighted graph: node too far
I am playing with weighted Graph and CommunityGraphPlot, and I am considering the following example.
listWeights = {2, 1, 1, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1};
tab = {UndirectedEdge[3, 1], UndirectedEdge[3, 31], UndirectedEdge[3, 32], UndirectedEdge[3, 33], Table[UndirectedEdge[1, i + 1], {i, 1, 29}]} // Flatten
g = Graph[tab, EdgeWeight -> Normalize@listWeights]
which produces the following graph:
When I use the CommunityGraphPlot to find the communities
CommunityGraphPlot[g, ImageSize -> Full, VertexLabels -> "Name"]
I get
Question: why is node 2 plotted so far away? You can see from the code that it has an unnormalized weight of 2, so I was expecting it to be closest to node 1.
• Please show a minimal example. Do the weights matter in any way? – Szabolcs Feb 14 at 9:21
• @Szabolcs thanks for the comment. With minimal examples I don't see this behaviour; otherwise I would have posted the most minimal one :) The weights should matter, no? For example, if I set some weights to 0, then I get a different community plot, where the nodes that have 0 weight to node 1 form a separate community. – apt45 Feb 14 at 9:23
• "The weights should matter, no?" Have you actually tried it, or are you just assuming this? I just tried omitting them and it nothing changed. – Szabolcs Feb 14 at 9:24
• @Szabolcs try to set listWeights = {0, 1, 1, 0, 0, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1}; – apt45 Feb 14 at 9:27
• Just be cautious, test instead of just assuming, and if in doubt, ask Wolfram Support ... I have mixed experiences with asking them about this specific topic, and I am very frustrated by the lack of proper documentation on it. If you have questions about IGraph/M, you can ask me (perhaps use the IGraph/M chatroom) – Szabolcs Feb 14 at 9:50
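A minimal side-by-side check of whether the edge weights change the communities (a sketch using only the symbols already defined in the question):

(* Same edge list, with and without weights *)
gUnweighted = Graph[tab];
gWeighted = Graph[tab, EdgeWeight -> Normalize@listWeights];
Row[{CommunityGraphPlot[gUnweighted, VertexLabels -> "Name"],
     CommunityGraphPlot[gWeighted, VertexLabels -> "Name"]}]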
http://gianpaologonzalez.com/vlt5j98m/47c220-who-invented-ac-current
Who invented AC current
, it is more practical to use a time averaged power (where the averaging is performed over any integer number of cycles). The skin depth is the thickness at which the current density is reduced by 63%. The first alternator to produce alternating current was a dynamo electric generator based on Michael Faraday's principles constructed by the French instrument maker Hippolyte Pixii in 1832. {\displaystyle V_{\text{peak}}} A common source of DC power is a battery cell in a flashlight. Below an AC waveform (with no DC component) is assumed. V If coils are added opposite to these (60° spacing), they generate the same phases with reverse polarity and so can be simply wired together. {\displaystyle V_{\text{pp}}} Transmission with high voltage direct current was not feasible in the early days of electric power transmission, as there was then no economically viable way to step down the voltage of DC for end user applications such as lighting incandescent bulbs. It measures direct and alternating current with accuracies of from 0.1 to 0.25 percent. Patenting Alternating Current After studying alternating current for a number of years, Charles Steinmetz patented a "system of distribution by alternating current" (A/C power), on January 29, 1895. 25 Hz industrial customers still existed as of the start of the 21st century. The most famous of the three visionary men, Edison, developed the world’s first practical light bulb in the late 1870s, then began building a system for producing and distributing electricity so businesses and homes could use his new invention. The abbreviations AC and DC are often used to mean simply alternating and direct, as when they modify current or voltage.[1][2]. A bipolar open-core power transformer developed by Lucien Gaulard and John Dixon Gibbs was demonstrated in London in 1881, and attracted the interest of Westinghouse. represents a load resistance. A coaxial cable has a conductive wire inside a conductive tube, separated by a dielectric layer. The first centrifugal refrigeration machine invented by Willis H. Carrier, the father of air conditioning, is pictured in Syracuse, New York in 1922. However, the idea of using evaporated water — … In 1887 and 1888 Tesla had an experimental shop at 89 Liberty Street, New York, and there he invented the induction motor. Alternating current reverses direction a certain number of times per second -- 60 in the U.S. -- and can be converted to different voltages relatively easily using a transformer. V {\displaystyle \sin(x)} Wire constructed using this technique is called Litz wire. This is because the acceleration of an electric charge in an alternating current produces waves of electromagnetic radiation that cancel the propagation of electricity toward the center of materials with high conductivity. . [6] In 1878, the Ganz factory, Budapest, Hungary, began manufacturing equipment for electric lighting and, by 1883, had installed over fifty systems in Austria-Hungary. peak ) The RMS voltage is the square root of the mean over one cycle of the square of the instantaneous voltage. FACT CHECK: We strive for accuracy and fairness. This significantly reduces the risk of electric shock in the event that one of the live conductors becomes exposed through an equipment fault whilst still allowing a reasonable voltage of 110 V between the two conductors for running the tools. Harmonics can cause neutral conductor current levels to exceed that of one or all phase conductors. 
These currents typically alternate at higher frequencies than those used in power transmission. A third wire, called the bond (or earth) wire, is often connected between non-current-carrying metal enclosures and earth ground. In 1855, he announced that AC was superior to direct currentfor electrotherapeutic trigg… [13][14] When employed in parallel connected electric distribution systems, closed-core transformers finally made it technically and economically feasible to provide electric power for lighting in homes, businesses and public spaces. Sidney Licht, New Haven: E. Licht, 1967, Pp. Paper: On the Law of Hysteresis by Chas. P {\displaystyle P_{\rm {w}}} Alternating current (AC) is an electric current which periodically reverses direction and changes its magnitude continuously with time in contrast to direct current (DC) which flows only in one direction. In the U.S., William Stanley, Jr. designed one of the first practical devices to transfer AC power efficiently between isolated circuits. [30] In 1891, a second transmission system was installed in Telluride Colorado. [12], The ZBD patents included two other major interrelated innovations: one concerning the use of parallel connected, instead of series connected, utilization loads, the other concerning the ability to have high turns ratio transformers such that the supply network voltage could be much higher (initially 1400 V to 2000 V) than the voltage of utilization loads (100 V initially preferred). In 1855, he announced that AC was superior to direct current for electrotherapeutic triggering of muscle contractions. Direct current, abbreviation DC, flow of electric charge that does not change direction. ) The other essential milestone was the introduction of 'voltage source, voltage intensive' (VSVI) systems'[18] by the invention of constant voltage generators in 1885. The AC resistance often is many times higher than the DC resistance, causing a much higher energy loss due to ohmic heating (also called I2R loss). In 1893 he designed the first commercial three-phase power plant in the United States using alternating current—the hydroelectric Mill Creek No. Steinmetz retired as an engineer from General Electric to teach electrical … or Rather than using instantaneous power, Information signals are carried over a wide range of AC frequencies. 1892. R From the 1920s on, research continued on applying thyratrons and grid-controlled mercury arc valves to power transmission. READ MORE: When Thomas Edison Turned Night into Day. x Litz wire is used for making high-Q inductors, reducing losses in flexible conductors carrying very high currents at lower frequencies, and in the windings of devices carrying higher radio frequency current (up to hundreds of kilohertz), such as switch-mode power supplies and radio frequency transformers. × In the autumn of 1884, Károly Zipernowsky, Ottó Bláthy and Miksa Déri (ZBD), three engineers associated with the Ganz Works of Budapest, determined that open-core devices were impractical, as they were incapable of reliably regulating voltage. PSHS has 14 other campuses around the country. Alternating current systems can use transformers to change voltage from low to high level and back, allowing generation and consumption at low voltages but transmission, possibly over great distances, at high voltage, with savings in the cost of conductors and energy losses. As written above, an alternating current is made of electric charge under periodic acceleration, which causes radiation of electromagnetic waves. 
Electric chair (using AC) 70. Two years later, Tesla, a young Serbian engineer, immigrated to America and went to work for Edison. Off-shore, military, textile industry, marine, aircraft, and spacecraft applications sometimes use 400 Hz, for benefits of reduced weight of apparatus or higher motor speeds. 1-70. − name designation given to this first commercial AC transformer. [5] Alternating current technology was developed further by the Hungarian Ganz Works company (1870s), and in the 1880s: Sebastian Ziani de Ferranti, Lucien Gaulard, and Galileo Ferraris. This conductor provides protection from electric shock due to accidental contact of circuit conductors with the metal chassis of portable appliances and tools. This measure helps to partially mitigate skin effect by forcing more equal current throughout the total cross section of the stranded conductors. {\displaystyle V_{\text{P-P}}} P-P Since the maximum value of Three current waveforms are produced that are equal in magnitude and 120° out of phase to each other. You can also thank Tesla for the ability to change the channel without having to get … Westinghouse also received an important contract to construct the AC generators for a hydro-electric power plant at Niagara Falls; in 1896, the plant started delivering electricity all the way to Buffalo, New York, 26 miles away. Audio and radio signals carried on electrical wires are also examples of alternating current. The two generators (42 Hz, 550 kW each) and the transformers were produced and installed by the Hungarian company Ganz. Instead, fiber optics, which are a form of dielectric waveguides, can be used. Until this time electricity had been generated using direct current … Except for his earlier work in low frequency ac and an occasional venture into incidental fields, high-frequency researches were to remain the dominant objective of his inventive career. READ MORE: 6 Brilliant Tesla Inventions That Never Got Built. Before the Alternating Current Electrical Supply (AC for short) was invented its predecessor the Direct Current Electrical Supply (DC for short) was used. The earliest recorded practical application of alternating current is by Guillaume Duchenne, inventor and developer of electrotherapy. Nikola Tesla experimented with electrical resonance and studied various lighting systems. History. V © 2020 A&E Television Networks, LLC. The achievement was regarded as the unofficial end to the War of the Currents, and AC became dominant in the electric power industry. The voltage delivered to equipment such as lighting and motor loads is standardized, with an allowable range of voltage over which equipment is expected to operate. Westinghouse won the contract to supply electricity to the 1893 World’s Fair in Chicago—beating out rival General Electric, which was formed in 1892 by a merger involving Edison’s company—and the expo became a dazzling showcase for Tesla’s AC system. Consumer voltages vary somewhat depending on the country and size of load, but generally motors and lighting are built to use up to a few hundred volts between phases. Open-core transformers with a ratio near 1:1 were connected with their primaries in series to allow use of a high voltage for transmission while presenting a low voltage to the lamps. Direct current was supplanted by alternating current (AC) for common commercial power in the late 1880s because it was then uneconomical to transform it to the high voltages needed for long-distance transmission. 
For low to medium frequencies, conductors can be divided into stranded wires, each insulated from the others, with the relative positions of individual strands specially arranged within the conductor bundle. The use of lower frequencies also provided the advantage of lower impedance losses, which are proportional to frequency. Ultimately, however, Edison failed in his efforts to discredit AC. The initials for each of the three men's last names formed the Z.B.D. The two feuding geniuses waged a "War of Currents" in the 1880s over whose electrical system would power the world — Tesla's alternating-current (AC) system or … Licht, Sidney Herman., "History of Electrotherapy", in Therapeutic Electricity and Ultraviolet Radiation, 2nd ed., ed. A direct current flows uniformly throughout the cross-section of a uniform wire. DC electricity is easier to store, however, and it's better for small applications involving delicate electronics and thin wire. Coaxial cables often use a perforated dielectric layer to separate the inner and outer conductors in order to minimize the power dissipated by the dielectric. In the late 19th century, three brilliant inventors, Thomas Edison, Nikola Tesla and George Westinghouse, battled over which electricity system—direct current (DC) or alternating current (AC)–would become standard. (1890), Mathematical descriptions of the electromagnetic field, "Pixii Machine invented by Hippolyte Pixii, National High Magnetic Field Laboratory", "Electrostatics and Electrodynamics at Pest University in the Mid-19th Century", IEEE Transactions of the American Institute of Electrical Engineers, "Hungarian Inventors and Their Inventions", "Ottó Bláthy, Miksa Déri, Károly Zipernowsky", id=qQMOPjUgWHsC&pg=PA138&lpg=PA138&dq=tesla+motors+sparked+induction+motor&source=bl&ots=d0d_SjX8YX&sig=sA8LhTkGdQtgByBPD_ZDalCBwQA&hl=en&sa=X&ei=XoVSUPnfJo7A9gSwiICYCQ&ved=0CEYQ6AEwBA#v=onepage&q=tesla%20motors%20sparked%20induction%20motor&f=false Evolving Technology and Market Structure: Studies in Schumpeterian Economics, Professor Mark Csele's tour of the 25 Hz Rankine generating station, The Frequency Changer Era: Interconnecting Systems of Varying Cycles, https://en.wikipedia.org/w/index.php?title=Alternating_current&oldid=995205991, Short description is different from Wikidata, All articles with specifically marked weasel-worded phrases, Articles with specifically marked weasel-worded phrases from December 2011, Creative Commons Attribution-ShareAlike License, This page was last edited on 19 December 2020, at 20:20. The advantage is that lower rotational speeds can be used to generate the same frequency. During their bitter dispute, dubbed the War of the Currents, Edison championed the direct-current system, in which electrical current flows steadily in one direction, while Tesla and Westinghouse promoted the alternating-current system, in which the current’s flow constantly alternates. This phenomenon is called skin effect. In the absence of a continuous current, there is no useful application of electricity. Three-phase electrical generation is very common. 2 Because waveguides do not have an inner conductor to carry a return current, waveguides cannot deliver energy by means of an electric current, but rather by means of a guided electromagnetic field. He opened his first power plant, in New York City, in 1882. POTS telephone signals have a frequency of about 3 kHz, close to the baseband audio frequency. Their AC systems used arc and incandescent lamps, generators, and other equipment.[8]. 
t After his paper he is hired by General Electric Company and joins forces with Elihu Thomson and William Stanley. [9] In both designs, the magnetic flux linking the primary and secondary windings traveled almost entirely within the confines of the iron core, with no intentional path through air (see toroidal cores). What new invention does Edison create to prove that AC current is dangerous? Decker's design incorporated 10 kV three-phase transmission and established the standards for the complete system of generation, transmission and motors used today. For larger installations all three phases and neutral are taken to the main distribution panel. Image Source/Getty Images In the late 19th century, three brilliant inventors, Thomas Edison, Nikola Tesla and George Westinghouse, battled … The original Niagara Falls generators were built to produce 25 Hz power, as a compromise between low frequency for traction and heavy induction motors, while still allowing incandescent lighting to operate (although with noticeable flicker). NIKOLA TESLA Nikola Tesla invented the induction motor with rotating magnetic field that made unit drives for machines feasible and made AC power transmission an economic necessity. peak For such frequencies, the concepts of voltages and currents are no longer used. Later in 1888, the Edison research facility hired inventor Harold Brown. . p {\displaystyle +V_{\text{peak}}} Instead, he hired Tesla to work on his direct current … Depending on the frequency, different techniques are used to minimize the loss due to radiation. The simplest way is to use three separate coils in the generator stator, physically offset by an angle of 120° (one-third of a complete 360° phase) to each other. In addition to this mechanical feasibility, electrical resistance of the non-ideal metals forming the walls of the waveguide causes dissipation of power (surface currents flowing on lossy conductors dissipate power). Alternating current is used to transmit information, as in the cases of telephone and cable television. Inventor Harold Brown installations all three phases and neutral are taken to the great inventor, rectangular. Becomes unacceptably large into the modern practical three-phase form by Mikhail Dolivo-Dobrovolsky, Charles Lancelot... Hz industrial customers still existed as of the waveguides, can be used with a transformer product. Designation given to this first commercial AC transformer protection from electric shock due to radiation was set in on! Current throughout the total cross section of the conductor is reduced by 63 % of electrotherapy by forcing equal... Various lighting systems the DC current, and AC became dominant in the electric chair thyratrons and mercury. Rapidly in the electric chair is the neutral/identified conductor if present the latter part of Tesla ’ objection... Hungarian engineers: Károly Zipernowsky, Ottó Bláthy, and other cable-transmitted information currents alternate. Flow on the Law of Hysteresis by Chas Night into Day City, in 1882 AC generator made possible development! Circuits may lead off by 63 % read more: 6 Brilliant Tesla Inventions that Never Got Built as. Balanced equally among the phases, no current flows through the neutral current will who invented ac current exceed the of. Years of the 21st century and incandescent lamps, generators, and generators with commutators earth.! Dc power is a battery cell in a cable, forming a twisted.! 
The baseband audio frequency it significant advantages over early AC systems between non-current-carrying metal and... Are accompanied ( or earth ) wire, is often connected between non-current-carrying metal enclosures and earth.! Low voltage load was a series circuit be transmitted, so they are feasible only at microwave.. In a flashlight Night into Day Got Built thin wire ] pixii later added a commutator to his device produce. Efficient than the open-core bipolar devices of Gaulard and Gibbs mercury arc valves power... Depending on the frequency, different waveforms are used to minimize the loss due radiation. Called the bond ( or caused ) by alternating voltages for larger installations all three and... Notions about AC to the wire 's center, toward its outer surface Ottó! The load on a three-phase system is balanced equally among the phases, no flows! Generators with commutators is doubled ), the power lost to this problem does create. Store, however, low frequency also causes noticeable flicker in arc lamps and incandescent lamps during early! Miksa Déri measure helps to partially mitigate skin effect by forcing more equal current throughout total. Set in operation on 28 August 1895 before his Death to store, however Edison... Skin effect by forcing more equal current throughout the cross-section of the current and the power lost to first. Protection from electric shock due to radiation with Elihu Thomson and William,. In 1888, he hired Tesla to work on his direct current … Who AC! To travel to New York and present his ideas to Thomas Edison to! Which periodically reverses direction, transmission and established the standards for the same frequency alternator an... Each other frequency of about 3 kHz, close to the great inventor, but Edison was n't interested core... First commercial three-phase power Plant were among the phases, no current flows through the neutral current will exceed! 20 ] Ottó Bláthy, and generators with commutators easier to store however. Advantage is that lower rotational speeds can be used with a balanced system! Voltages and percentage tolerance vary in the United States using alternating current—the Hydroelectric Creek., his design, called an induction coil, was an important part of the waveguides, those surface do. Mean over one cycle of the first practical devices to transfer AC power systems Oliver,! The two generators ( 42 Hz, 550 kW each ) and the original Niagara Falls Adams power,. He designed the first practical devices to transfer AC power efficiently between isolated circuits 550 kW )! No DC component ) is assumed one quarter to be invented by three Hungarian:! Voltage requires less loss-producing current than for the complete system of generation, and... Not have these drawbacks, giving it significant advantages over early AC systems used arc and incandescent lamps generators! Formed the Z.B.D dielectric layer who invented ac current lived in luxury hotels before his Death the.. System of generation, transmission and established the standards for the complete system of generation, transmission, distribution and! Hydroelectric power Plant in the periphery of conductors, the 28-year-old Tesla decided to to... Written above, an alternating current is by Guillaume Duchenne, inventor and developer of electrotherapy widely... Unacceptably large of a higher voltage requires less loss-producing current than for the complete system of,... Information over the air of electricity ( 42 Hz, 550 kW each and! 
Alternating current is an electric current which periodically reverses direction, in contrast to direct current (abbreviated DC), a flow of electric charge that does not change direction; a common source of DC power is a battery cell in a flashlight. Direct current flows uniformly throughout the cross-section of a conductor, but alternating current tends to flow in the periphery of conductors: the current is forced away from the wire's center, toward its outer surface, and at one skin depth the current density is reduced by 63%. This skin effect raises the effective resistance, and hence the ohmic losses, of the conductor as the frequency increases. Dividing the conductor into thin insulated strands twisted together (litz wire) helps to partially mitigate the skin effect by forcing a more equal current throughout the total cross section of the conductor.

The power transmitted is proportional to the product of the current and the voltage, while the power lost as heat in the wires is proportional to the square of the current. Transmitting the same power at a higher voltage therefore requires less loss-producing current, and because an alternating voltage can be stepped up and down with a transformer, this gave AC significant advantages over early systems for power transmission. Direct current transmission became practical later, by applying thyratrons and grid-controlled mercury arc valves; the mercury arc valve had been invented by Peter Cooper Hewitt in 1902. In AC circuit analysis a sinusoidal voltage (with no DC component) is usually assumed, oscillating between $+V_{\text{peak}}$ and $-V_{\text{peak}}$; the useful working measure is the root-mean-square (RMS) value, the square root of the mean over one cycle of the square of the instantaneous voltage, which for a sinusoid equals $V_{\text{peak}}/\sqrt{2}$, and the instantaneous power delivered is $v(t)^2/R$, where $R$ represents a load resistance. Standard power utilization voltages and percentage tolerances vary in the different mains power systems found in the world.
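To make the loss argument concrete, here is a small Python check (the delivered power and line resistance are made-up illustrative numbers, not values from the article):

P = 10_000.0   # power to deliver, in watts (illustrative)
R_line = 0.5   # line resistance, in ohms (illustrative)
for v in (230.0, 2300.0):
    i = P / v                   # current needed to deliver P at this voltage
    print(v, i, i**2 * R_line)  # ohmic loss in the line is proportional to I^2
# Raising the voltage 10x cuts the current 10x and the I^2*R loss 100x.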
The simplest way to generate three-phase power is to use three separate coils in the generator stator, physically offset by an angle of 120° (one-third of a complete 360° phase) to each other, producing three current waveforms that are equal in magnitude and 120° out of phase with each other. Machines with higher pole counts behave the same way (a 12-pole machine would have 36 coils, at 10° spacing), with the advantage that lower rotational speeds can be used to generate the same frequency. If the load on a three-phase system is balanced equally among the phases, no current flows through the neutral point; with worst-case non-linear loads, however, the neutral current can exceed a phase current, and an oversized neutral may be required. For larger installations all three phases and neutral are taken to the main distribution panel, and at the main service panel both single- and three-phase circuits may lead off. A third wire, called the bond (or earth) wire, is often connected between non-current-carrying metal enclosures and earth ground; this provides protection from electric shock due to accidental contact of circuit conductors with the metal chassis of portable appliances and tools.

Alternating current is also used to transmit information, as in the cases of telephone (at baseband audio frequencies) and cable television. To reduce interference, the signal-carrying wires are twisted together in a cable, forming a twisted pair. Coaxial cable has a conductive wire inside a conductive tube, separated by a dielectric layer; as the frequency rises its power loss eventually becomes unacceptably large, and waveguides take over. Waveguides are similar to coaxial cables, as both consist of tubes, with the biggest difference being that the waveguide has no inner conductor and has dimensions comparable to the wavelength of the alternating current to be transmitted, so waveguides are feasible only at microwave frequencies; for such frequencies, the concepts of voltages and currents are no longer used. Even in waveguides, the electrical resistance of the non-ideal metals forming the walls causes dissipation of power (surface currents flowing on lossy conductors dissipate power), and those surface currents flow on the inner walls of the waveguides. Depending on the frequency, different techniques are used to minimize the loss due to radiation. Alternating current circuit theory developed rapidly in the latter part of the 19th and the early 20th century, and where sinusoids do not fit the application, other waveforms are used, such as triangular waves or square waves; instruments such as the electrodynamic ammeter, which uses a moving coil, are used to measure the resulting currents.
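A quick numerical illustration of two facts above, that the RMS value of a sinusoid is its peak divided by sqrt(2) and that three balanced phases offset by 120° sum to zero at the neutral point (a minimal sketch; the peak voltage is an illustrative number):

import numpy as np

t = np.linspace(0.0, 1.0, 10_000, endpoint=False)   # one full cycle
v_peak = 325.0                                       # illustrative peak voltage
phases = [v_peak * np.sin(2*np.pi*t - k * 2*np.pi/3) for k in range(3)]
print(np.sqrt(np.mean(phases[0]**2)), v_peak / np.sqrt(2))  # both ~229.8
print(np.max(np.abs(phases[0] + phases[1] + phases[2])))    # ~0: balanced neutral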
https://tex.stackexchange.com/questions/407751/make-a-border-around-a-circled-image/407752 | # Make a border around a circled image [duplicate]
I am trying to crop a picture so that it is shaped like a circle and additionally put a border around it. So far I have figured out how to crop the image into a circle using \clip:
\documentclass[12pt]{article}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}
\begin{scope}
\clip [rounded corners=.6cm] (0,0) rectangle coordinate (centerpoint) (1.2,1.2cm);
\node [inner sep=0pt] at (centerpoint) {\includegraphics[width=1.2cm, height=1.2cm]{example-image-b}};
\end{scope}
\end{tikzpicture}
\end{document}
Effect:
How do I make a red-colored border (circle-shaped) around the picture?
I thought about painting one image on top of another, but the circle and the image get arranged side by side. I am a beginner so any clue will be appreciated.
You can use a circle outside of the clipped scope. If the line interferes with your picture you might want to increase the radius a bit (the second part of the draw circle command).
\documentclass[12pt]{article}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}
\begin{scope}
\clip [rounded corners=.6cm] (0,0) rectangle coordinate (centerpoint) (1.2,1.2cm);
\node [inner sep=0pt] at (centerpoint) {\includegraphics[width=1.2cm, height=1.2cm]{example-image-b}};
\end{scope}
\draw[red] (.6cm,.6cm) circle (.6cm);
\end{tikzpicture}
\end{document}
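If the red line still touches the edge of the photo, the radius bump mentioned above is all you need; for example (my values, not from the original answer), replacing the last \draw line with

\draw[red, very thick] (.6cm,.6cm) circle (.65cm);

draws a slightly larger, thicker ring that stays clear of the image.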
I intentionally provide a more general solution, because a circle is easily obtained from an ellipse. This solution is written with PSTricks and must be compiled either with xelatex or with the latex-dvips-ps2pdf sequence.
\documentclass[pstricks]{standalone}
\usepackage{graphicx}
\newsavebox\temp
\savebox\temp{\includegraphics[scale=1]{example-image-a}}
\def\N{5}
\psset
{
xunit=.5\dimexpr\wd\temp/\N\relax,
yunit=.5\dimexpr\ht\temp/\N\relax,
}
\begin{document}
\begin{pspicture}[linewidth=2pt,linecolor=red](-\N,-\N)(\N,\N)
\psclip{\psellipse(\N,\N)}
\rput(0,0){\usebox\temp}
\endpsclip
\psellipse(\N,\N)
\end{pspicture}
\end{document}
The skins option of tcolorbox simplifies this task a lot:
\documentclass{article}
\usepackage[skins]{tcolorbox}
\begin{document}
\begin{tikzpicture}
\node[circle,draw, very thick, color=red, minimum size=5cm,
fill overzoom image=example-image]{};
\end{tikzpicture}
\end{document}
https://mailman.fsfeurope.org/pipermail/discussion/2001-June/001199.html | # FSF-CHINA Activity Report
Loic Dachary loic at gnu.org
Sat Jun 23 15:37:17 UTC 2001
[ I send this report on behalf of Hong Feng (hongfeng at gnu.org), who is
building FSF China, and with his permission. I see it as very important
that Free Software Foundation friends who are expanding the Free
Software Foundation in Germany, Sweden, Italy, Spain, Portugal, the
United Kingdom, France and Japan are aware of the work being done
in China.
FSF China does not yet have a web site, but you'll find a lot of content
related to current Free Software activities at http://www.rons.net.cn/. ]
FSF-CHINA Activity Report
THE FSF-CHINA, or in full the "Free Software Foundation, China Academy",
was planned on May 28, 2000 in Wuhan, China, by Dr. Richard Stallman,
president of FSF, Inc. and founder of the GNU Project, and Hong Feng,
president and CEO of RON's Datacom Co., Ltd.
Since then, we have done a lot of feasibility study, mainly on the
legality of setting up FSF-CHINA in China. By the beginning of March
2001, our lawyer informed us that it is legal to adopt the FSF by-law
terms as FSF-CHINA's, so we quickly drafted the by-laws of FSF-CHINA
in Chinese and started the registration process.
On March 05, 2001, I launched the MNM Project, which stands for "MNM's
Not Millions". It includes three sub-projects: to establish the FSF,
China Academy; to publish hundreds of free books; and to organize
hundreds of thousands of programmers and engineers to support the free
software movement in China. The mission of the MNM Project is to
support the free software community in China.
Many people have asked me about the relationship among FSF-CHINA,
RONSNET, and RON's Datacom Co., Ltd., so here is a clear explanation.
FSF-CHINA is a non-profit organization which systematically studies
free software philosophy and technology, and it leads a group of free
software R&D projects (see the sections below). RONSNET is a registered
logo of RON's Datacom Co., Ltd. in China for the web site
"www.rons.net.cn", whose single mission is to support the free software
community in China; RONSNET is supported by RON's Datacom Co., Ltd.
with money and volunteer labor. FSF-CHINA is completely independent of
RON's Datacom Co., Ltd., though RON's Datacom Co., Ltd. has donated
money and other resources to FSF-CHINA. Eventually, FSF-CHINA will run
another web site, with the domain "fsf.ac.cn", when that becomes
necessary.
FSF-CHINA is looking for members who are qualified to join the board
of directors. At present there are three people, Hong Feng, Yan Feng
(an assistant professor in a university law department), and Zheng
YongGang from Shanghai, whom RMS accepted. RMS is the honorary
president of FSF-CHINA.
As there are so many misunderstandings in China about free software
and its philosophy, FSF-CHINA has done a lot of work to spread the
truth about it, including why free software fights the proprietary
software mechanism and what the difference is between free software
and open source software. We have printed RMS's article "GNU Project
and Free Software Movement" in thousands of copies. Our effort has
started to pay off: most mass media in China have now recognized these
differences, and journalists have become careful when they use the
related terms.
We have also arranged a lot of speeches about free software. Besides
the series of speeches RMS gave in May-June 2000, we invited
Mr. Robert Chassell to visit China last August, and he gave speeches
in Beijing and Xi'an.
Other Activities
----------------
1. Chain schools.
We have started the work to set up a chain school in China, with more
than 10 sites under preparation. The training courses cover how to use
free software tools like GNU Emacs, GCC, GDB, CVS, etc. I see this as
the first step toward our "Hackerdom Training" course series;
eventually we will hold an exam, and the students who pass it will
acquire a "Hackerdom" certificate or diploma. I believe this is the
right way to organize the "hundreds of thousands of programmers" in
China to develop more free software, which is a part of the MNM
Project goals.
We welcome other training organizations to join the chain schools, as
long as they agree to teach the free contents from us. However, if
they want to hold the "Hackerdom" certificate exam in the future, they
need to sign a franchise agreement with us, to get permission to place
the RONSNET logo and to get technical support from RONSNET, including
teachers' training, textbooks at a discount, web-based courseware,
answers for students from professors, etc. (a little like McDonald's
fast food chain restaurants); 10% of the tuition per student will be
paid as a collection to support FSF-CHINA.
(FSF-CHINA accepts other income and donations, and uses the money to
develop more free software and free documentation.)
2. RONSNET Hospital

On June 21, 2001, we set up a hospital on RONSNET for free software
companies. Running a business is not simple, and running a free
software company in China at present may be more complicated still;
not every programmer has the talent to run a company successfully, so
they need help. If a free software company wants help, it can ask the
RONSNET Hospital; and if a programmer has a good idea and is going to
set up a firm to do free software business, we welcome him/her to take
an examination in our hospital before doing so.
The hospital serves free software companies and individuals in China;
the service includes, but is not limited to, the following items:
* Understanding Free Software
* Organizing a Company
* Board of Directors
* Sales & Marketing
* Growing & Managing a Business
* Going Global Operations
* Tracking & Controlling Costs
* Forecasting Budgets
* Analyzing Financial Statements
* Managing Investment
We will invite experts in technology and management to act as the
doctors in this hospital. 10% of the diagnosis and examination income
will be transferred to FSF-CHINA to support its R&D.
3. Publishing
--------------
We have started the translation of the GNU manuals from English into
Chinese. So far, Programming in Emacs Lisp: An Introduction has been
published, and the other main manuals are still being translated,
proofread, or given final editing. We are going to publish the Chinese
manuals on RONSNET, and if a local publisher wants to print them as
paper copies, we welcome them to do that and hope they will donate
some money to us; the donation will be fully transferred to FSF-CHINA.
We are also preparing the Free Software Magazine. On RONSNET we have
opened a link dedicated to this online magazine, and when enough
articles have accumulated, we are going to publish it in paper copies.
I have discussed with Linux Journal the possibility of exchanging
articles, and I got a positive result. I am also talking with the
former president of SuSE Inc. in the States, who has a lot of
experience with journal/magazine circulation services; he could help
us distribute the FSM worldwide. At present we have to invest money in
the FSM, as we have no advertising income yet, but the sooner the
advertising income appears, the better.
I have also talked to a lot of authors, so that they might FDL their
works and we could collect them into our MNM free book catalog. I
think I have made limited progress: some authors agreed to do so, and
some books planned for publication as proprietary books were turned
into free books after my persuasion (for example, the Zope Book from
www.zope.org is a free book now). RMS helped a lot by explaining the
terms of the GNU FDL to the authors in this work.
We are also looking for authors to create new free books. This will
take more time, but we will keep on doing it. When China joins the WTO
by the end of this year, it might be possible for us to register a
publishing house; by then we could print the books and sell them
directly, and all the income will be transferred to FSF-CHINA.
FSF-CHINA Projects Overview
---------------------------
1. Free Chinese Fonts
As there is no free Chinese font to use, I started a project to
design a set of free fonts, both PostScript and bitmap. By the
beginning of June, we had finished 30% of the 27,484 Chinese
characters, and we expect they can be put into use by the end of this
year. Adobe has expressed that they could help us pack them into CIDs.
RON's Datacom Co., Ltd. financed the development, and all the fonts
are created with free software. The fonts will be free to use by any
GPLed software. Because many hardware manufacturers and open source
vendors are hostile to the free software community, the fonts are
usable by them only under an additional agreement, and proprietary
software publishers must pay us to use the fonts in their software, as
we don't want to see our effort abused by them. As our PostScript
fonts are code, they can be covered by copyright.
2. POD with free software
When the fonts are ready, we will start the successor project, a POD
(print-on-demand) service network. We will set up a chain of POD
centers nationwide, or worldwide, and help the centers install our
free fonts on devices connected to the Internet, so that anyone over
the Internet can send his/her files in PS, PDF, DVI or SVG format to a
POD center and ask the center to print them out, bind them, pack them,
and deliver them to the user. The number of copies can be any number
greater than or equal to 1. POD is not a new concept, but all the POD
equipment today is proprietary; we need to change this situation, so
that everyone has the freedom to share information with the benefits
of paper copies.
To reach this freedom, there are a lot of technical problems to work
out, most of them coming from the hardware side, not the software
side. We will seek hardware manufacturers to cooperate with. This
July I will visit Russia to talk with a laser R&D institute about
solving the drilling problem (drilling is required after piling the
pages and before binding them). Laser beams can cut metal, so I think
the technique will work with paper, but the time interval of the laser
pulses needs to be controlled so as to avoid burning the paper. Once
the laser beams can be controlled by free software, we will have moved
a huge step forward.
3. MNM Office
As SGML/XML documents appear more and more, we have started a project
for a new office suite: not the same kind of office suite as GNOME
Office, but a new one. RMS hopes to develop it with GTK+, and I
agreed. It has a couple of components and will address enterprise
customers and powerful individual users.
4. Meta-kernel and Scheme Machine
There was the idea of the Lisp machine in the 1970s-1980s, i.e. the
programming language works as the operating system. After some
research, we think this idea could come back on stack-based chips.
Three months ago, when I finished the proofreading of the book (which
you have received), I started hacking on Scheme, as RMS told me Lisp
is the most powerful programming language, and Scheme is a modern
dialect of Lisp. (Scheme has had a dozen implementations since 1975;
the GNU Project has a Scheme implementation called Guile.)
As the Pansystems Workshop has years of research behind it, going
back to 1976, there are many results in systems science. One of them
is a new approach to designing non-von Neumann computers. The von
Neumann architecture came from the early thinking of Turing; roughly
speaking, it is a "Tower of Babel" approach, meaning the computer
approaches infinity very quickly and needs more and more memory as
more applications are being computed. It also needs an OS as a
"business manager" to administer the resources of the computer.
When it comes to some embedded systems, I think this approach does
not work very well; these embedded systems have no need to run an OS
at all.
From our research in Pansystems theory we drew a conclusion: we could
simply put the method of operating on the relations into the embedded
system, and implement that method as a set of basic software
procedures. These software procedures are like basic bricks, which can
invoke one another in a recursive and dynamically hierarchical way.
The prototype came from a Pansystems expression, which I can write in
TeX here: $B \subset A^n \times W$, where $B$ is the software system
(the set of software procedures), $A$ is the hardware set, and $W$ is
short for weight, a set of relations built upon the direct product of
the hardware set. In theory, we could also construct $W^m$, a direct
product of weights; but in practice, I think plain $W$ is enough, and
it is easy to implement on hardware.
The best chip architecture to implement the idea is neither
register-based CISC nor RISC, but stack-based chips, as the stack is
an ideal data structure for keeping the software procedures, and it is
easy to implement with today's VLSI hardware technology. As you know,
Scheme or Lisp may be the best programming language for handling
recursive lists, which can be represented by a stack at the hardware
level.
I have spent some time looking for the stack-based chip, and the key
requirement is low power consumption (some embedded systems, like
PDAs, have very strict power consumption requirements).
When the chip is eventually selected, our team will start to port a
Scheme implementation, say Guile, onto the chip from scratch, and
finally obtain a Scheme machine. It is like the old idea of the Lisp
Machine, in which the programming language stands in for an OS; I
would call our Scheme port a "meta-kernel". Our Pansystems research
indicates that 33 operators can be implemented to handle the various
relation transformations, so based on these 33 operators we can build
the software procedures to be handled by the stack-based chip.
When the meta-kernel is ready, then by designing and organizing the
software procedures (that is the work on the $W$ in the formula
above), applications like word processing, spreadsheets, XML parsing,
networking and so on can all be processed by the meta-kernel. If the
algorithms are well designed, the stack-based meta-kernel can be
expected to reach high computing speed, meeting the requirements of
some real-time applications on embedded systems.
Actually, stack-based computing is not a new conception: PostScript
is a Forth-like stack-based programming language, and PostScript-based
printers are already working worldwide. But the difference between the
meta-kernel and PostScript technology is this: PostScript is a
software implementation, not at the hardware level, and a PostScript
application is just one program processed on the computer, while on
the meta-kernel all applications will be transformed and processed by
the meta-kernel.
This project is open; anyone who is interested in it can join. And we
are looking for a company to finance it.
---------------------------
When the board of directors has 3-5 people, we will hold the first
meeting of the board. It is anticipated to be held in the second half
of this year.
1. Donations
RON's Datacom Co., Ltd. has donated RMB500,000 in cash to FSF-CHINA
(ca. USD60,240) since last May. As donations to FSF-CHINA are not tax
deductible yet, we need to work hard on this, so that more companies
like RON's Datacom can make donations. The money was used to cover
miscellaneous costs of promoting FSF-CHINA and supporting its
projects. I have asked an independent accountant to record all the
expenses.
In August-September, when the registration process is finished, we
will officially open an account to accept donations of money and
equipment. All companies and individuals that donate to FSF-CHINA will
be published on RONSNET, as long as they agree to be listed.
2. Hardware
RONSNET donated a server (Zenit): K7, 128MB RAM, 9GB HDD, which will
serve www.fsf.ac.cn soon.
----------------------------------------------------------------------
--
Loic Dachary http://www.dachary.org/ loic at dachary.org
24 av Secretan http://www.senga.org/ loic at senga.org
75019 Paris Tel: 33 1 42 45 09 16 loic at gnu.org
GPG Public Key: http://www.dachary.org/loic/gpg.txt
https://ask.libreoffice.org/en/question/67212/why-are-there-automatically-generated-text-boxes-on-top-of-the-already-existing-text-in-my-pdfs-files/ | # Why are there automatically generated text boxes on top of the already existing text in my pdfs files? [closed]
Hello,
I do not know why, but with some PDF files, LibreOffice generates small text boxes all over the document, which makes it very blurry (since they do not fit the text that is already there). How can I get rid of these boxes in bulk? There must be a more efficient way than deleting them one by one... Can I prevent the automatically generated boxes from appearing on other documents?
https://edgarhassler.com/posts/christmas/pandemic-christmas-tree-2020.html |
# Pandemic Christmas Tree 2020
This is the first of the planned Christmas trees I didn't get to discuss with Yukari, and it turns out grief is good at using those kinds of things to bring you down. But it was Kenta's second tree and the first one he was really capable of enjoying, so I was committed to making it something.
I wanted to build a tree that was ebullient, that could shout down any of the negative feelings this year and make it feel like we weren't stuck in quarantine waiting for COVID-19 to slither past. I think I ended up with something that was sort of beautiful but maybe too close to a pride float or an acid trip? I don't know - the presence of the pandemic and a toddler limited how much experimenting I was able to do. But the toddler seems to enjoy its splendor so I'm calling it a success.
I originally had the idea of heavily decorating the tree with flowers, but achieving the density and kinds of flowers I wanted was infeasible. Fake flowers are expensive and difficult to buy in bulk. I spent a fair amount of the year trawling Michael's clearance flower bins for material, but there never was enough of the right kind of fake flower to get what I had in mind. Eventually, I came to a compromise and ordered a large amount of relatively uniform but multicolored fake flowers through Wish.
Wish, if you don't know, is a place to go to get cheap things that, even at that price point, leave you disappointed.
Anyway, I went about hot-gluing fake flowers to flower wire segments to make something I could suspend in the tree, and I think I'm reasonably okay with how that turned out. If you consider how many flowers are on the tree, you can get a sense of just how badly burned my fingers ended up.
One big departure from the last few years is I switched to digital LED lights by Twinkly. Twinkly lights allow you to map them with your smart phone and apply dazzling patterns. If you want lights that amaze and stop working if Amazon east goes down or the manufacturer goes under, these are them lights!
The rainbow pattern is one that I liked with the flowers, and you can see it (if your browser supports webp) at that link. None of the patterns really show up well on video - something is lost in the process. But they are quite striking in real life.
Merry Christmas and Happy Holidays
-E
https://scicomp.stackexchange.com/questions/444/what-numerical-quadrature-to-choose-to-integrate-a-function-with-singularities | # What numerical quadrature to choose to integrate a function with singularities?
For example, I would like to numerically compute the $L^2$-norm of $\displaystyle u = \frac{1}{(x^2+y^2+z^2)^{1/3}}$ on some domain that includes zero. I tried Gauss quadrature and it fails: the result is far from the true $L^2$-norm, computed on the unit ball using spherical coordinates. Is there a good way to do this? This kind of problem often appears in finite element toy problems on domains with re-entrant corners. Thanks.
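For reference, the exact value on the unit ball follows from spherical coordinates (worked out here for concreteness): $\|u\|_{L^2}^2 = \int_0^1 r^{-4/3} \, 4\pi r^2 \, dr = 4\pi \cdot \tfrac{3}{5} = \tfrac{12\pi}{5}$, so $\|u\|_{L^2} = \sqrt{12\pi/5} \approx 2.746$; the singularity is integrable, since $4/3 < 3$.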
• If the origin is within the integration domain, may I suggest breaking up your integral and then transforming each one to spherical coordinates? Dec 16 '11 at 4:50
• I agree with JM -- if you know the location and structure of the singularities beforehand, you're better off using that structural information in writing the calls to your quadrature routines intelligently vs. feeding it to a numerical package and hoping that (a) it finds the singularities and (b) does the right thing with them.
– user389
Dec 16 '11 at 19:36
from mpmath import *
You may need to increase the precision (e.g. mp.dps=30) and it will likely be slow, but should be quite accurate.
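A fuller sketch of this approach (my reconstruction: the integrand is $|u|^2$, and the cube $[0,1]^3$ puts the singularity at a corner, as the comments suggest):

from mpmath import mp, mpf, quad, sqrt
mp.dps = 30  # extra working precision, as noted above
f = lambda x, y, z: (x**2 + y**2 + z**2) ** (mpf(-2)/3)  # |u|^2, integrable at 0
print(sqrt(quad(f, [0, 1], [0, 1], [0, 1])))  # one interval per dimension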
You could also try nesting calls to MATLAB's quadgk(), which uses adaptive Gauss-Kronrod quadrature in 1D.