http://exciting.wikidot.com/oxygen-graphene-from-the-ground-state-to-excitations | Graphene: from the Ground State to Excitations
by Pasquale Pavone, Caterina Cocchi, Wahib Aggoune, & Ronaldo Rodrigues Pela for exciting oxygen
Purpose: In this tutorial you will learn how to set up and execute exciting calculations for graphene. Examples of electronic band-structure calculations and full structure optimization will also be shown. Finally, the loss function of graphene is presented as an example of the calculation of excited-state properties.
### 0. Preliminary notes
Read the following paragraphs before getting started!
Be sure that relevant environment variables are already defined as specified in How to set environment variables for tutorial scripts. Here is a list of the scripts that are relevant for this tutorial, together with a short description.
• SETUP-elastic-strain.py: Python script for generating strained structures.
• SETUP-planar.py: Python script for setting calculations for two-dimensional materials.
• SETUP-graphene-along-c.py: Python script for generating graphene structures in which an atom is displaced along the c axis.
• EXECUTE-single.sh: (Bash) shell script for running a single exciting calculation.
• EXECUTE-elastic-strain.sh: (Bash) shell script for running a series of exciting calculations.
• EXECUTE-diamond-phonon-g.sh: (Bash) shell script for running a series of exciting calculations.
• PLOT-birch.py: Python script for fitting energy-vs-strain curves using the Birch-Murnaghan equation of state.
• PLOT-files.py: Python visualization tool for multiple files with the same name in different directories (for more details, see The python script "PLOT-files.py").
• PLOT-optimized-geometry.py: Python visualization tool for relaxed coordinates of atoms in the unit cell.
• PLOT-band-structure.py: Python visualization script for plotting and comparing energy bands (for more details, see The python script "PLOT-band-structure.py").
• PLOT-dos.py: Python visualization script for plotting and comparing density of states (for more details, see The python script "PLOT-dos.py").
From now on, the symbol $ will indicate the shell prompt. Requirements: Bash shell. Python numpy, lxml, matplotlib.pyplot, and sys libraries.
### 1. Basic background on graphene
Graphene is a two-dimensional material composed of carbon atoms arranged in a hexagonal (honeycomb) lattice, as shown in the left panel of the following figure. This structure can be described as a hexagonal lattice with a basis of two atoms. One choice for the lattice vectors is
(1)
\begin{align} {\bf a}_1= a \left(-\frac{1}{2}, \frac{\sqrt{3}}{2}\right) \qquad {\rm and} \qquad {\bf a}_2= a \left(\frac{1}{2}, \frac{\sqrt{3}}{2}\right)\;, \end{align}
where a = 2.46 Å is the experimental lattice constant. The reciprocal-lattice vectors are
(2)
\begin{align} {\bf b}_1= \frac{2\pi}{a} \left(-1, \frac{\sqrt{3}}{3}\right) \qquad {\rm and} \qquad {\bf b}_2= \frac{2\pi}{a} \left(1, \frac{\sqrt{3}}{3}\right)\;. \end{align}
Of particular importance for the physics of graphene are the K points at the corners of the first Brillouin zone. These are named Dirac points due to a particular feature of the electronic band structure, illustrated in the next figure with a three-dimensional plot of the uppermost occupied (blue) and the lowermost empty (red) bands.
### 2. Using exciting for calculating properties of graphene
##### 2.1 Two-dimensional structures are treated as special three-dimensional periodic systems
Since exciting assumes three-dimensional periodicity along the directions defined by the lattice vectors, some care needs to be taken to simulate a two-dimensional system. The solution is to consider a special three-dimensional crystal in which the distance between a monolayer and its replicas is large enough that each monolayer behaves as if it were isolated. This is illustrated in the next figure. As seen there, a three-dimensional structure is created by replicating the monolayer along the out-of-plane axis at a given interlayer distance, corresponding to the amount of "vacuum" added to the system. Its size has to be chosen large enough to prevent interlayer interactions.
##### 2.2 Preparation of the input file
Initially, create a directory for this tutorial, graphene, and enter it.
$ mkdir -p graphene
$ cd graphene
The input file (input.xml) for graphene is given in the following.
<input>
<title>Graphene</title>
<structure speciespath="$EXCITINGROOT/species">
<crystal scale="4.65">
<basevect> -0.5 0.8660254040 0.0 </basevect>
<basevect> 0.5 0.8660254040 0.0 </basevect>
<basevect> 0.0 0.0000000000 6.0 </basevect>
</crystal>
<species speciesfile="C.xml" rmt="1.20">
<atom coord="0.00000000 0.00000000 0.0"/>
<atom coord="0.33333333 0.33333333 0.0"/>
</species>
</structure>
<groundstate
do="fromscratch"
xctype="GGA_PBE"
ngridk="8 8 1"
rgkmax="4"
swidth="0.01">
</groundstate>
</input>
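As a quick sanity check of the geometry defined above, the following short Python sketch (not one of the tutorial scripts; the lattice constant is taken from the scale attribute above) builds the in-plane lattice vectors of Eq. (1) and computes the corresponding reciprocal-lattice vectors, which should reproduce Eq. (2):

import numpy as np

# In-plane lattice vectors of graphene; a is the scale used in input.xml (Bohr).
a = 4.65
a1 = a * np.array([-0.5, np.sqrt(3.0) / 2.0])
a2 = a * np.array([ 0.5, np.sqrt(3.0) / 2.0])

# Reciprocal-lattice vectors defined by b_i . a_j = 2*pi*delta_ij.
A = np.array([a1, a2])                # rows are the direct lattice vectors
B = 2.0 * np.pi * np.linalg.inv(A).T  # rows are b1 and b2

print("b1 =", B[0])                   # ~ (2*pi/a) * (-1, 1/sqrt(3)), cf. Eq. (2)
print("b2 =", B[1])                   # ~ (2*pi/a) * ( 1, 1/sqrt(3))
print("check:", B @ A.T / (2.0 * np.pi))  # should print the identity matrix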
##### 2.3 Execute calculations
To perform the first calculation for graphene, create the input.xml (as provided above) inside the current directory (graphene/).
N.B.: Do not forget to replace the string "$EXCITINGROOT" in input.xml by the actual value of the environment variable $EXCITINGROOT, using the command
$ SETUP-excitingroot.sh
You can now run the script EXECUTE-single.sh in the current directory (where input.xml is). To save the results in the subdirectory first-example:
$ EXECUTE-single.sh first-example
===> Output directory is "first-example" <===
Running exciting for file input.xml -------------------------------------
Elapsed time = 0m16s
Run completed for file input.xml ----------------------------------------
$
This calculation solves the Kohn-Sham equations self-consistently and determines the ground-state total energy. For more information, check the file first-example/INFO.OUT.
##### 2.4 Density of states
After the ground-state calculation is completed, it is time to investigate more properties. One of the most fundamental is the density of states (DOS): it provides information on how many electronic states there are at a given energy. To calculate it, move inside the directory first-example.
$ cd first-example/
Then, modify the file input.xml inside this directory (for more details, see Input Reference) as described below:
1. in the element groundstate, change the attribute do to "skip";
2. add the element properties after groundstate;
3. insert the sub-element dos into the element properties;
4. add the attribute nsmdos = "3" to the element dos.
The input.xml file should look like this:
...
<groundstate
do="skip"
...
</groundstate>
<properties>
<dos nsmdos="3"/>
</properties>
...
Now, execute exciting with:
$ time exciting_smp
This should take only a few seconds. Then, to visualize the DOS, execute
$ PLOT-dos.py
The script PLOT-dos.py (discussed in detail in The python script "PLOT-dos.py") produces the PNG file PLOT.png. You can visualize this file with standard tools.
Questions:
• Does the DOS reveal the presence of a gap? Why?
• What would you expect due to the existence of the Dirac cones?
• Is the K point included in the ngridk mesh?
• What changes if you use ngridk = "9 9 1"? Why? (A quick numerical check for the last two questions is sketched below.)
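To help answer the last two questions, here is a small Python sketch (with the assumption that the k-mesh is a Γ-centred grid with fractional coordinates j/N, as for the default vkloff) that checks whether the K point at (1/3, 1/3, 0) falls on an N x N x 1 grid:

from fractions import Fraction

def k_on_grid(n, coord=Fraction(1, 3)):
    # True if the K-point coordinate 1/3 equals j/n for some integer j
    return (coord * n).denominator == 1

for n in (8, 9):
    print(f"ngridk = {n} {n} 1: K point on grid -> {k_on_grid(n)}")
# Expected: False for 8 8 1 and True for 9 9 1 (since 3/9 = 1/3).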
##### 2.5 Electronic band-structure
The electronic band-structure details the dependence of the energy eigenvalues on the wavevectors k. To obtain it, proceed as follows:
• Comment or delete the sub-element dos inside properties.
<!--dos nsmdos="3"/-->
• Insert the sub-element bandstructure inside properties with the following specifications:
...
<properties>
...
<bandstructure>
<plot1d>
<path steps="100">
<point coord="0.0 0.0 0.0" label="GAMMA"/>
<point coord="0.33333333 -0.33333333 0.0" label="K" />
<point coord="0.5 -0.5 0.0" label="M" />
<point coord="1.0 0.0 0.0" label="GAMMA"/>
</path>
</plot1d>
</bandstructure>
</properties>
...
Now execute exciting with the command:
$ time exciting_smp
To visualize the electronic band structure (which is written to the file BAND.OUT), you can use the script PLOT-band-structure.py (discussed in detail in The python script "PLOT-band-structure.py").
$ PLOT-band-structure.py
This script produces the PNG file PLOT.png which can be visualized with standard tools.
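If you want to inspect BAND.OUT directly instead, a minimal matplotlib sketch is given below. It assumes the usual layout of BAND.OUT, i.e. two columns (distance along the path, energy in Hartree) with consecutive bands separated by blank lines; check your file and adjust the parsing if it differs.

import numpy as np
import matplotlib.pyplot as plt

bands, block = [], []
with open("BAND.OUT") as f:
    for line in f:
        if line.strip():
            block.append([float(x) for x in line.split()[:2]])
        elif block:
            bands.append(np.array(block))
            block = []
if block:
    bands.append(np.array(block))

for band in bands:
    plt.plot(band[:, 0], band[:, 1], color="tab:blue", lw=0.8)
plt.xlabel("Path distance")
plt.ylabel("Energy [Ha]")
plt.savefig("bands-check.png", dpi=150)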
### 3. Full structure optimization
Up to this point, all calculations assumed graphene in its experimental equilibrium configuration. However, exciting is also able to predict this equilibrium configuration, i.e., the equilibrium lattice constant of graphene. This is addressed in this section.
Move back to the previous directory, create a new folder structure-optimization, and move inside it.
$ cd ..
$ mkdir -p structure-optimization/
$ cd structure-optimization/
We take as a starting point the input file used in the last section. Copy it using:
$ cp ../input.xml ./
Modify the input file by including the relax element just after the groundstate element (see Input Reference for the meaning of the different attributes).
...
</groundstate>
<relax method="harmonic" taunewton="1.0"/>
</input>
The goal is to find the lattice constant which minimizes the total energy of the system. In our special case of a graphene monolayer, we must be careful to change the lattice parameter within the monolayer without changing the distance between the monolayers along the out-of-plane direction. The effect of changing the amount of "vacuum" will be addressed in the next section.
A series of distorted structures corresponding to an in-plane homogeneous strain must be generated. The script SETUP-elastic-strain.py serves this purpose and can be executed as follows:
$ SETUP-elastic-strain.py optimization
Enter maximum Lagrangian strain [smax] >>>> 0.10
Enter the number of strain values in [-smax,smax] >>>> 21
------------------------------------------------------------------------
List of deformation codes for strains in Voigt notation
------------------------------------------------------------------------
 0 => ( eta,  eta,  eta,   0,   0,   0) | volume strain
 1 => ( eta,    0,    0,   0,   0,   0) | linear strain along x
 2 => (   0,  eta,    0,   0,   0,   0) | linear strain along y
 3 => (   0,    0,  eta,   0,   0,   0) | linear strain along z
 4 => (   0,    0,    0, eta,   0,   0) | yz shear strain
 5 => (   0,    0,    0,   0, eta,   0) | xz shear strain
 6 => (   0,    0,    0,   0,   0, eta) | xy shear strain
 7 => (   0,    0,    0, eta, eta, eta) | shear strain along (111)
 8 => ( eta,  eta,    0,   0,   0,   0) | xy in-plane strain
 9 => ( eta, -eta,    0,   0,   0,   0) | xy in-plane shear strain
10 => ( eta,  eta,  eta, eta, eta, eta) | global strain
11 => ( eta,    0,    0, eta,   0,   0) | mixed strain
12 => ( eta,    0,    0,   0, eta,   0) | mixed strain
13 => ( eta,    0,    0,   0,   0, eta) | mixed strain
14 => ( eta,  eta,    0, eta,   0,   0) | mixed strain
------------------------------------------------------------------------
Enter deformation code >>>> 8
$
In this example, interactive input entries are preceded by the symbol ">>>>" and must be typed when requested. The first entry (in our example, 0.10) represents the absolute value of the maximum strain considered in the calculations. The second entry (21) is the number of deformed structures, equally spaced in strain, generated between the maximum negative strain and the maximum positive one. The third entry (8) is a label indicating the type of deformation: an in-plane homogeneous strain, in our example. Strain tensors are given in the Voigt notation.
After running the script, a directory called optimization is created, which contains input files for the different strain values. To execute the series of calculations with the input files created by SETUP-elastic-strain.py, you have to run the script EXECUTE-elastic-strain.sh.
$ EXECUTE-elastic-strain.sh optimization
===> Output directory is "optimization" <===
Running exciting for file input-01.xml ----------------------------------
...
Run completed for file input-21.xml -------------------------------------
$
After the complete run, move to the directory optimization.
$ cd optimization
Inside this directory, the results of the calculation for the input file input-i.xml are contained in the subdirectory rundir-i, where i runs from 01 to the total number of strain values. The data for the energy-vs-strain curve are contained in the file energy-vs-strain. To extract the value of the theoretical equilibrium lattice parameter, first run the script SETUP-planar.py.
$ SETUP-planar.py
Then, use the script PLOT-birch.py as indicated below.
$ PLOT-birch.py
Input file is "energy-vs-strain".
Modified version for strained planar systems!
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    A2          A3        lagrangian strain at minimum        log(chi)
  885.566   -10393.618      0.00010 ( alat0 = 4.6505 )          -4.07
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
$
As can be seen from this output, an equilibrium lattice constant (alat0) of 4.6505 Bohr is obtained. The agreement between this theoretical value and the experimental lattice constant of 4.65 Bohr is very good. However, the accuracy of this result should be checked for improved values of the numerical parameters of the calculation, as explained in the next section. Furthermore, the script PLOT-birch.py generates a PNG output file, PLOT.png, which can be visualized using standard Linux tools.
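The fitting step can also be reproduced approximately by hand. The sketch below is a simplified stand-in for PLOT-birch.py (with assumptions: energy-vs-strain is a plain two-column file of Lagrangian strain and total energy in Hartree, and the reference lattice constant is the 4.65 Bohr used to generate the strained structures); it fits a low-order polynomial and estimates the lattice constant at the energy minimum.

import numpy as np

# Assumed format: two columns, Lagrangian strain eta and total energy [Ha].
eta, energy = np.loadtxt("energy-vs-strain", unpack=True)

coeffs = np.polyfit(eta, energy, 4)       # a quartic is enough for |eta| <= 0.10
fit = np.poly1d(coeffs)

grid = np.linspace(eta.min(), eta.max(), 20001)
eta_min = grid[np.argmin(fit(grid))]      # strain at the minimum of the fit

a_ref = 4.65                               # Bohr, lattice constant of the unstrained input
a_opt = a_ref * np.sqrt(1.0 + 2.0 * eta_min)  # Lagrangian strain: eta = (lambda^2 - 1) / 2
print(f"strain at minimum: {eta_min:+.5f}   estimated alat0: {a_opt:.4f} Bohr")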
The presence of internal relaxation can be verified by running
$ PLOT-optimized-geometry.py
This script also generates PLOT.png, overwriting the existing one. If every plotted curve is flat, there is no internal relaxation.
Questions:
• Do you find any relaxation of the relative position of the two atoms in the unit cell of graphene? Why?
• Are there deformations for which relaxation of the internal degrees of freedom is important?
### 4. Convergence tests
In order to be able to rely on your calculations, you need to understand how the choice of the computational parameters changes the values of the physical quantities relevant to you. For most of the calculations performed in this tutorial, the most important computational parameters are listed below:
• The mesh of k-points (groundstate attribute: ngridk).
• The size of the basis set for expanding the wave function (groundstate attribute: rgkmax).
• The amount of "vacuum" considered in the out-of-plane direction between two adjacent monolayers.
Now, perform some calculations to verify how changes in these parameters modify the physical properties relevant to this tutorial.
### 5. Is graphene really planar?
In the rest of this tutorial, we assumed that the planar configuration of graphene is the most stable one. In this section, we check whether this is always true. Create a new folder planar inside the directory graphene and move into it.
$ cd ..
$ mkdir -p planar
$ cd planar
To analyze whether the planar configuration is the most favorable one, we perform a series of calculations displacing one of the two atoms in the unit cell along the out-of-plane direction. The calculation can be repeated for different values of the applied strain. The starting point is the following input file, to be copied as input.xml inside the current folder planar.
<input>
<title>Graphene</title>
<structure speciespath="$EXCITINGROOT/species">
<crystal scale="4.6505">
<basevect> -0.5 0.8660254040 0.0 </basevect>
<basevect> 0.5 0.8660254040 0.0 </basevect>
<basevect> 0.0 0.0000000000 6.0 </basevect>
</crystal>
<species speciesfile="C.xml" rmt="1.03">
<atom coord="0.00000000 0.00000000 0.0"/>
<atom coord="0.33333333 0.33333333 0.0"/>
</species>
</structure>
<groundstate
do="fromscratch"
xctype="GGA_PBE"
ngridk="8 8 1"
rgkmax="4"
swidth="0.01">
</groundstate>
</input>
Notice that the muffin-tin radius of carbon has been decreased to avoid the overlap of muffin-tin spheres at large negative strains:
...
<species speciesfile="C.xml" rmt="1.03">
...
As usual, employ
$ SETUP-excitingroot.sh
to assign the correct value to the variable $EXCITINGROOT. Now you can produce a series of input files, one for each displacement, using the script SETUP-graphene-along-c.py. As an example, for an applied compressive strain of -0.10, one has
$ SETUP-graphene-along-c.py str-0.10
Enter lagrangian strain >>>> -0.10
Enter maximum displacement umax [c/a] >>>> 0.2
Enter the number of displacements in [0,umax] >>>> 11
$
Here, one of the two carbon atoms is displaced along the out-of-plane direction up to a value of 20% of the length of the out-of-plane lattice vector. The generated input files for each displacement are stored in the str-0.10 subdirectory. To carry out the calculations, run the script EXECUTE-diamond-phonon-g.sh:
$ EXECUTE-diamond-phonon-g.sh str-0.10
The calculated energy for the different displacements can be obtained using
$ PLOT-files.py -f energy-vs-displacement -d str-0.10 -pm -ly 'Energy [Ha]' -lx '% of out-of-plane displacement [c/a]'
Now inspect the corresponding output file PLOT.png. The same calculation can be repeated for different applied strains. It is especially interesting to investigate what happens at relatively high compressive strains. Repeat the above procedure for the following strains:
strain    directory
-0.12     str-0.12
-0.14     str-0.14
-0.16     str-0.16
-0.18     str-0.18
-0.20     str-0.20
and look at the results using
$ PLOT-files.py -f energy-vs-displacement -d str-0.10 str-0.12 str-0.14 str-0.16 str-0.18 str-0.20 -pm -ly 'Energy [Ha]' -lx '% of out-of-plane displacement [c/a]'
Questions:
• At a given compressive strain, which percentage of out-of-plane displacement minimizes the total energy?
• What do you conclude from the previous results?
• Is graphene really planar? Always?
### 6. Excited-states properties: The loss function
In this section, we calculate the loss function of graphene using time-dependent density-functional theory (TDDFT) within the random-phase approximation (RPA). The loss function is a quantity directly measurable using electron-energy loss spectroscopy (EELS).
Go to the directory graphene, create a new folder loss-function, and enter it.
$ cd ../../
$ mkdir -p loss-function
$ cd loss-function
Copy the input.xml given in Section 2 (if needed, update the attribute speciespath). You can do this with
$ cp ../input.xml ./
To calculate excited-states properties, make the following changes in input.xml.
• Modify the in-plane lattice vector to the optimized value of 4.6505 Bohr.
• Change inside groundstate the value of the attribute ngridk to "9 9 1".
• Include the element xs just after groundstate as given below (see Input Reference and Excited states from TDDFT for more information about the meaning of the different attributes).
...
</groundstate>
<xs
xstype ="TDDFT"
rgkmax="4"
ngridk="9 9 1"
vkloff="0.097 0.273 0.493"
nempty="80"
gqmax="1.0"
tevout="true">
<energywindow
intv="0.0 1.0"
points="500" />
<tddft
fxctype="RPA"/>
<qpointset>
<qpoint> 0.0 0.0 0.0 </qpoint>
</qpointset>
</xs>
</input>
Then, execute exciting using the script EXECUTE-single.sh:
$ EXECUTE-single.sh rpa-k09
When the calculation finishes, the results are stored in the directory rpa-k09, where the suffix k09 reflects the choice ngridk = "9 9 1" inside both the groundstate and xs elements. Now, the xx component of the loss function can be copied to the current directory as follows:
$ cp rpa-k09/LOSS_FXCRPA_OC11_QMT001.OUT loss-k09
You can visualize the loss function employing the script PLOT-files.py (for a detailed description of the script arguments see The python script "PLOT-dos.py") as follows:
$ PLOT-files.py -f loss-k09 -rc -lx 'Energy [eV]' -ly 'Loss function'
This generates the file PLOT.png, which should look as follows.
##### 6.1 Converging the loss spectrum w.r.t. the mesh in reciprocal space
The results shown above are obtained using ngridk = "9 9 1" for both the ground-state (groundstate) and excited-states (xs) calculations. The loss spectrum is not fully converged at this mesh. Further calculations should be performed with finer meshes until the desired convergence is reached. In order to do so, repeat the calculations above for
"18 18 1" --> rpa-k18 --> loss-k18
"27 27 1" --> rpa-k27 --> loss-k27
"36 36 1" --> rpa-k36 --> loss-k36
"45 45 1" --> rpa-k45 --> loss-k45
...
Notice that at each step the complexity of the calculation increases in a non-linear way, soon resulting in quite large execution times.
Hint: You can compare two or more loss spectra by using the script PLOT-files.py, similarly to the following:
$ PLOT-files.py -f loss-k09 loss-k18 loss-k27 -rc -lx 'Energy [eV]' -ly 'Loss function'
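If you prefer plain matplotlib for such a comparison, a minimal sketch is shown below. It assumes that in each loss-kNN file the first column is the energy in eV (since tevout="true") and the second column is the loss function; check the file header and adjust the column indices if needed.

import numpy as np
import matplotlib.pyplot as plt

for label in ("loss-k09", "loss-k18", "loss-k27"):
    try:
        data = np.loadtxt(label)
    except OSError:
        continue                      # skip meshes that have not been calculated yet
    plt.plot(data[:, 0], data[:, 1], label=label)

plt.xlabel("Energy [eV]")
plt.ylabel("Loss function")
plt.legend()
plt.savefig("loss-comparison.png", dpi=150)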
Questions:
• At which value of ngridk can you consider the loss spectrum as converged?
• Consider separately the features in the loss spectrum below and above 10 eV. Do they converge at the same value of ngridk?
##### 6.2 Converging the loss spectrum w.r.t. the number of basis functions
Another very important parameter for the convergence of the loss spectrum is the attribute rgkmax of the element groundstate. This parameter, together with the total number of local orbitals, determines the number of basis functions used for the expansion of the Kohn-Sham wave functions.
The loss spectrum is not fully converged for rgkmax = "4.0", which is the value we used above. Further calculations should be performed with larger values (e.g., "4.5", "5.0", "5.5", etc.) until the desired convergence is reached.
Questions:
• How can you further improve the precision of your calculations?
• Which further parameters are relevant to be optimized?
https://mattermodeling.stackexchange.com/tags/model-hamiltonians/new | # Tag Info
### How can I change the mass of an electron in the hamiltonian on a PySCF calculation?
As already stated above, PySCF works with atomic units. This means that the electron mass is implicitly baked into the equations. In the molecular Hamiltonian, the only term that depends on the mass is ...
https://www.physicsforums.com/threads/oscillating-string-transverse-speed-what-am-i-doing-wrong.525477/ | # Homework Help: Oscillating String - Transverse Speed, what am I doing wrong?
1. Aug 29, 2011
### Malavin
A string oscillates according to the equation
y′ = (0.654 cm) sin[(π/4.0 cm⁻¹)x] cos[(24.2π s⁻¹)t].
What are the (a) amplitude and (b) speed of the two waves (identical except for direction of travel) whose superposition gives this oscillation? (c) What is the distance between nodes? (d) What is the transverse speed of a particle of the string at the position x = 1.24 cm when t = 1.13 s?
I only need help with part d, the other parts I have gotten right.
Here's my attempt at solving:
u = ∂y′/∂t = (0.654 cm) (24.2π s⁻¹) sin[(π/4.0 cm⁻¹)x] (−1) sin[(24.2π s⁻¹)t]
Then, plugging in x and t:
u = (−0.654 cm) (24.2π s⁻¹) sin[(π/4.0 cm⁻¹)(1.24 cm)] sin[(24.2π s⁻¹)(1.13 s)]
u = -32.9 cm/s
When I plug this solution in, I am told that it is not the correct answer. Even when I tried neglecting the negative sign. I am not sure how my calculations are wrong, but I would love to be enlightened!
Last edited: Aug 29, 2011
2. Aug 29, 2011
### vela
Staff Emeritus
Your method looks fine. I get a different number. Are you sure your calculator is in radian mode?
3. Aug 29, 2011
### Malavin
Oops! I looked at the equation again and it's π/4.0 cm-1, not 4.0π. That is what I used to get the result I did. I have gone back and edited my original post to reflect this. I hope that when calculated with π/4, you get the same thing as I do.
EDIT: Okay, apparently my method was right, but there is something wrong with my calculator. Just plugged in the numbers to Wolfram and received 36.4 cm/s which is the right answer.
Final Edit: Yes, I must have been using parentheses dumb or something. Just plugged it into my calculator again and got 36.4 cm/s. I feel dumb now, but I got the right answer after all! :)
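As a quick numerical check of the result above (added for illustration, not part of the original thread), the following short Python sketch evaluates u = ∂y′/∂t at the given x and t, working in radians:

import numpy as np

A = 0.654          # cm
k = np.pi / 4.0    # cm^-1
w = 24.2 * np.pi   # s^-1
x, t = 1.24, 1.13  # cm, s

u = -A * w * np.sin(k * x) * np.sin(w * t)   # transverse speed dy'/dt
print(f"u = {u:.1f} cm/s")                    # ~ +36.4 cm/s, matching the thread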
https://www.physicsforums.com/threads/q1-does-a-and-its-transpose-have-the-same-eigenspace.215613/ | # Q1: Does A and its transpose have the same eigenspace?
1. Feb 15, 2008
### Howers
So I've shown that A and A^T have the same char. polynomials => same eigenvalues, using the fact that detA = detA^T. I still can't see any way I could possibly show or disprove that the eigenspace is the same.
2. Feb 15, 2008
### gel
$$\left(\begin{array}{ll} 1 & 1\\ 0 & 0 \end{array}\right)\ \ \left(\begin{array}{ll} 1 & 0\\ 1 & 0 \end{array}\right)$$
Do they have the same eigenspaces?
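A quick numerical check of this counterexample (added for illustration) with numpy:

import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 0.0]])

vals_A, vecs_A = np.linalg.eig(A)
vals_AT, vecs_AT = np.linalg.eig(A.T)

print(vals_A, vals_AT)   # both matrices have eigenvalues 1 and 0
print(vecs_A)            # columns span the directions (1, 0) and (1, -1)
print(vecs_AT)           # columns span the directions (1, 1) and (0, 1)
# The eigenvalues agree, but the eigenvectors (and hence the eigenspaces) differ.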
3. Feb 15, 2008
### ObsessiveMathsFreak
A and A^T will not have the same eigenspaces, i.e. eigenvectors, in general.
Remember that there are in fact two "eigenvectors" for every eigenvalue $$\lambda$$. The right eigenvector satisfying $$A\mathbf{x} = \lambda \mathbf{x}$$ and a left eigenvector (eigenrow?) satisfying $$\mathbf{x}A = \lambda \mathbf{x}$$. In general these are not equal.
Also, I believe that the set of left eigenvectors is the inverse matrix of the set of right eigenvectors, but I am not sure about this. If this is indeed the case, then the set of left eigenvectors will "coincide" with the set of right eigenvectors only when the set of right eigenvectors is orthonormal, i.e. when A is symmetric, A=A^T.
EDIT: In fact, the conjecture above is not true unless you select specially scaled sets of eigenvectors and eigenrows. Is there a way of selecting eigenvectors of canonical lengths?
Last edited: Feb 15, 2008
http://math.stackexchange.com/questions/74934/p-lefte-e-right-sum-eta-1-mathbbh-p-lefth-eta-right-p-left/76855 | # $P\left(E_e\right)=\sum _{\eta =1}^{\mathbb{H}} P\left(H_{\eta }\right) P\left(E_e|H_{\eta }\right)$ How to prove?
Is it that true?
If yes, how to prove this?
$$P\left(E_e\right)=\sum _{\eta =1}^{\mathbb{H}} P\left(H_{\eta }\right) P\left(E_e|H_{\eta }\right),$$
where $E_e$ is a generic piece of evidence, $H_\eta$ is the $\eta$-th hypothesis, and $\mathbb{H}$ is the cardinality of the hypothesis ($H$) set.
I don't quite follow the jargon, but the formula looks like the law of total probability. – Srivatsan Oct 22 '11 at 23:04
Looks very simple, yet... I didn't find a way until now. Thanks for the link. I will take a look. – GarouDan Oct 22 '11 at 23:05
Just be aware that the law of total probability $P(A) = \sum_i P(A\mid B_i)P(B_i)$ assumes $A \subset \cup_i B_i \subset \Omega$ where $\Omega$ is the entire sample space. More simply, people just assume $\cup_i B_i = \Omega$ for simplicity. – Dilip Sarwate Oct 23 '11 at 0:57
@SrivatsanNarayanan, if one of you answers this question, I will accept it. I just don't want to post my own answer. – GarouDan Oct 28 '11 at 12:50
@DilipSarwate if one of you answers this question, I will accept it. I just don't want to post my own answer. – GarouDan Oct 28 '11 at 12:50
The formula you quote seems to be just the law of total probability. Assume that the set of events $\{ H_\eta \}_{1 \leq \eta \leq \mathbb H}$ forms a partition of the sample space $\Omega$; i.e., the $H_\eta$'s are pairwise disjoint, and $\bigcup \limits_{\eta = 1}^{\mathbb H} H_\eta = \Omega$.
Now for any event $E_e$, the set of events $\{ E_e \cap H_\eta \}$ forms a partition of $E_e$. Therefore, by additivity, we have $$P(E_e) = \sum_{\eta = 1}^{\mathbb H} P(E_e \cap H_\eta). \tag{1}$$ Now, by the definition of conditional probability, we have $P(E_e \cap H_\eta) = P(H_\eta) \cdot P(E_e \mid H_\eta)$. Plugging this in $(1)$ we get the claim.*
The following sentence taken from the wikipedia article explains what this theorem means intuitively (notation changed to match ours):
The summation can be interpreted as a weighted average, and consequently the marginal probability, $P(E_e)$, is sometimes called "average probability"; "overall probability" is sometimes used in less formal writings.
*The formula remains true even when $\{ H_\eta \}_{\eta \geq 1}$ forms a countably infinite partition of $\Omega$. The proof has to be modified only slightly for this.
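For a concrete check of the identity, here is a tiny numerical example (the numbers are hypothetical) with a three-part partition:

import numpy as np

p_H = np.array([0.5, 0.3, 0.2])           # P(H_eta); a partition, so these sum to 1
p_E_given_H = np.array([0.1, 0.6, 0.9])   # P(E_e | H_eta)

p_E = np.sum(p_H * p_E_given_H)           # law of total probability
print(p_E)                                 # 0.5*0.1 + 0.3*0.6 + 0.2*0.9 = 0.41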
Thx. A good answer! – GarouDan Oct 29 '11 at 23:42
https://arxiv.org/abs/hep-th/0401053 | # Title:M-theory and E10: Billiards, Branes, and Imaginary Roots
Abstract: Eleven dimensional supergravity compactified on $T^{10}$ admits classical solutions describing what is known as billiard cosmology - a dynamics expressible as an abstract (billiard) ball moving in the 10-dimensional root space of the infinite dimensional Lie algebra E10, occasionally bouncing off walls in that space. Unlike finite dimensional Lie algebras, E10 has negative and zero norm roots, in addition to the positive norm roots. The walls above are related to physical fluxes that, in turn, are related to positive norm roots (called real roots) of E10. We propose that zero and negative norm roots, called imaginary roots, are related to physical branes. Adding `matter' to the billiard cosmology corresponds to adding potential terms associated to imaginary roots. The, as yet, mysterious relation between E10 and M-theory on $T^{10}$ can now be expanded as follows: real roots correspond to fluxes or instantons, and imaginary roots correspond to particles and branes (in the cases we checked). Interactions between fluxes and branes and between branes and branes are classified according to the inner product of the corresponding roots (again in the cases we checked). We conclude with a discussion of an effective Hamiltonian description that captures some features of M-theory on $T^{10}.$
Comments: 68pp, references added as well as minor corrections
Subjects: High Energy Physics - Theory (hep-th)
Journal reference: JHEP 0408:063,2004
DOI: 10.1088/1126-6708/2004/08/063
Report number: UCB-PTH-04-01, LBNL-54228
Cite as: arXiv:hep-th/0401053 (or for this version)
## Submission history
From: Ori J. Ganor [view email]
[v1] Fri, 9 Jan 2004 01:34:52 UTC (54 KB)
[v2] Sun, 11 Jan 2004 20:33:35 UTC (55 KB)
[v3] Wed, 14 Jan 2004 02:25:48 UTC (55 KB)
[v4] Sat, 24 Jan 2004 03:59:28 UTC (56 KB)
[v5] Mon, 2 Feb 2004 21:12:50 UTC (58 KB)
[v6] Thu, 15 Jul 2004 00:11:56 UTC (59 KB)
[v7] Thu, 15 Jul 2004 22:33:31 UTC (59 KB)
[v8] Tue, 19 Oct 2004 21:05:42 UTC (62 KB)
https://www.esaral.com/q/the-quantity-of-charge-required-to-obtain-20762 | The quantity of charge required to obtain
Question:
The quantity of charge required to obtain one mole of aluminium from Al2O3 is ___________.
(i) 1F
(ii) 6F
(iii) 3F
(iv) 2F
Solution:
Option (iii) 3F is the answer: aluminium in Al2O3 is present as Al³⁺, so obtaining one mole of Al requires three moles of electrons (Al³⁺ + 3e⁻ → Al), i.e. a charge of 3 F.
https://www.varsitytutors.com/hiset_math-help/quadratic-equations | # HiSET: Math : Quadratic equations
## Example Questions
### Example Question #1 : Quadratic Equations
What is the vertex of the following quadratic polynomial?
Explanation:
For a quadratic function f(x) = ax² + bx + c, the vertex will always be (−b/(2a), f(−b/(2a))).
Thus, since our function is
, and .
We plug these variables into the formula to get the vertex as
.
Hence, the vertex of
is
.
### Example Question #2 : Quadratic Equations
Which of the following expressions represents the discriminant of the following polynomial?
Explanation:
The discriminant of a quadratic polynomial ax² + bx + c is given by b² − 4ac.
Thus, since our quadratic polynomial is
,
, and
Plugging these values into the discriminant equation, we find that the discriminant is
.
### Example Question #3 : Quadratic Equations
Which of the following polynomial equations has exactly one solution?
Explanation:
A polynomial equation of the form ax² + bx + c = 0 has one and only one (real) solution if and only if its discriminant is equal to zero - that is, if its coefficients satisfy the equation b² − 4ac = 0.
In each of the choices, and , so it suffices to determine the value of which satisfies this equation. Substituting, we get
Solve for by first adding 400 to both sides:
Take the square root of both sides:
The choice that matches this value of is the equation
### Example Question #4 : Quadratic Equations
Give the nature of the solution set of the equation
Two irrational solutions
Two imaginary solutions
One imaginary solution
Two rational solutions
One rational solution
Two rational solutions
Explanation:
To determine the nature of the solution set of a quadratic equation, it is necessary to first express it in standard form ax² + bx + c = 0.
To accomplish this, first, multiply the binomials on the left using the FOIL technique:
Collect like terms:
The key to determining the nature of the solution set is to examine the discriminant . Setting , the value of the discriminant is
The discriminant is a positive number; furthermore, it is a perfect square, being equal to the square of 11. Therefore, the solution set comprises two rational solutions.
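This classification by discriminant can be automated. Below is a small sketch (assuming integer coefficients a, b, c of a quadratic already written in standard form) that reports the nature of the solution set:

import math

def solution_nature(a, b, c):
    # Classify the solutions of a*x^2 + b*x + c = 0 from the discriminant b^2 - 4ac
    d = b * b - 4 * a * c
    if d < 0:
        return "two imaginary solutions"
    if d == 0:
        return "one rational solution"       # a double root (for rational coefficients)
    if math.isqrt(d) ** 2 == d:
        return "two rational solutions"       # positive perfect square
    return "two irrational solutions"

print(solution_nature(1, 3, 2))   # discriminant 1  -> two rational solutions
print(solution_nature(1, 2, 1))   # discriminant 0  -> one rational solution
print(solution_nature(1, 1, 1))   # discriminant -3 -> two imaginary solutions
print(solution_nature(1, 3, 1))   # discriminant 5  -> two irrational solutions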
### Example Question #5 : Quadratic Equations
Which of the following polynomial equations has exactly one solution?
Explanation:
A polynomial equation of the standard form ax² + bx + c = 0 has one and only one (real) solution if and only if its discriminant is equal to zero - that is, if its coefficients satisfy the equation b² − 4ac = 0.
Each of the choices can be rewritten in standard form by subtracting the term on the right from both sides. One of the choices can be rewritten as follows:
By similar reasoning, the other four choices can be written:
In each of the five standard forms, and , so it is necessary to determine the value of that produces a zero discriminant. Substituting accordingly:
Add 900 to both sides and take the square root:
Of the five standard forms,
fits this condition. This is the standard form of the equation
,
the correct choice.
### Example Question #6 : Quadratic Equations
Give the nature of the solution set of the equation
.
Two rational solutions
One imaginary solution
Two irrational solutions
One rational solution
Two imaginary solutions
Two imaginary solutions
Explanation:
To determine the nature of the solution set of a quadratic equation, it is necessary to first express it in standard form ax² + bx + c = 0.
This can be done by simply switching the first and second terms:
The key to determining the nature of the solution set is to examine the discriminant . Setting , the value of the discriminant is
The discriminant has a negative value. It follows that the solution set comprises two imaginary values.
### Example Question #7 : Quadratic Equations
Give the nature of the solution set of the equation
Two irrational solutions
Two rational solutions
One rational solution
Two imaginary solutions
One imaginary solution
Two imaginary solutions
Explanation:
To determine the nature of the solution set of a quadratic equation, it is necessary to first express it in standard form ax² + bx + c = 0.
This can be done by adding 17 to both sides:
The key to determining the nature of the solution set is to examine the discriminant
. Setting , the value of the discriminant is
This value is negative. Consequently, the solution set comprises two imaginary numbers.
### Example Question #11 : Understand And Apply Concepts Of Equations
Give the nature of the solution set of the equation
Two rational solutions
Two imaginary solutions
One rational solution
Two irrational solutions
One imaginary solution
Two irrational solutions
Explanation:
To determine the nature of the solution set of a quadratic equation, it is necessary to first express it in standard form ax² + bx + c = 0.
To accomplish this, first, multiply the binomials on the left using the FOIL technique:
Collect like terms:
Now, subtract 18 from both sides:
The key to determining the nature of the solution set is to examine the discriminant . Setting , the value of the discriminant is
The discriminant is a positive number, so there are two real solutions. Since 73 is not a perfect square, the solutions are irrational.
### Example Question #12 : Understand And Apply Concepts Of Equations
Give the nature of the solution set of the equation
Two imaginary solutions
One imaginary solution
Two irrational solutions
Two rational solutions
One rational solution
Two imaginary solutions
Explanation:
To determine the nature of the solution set of a quadratic equation, it is necessary to first express it in standard form ax² + bx + c = 0.
To accomplish this, first, multiply the binomials on the left using the FOIL technique:
Collect like terms:
Now, add 18 to both sides:
The key to determining the nature of the solution set is to examine the discriminant . Setting , the value of the discriminant is
This discriminant is negative. Consequently, the solution set comprises two imaginary numbers.
### Example Question #13 : Understand And Apply Concepts Of Equations
Give the nature of the solution set of the equation
Two rational solutions
One rational solution
One imaginary solution
Two irrational solutions
Two imaginary solutions
Two irrational solutions
Explanation:
To determine the nature of the solution set of a quadratic equation, it is necessary to first express it in standard form ax² + bx + c = 0.
This can be done by switching the first and third terms on the left:
The key to determining the nature of the solution set is to examine the discriminant
. Setting , the value of the discriminant is
.
The discriminant is a positive number but not a perfect square. Therefore, there are two irrational solutions.
http://www.ck12.org/book/Probability-and-Statistics---Advanced-(Second-Edition)/r1/section/11.1/
# 11.1: The F-Distribution and Testing Two Variances
## Learning Objectives
• Understand the differences between the $F$-distribution and Student’s $t$-distribution.
• Calculate a test statistic as a ratio of values derived from sample variances.
• Use random samples to test hypotheses about multiple independent population variances.
• Understand the limits of inferences derived from these methods.
## Introduction
In previous lessons, we learned how to conduct hypothesis tests that examined the relationship between two variables. Most of these tests simply evaluated the relationship of the means of two variables. However, sometimes we also want to test the variance, or the degree to which observations are spread out within a distribution. In the figure below, we see three samples with identical means (the samples in red, green, and blue) but with very different variances:
So why would we want to conduct a hypothesis test on variance? Let’s consider an example. Suppose a teacher wants to examine the effectiveness of two reading programs. She randomly assigns her students into two groups, uses a different reading program with each group, and gives her students an achievement test. In deciding which reading program is more effective, it would be helpful to not only look at the mean scores of each of the groups, but also the “spreading out” of the achievement scores. To test hypotheses about variance, we use a statistical tool called the $F$-distribution.
In this lesson, we will examine the difference between the $F$-distribution and Student’s $t$-distribution, calculate a test statistic with the $F$-distribution, and test hypotheses about multiple population variances. In addition, we will look a bit more closely at the limitations of this test.
### The $F$-Distribution
The $F$-distribution is actually a family of distributions. The specific $F$-distribution for testing two population variances, $\sigma^2_1$ and $\sigma^2_2$, is based on two values for degrees of freedom (one for each of the populations). Unlike the normal distribution and the $t$-distribution, $F$-distributions are not symmetrical and span only non-negative numbers. (Normal distributions and $t$-distributions are symmetric and have both positive and negative values.) In addition, the shapes of $F$-distributions vary drastically, especially when the value for degrees of freedom is small. These characteristics make determining the critical values for $F$-distributions more complicated than for normal distributions and Student’s $t$-distributions. $F$-distributions for various degrees of freedom are shown below:
### $F$-Max Test: Calculating the Sample Test Statistic
We use the $F$-ratio test statistic when testing the hypothesis that there is no difference between population variances. When calculating this ratio, we really just need the variance from each of the samples. It is recommended that the larger sample variance be placed in the numerator of the $F$-ratio and the smaller sample variance in the denominator. By doing this, the ratio will always be greater than 1.00 and will simplify the hypothesis test.
Example: Suppose a teacher administered two different reading programs to two groups of students and collected the following achievement score data:
$\text{Program 1:} \quad n_1=31, \quad \bar{x}_1=43.6, \quad s_1^2=105.96$
$\text{Program 2:} \quad n_2=41, \quad \bar{x}_2=43.8, \quad s_2^2=36.42$
What is the $F$-ratio for these data?
$F=\frac{s{_1}^2}{s{_2}^2}=\frac{105.96}{36.42} \approx 2.909$
### $F$-Max Test: Testing Hypotheses about Multiple Independent Population Variances
When we test the hypothesis that two variances of populations from which random samples were selected are equal, $H_0: \sigma^2_1=\sigma^2_2$ (or in other words, that the ratio of the variances $\frac{\sigma^2_1}{\sigma^2_2}=1$), we call this test the $F$-Max test. Since we have a null hypothesis of $H_0: \sigma^2_1=\sigma^2_2$, our alternative hypothesis would be $H_a: \sigma^2_1 \neq \sigma^2_2$.
Establishing the critical values in an $F$-test is a bit more complicated than when doing so in other hypothesis tests. Most tables contain multiple $F$-distributions, one for each of the following: 1 percent, 5 percent, 10 percent, and 25 percent of the area in the right-hand tail. (Please see the supplemental link for an example of this type of table.) We also need to use the degrees of freedom from each of the samples to determine the critical values.
On the Web
http://www.statsoft.com/textbook/sttable.html#f01 $F$-distribution tables.
Example: Suppose we are trying to determine the critical values for the scenario in the preceding section, and we set the level of significance to 0.02. Because we have a two-tailed test, we assign 0.01 to the area to the right of the positive critical value. Using the $F$-table for $\alpha=0.01$, we find the critical value at 2.203, since the numerator has 30 degrees of freedom and the denominator has 40 degrees of freedom.
Once we find our critical values and calculate our test statistic, we perform the hypothesis test the same way we do with the hypothesis tests using the normal distribution and Student’s $t$-distribution.
Example: Using our example from the preceding section, suppose a teacher administered two different reading programs to two different groups of students and was interested if one program produced a greater variance in scores. Perform a hypothesis test to answer her question.
For the example, we calculated an $F$-ratio of 2.909 and found a critical value of 2.203. Since the observed test statistic exceeds the critical value, we reject the null hypothesis. Therefore, we can conclude that the observed ratio of the variances from the independent samples would have occurred by chance if the population variances were equal less than 2% of the time. We can conclude that the variance of the student achievement scores for the second sample is less than the variance of the scores for the first sample. We can also see that the achievement test means are practically equal, so the difference in the variances of the student achievement scores may help the teacher in her selection of a program.
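The same worked example can be reproduced with a few lines of Python, looking up the critical value with scipy instead of a printed table (a sketch; the numbers are the ones used above):

from scipy import stats

s1_sq, n1 = 105.96, 31    # larger sample variance goes in the numerator
s2_sq, n2 = 36.42, 41

F = s1_sq / s2_sq                                         # F-ratio test statistic, ~2.909
df_num, df_den = n1 - 1, n2 - 1                           # 30 and 40 degrees of freedom
alpha = 0.02                                              # two-tailed, so 0.01 in the right tail
F_crit = stats.f.ppf(1.0 - alpha / 2.0, df_num, df_den)   # ~2.203

print(f"F = {F:.3f}, critical value = {F_crit:.3f}, reject H0: {F > F_crit}")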
### The Limits of Using the $F$-Distribution to Test Variance
The test of the null hypothesis, $H_0; \sigma^2_1=\sigma^2_2$, using the $F$-distribution is only appropriate when it can safely be assumed that the population is normally distributed. If we are testing the equality of standard deviations between two samples, it is important to remember that the $F$-test is extremely sensitive. Therefore, if the data displays even small departures from the normal distribution, including non-linearity or outliers, the test is unreliable and should not be used. In the next lesson, we will introduce several tests that we can use when the data are not normally distributed.
## Lesson Summary
We use the $F$-Max test and the $F$-distribution when testing if two variances from independent samples are equal.
The $F$-distribution differs from the normal distribution and Student’s $t$-distribution. Unlike the normal distribution and the $t$-distribution, $F$-distributions are not symmetrical and go from 0 to $\infty$, not from $- \infty$ to $\infty$ as the others do.
When testing the variances from independent samples, we calculate the $F$-ratio test statistic, which is the ratio of the variances of the independent samples.
When we reject the null hypothesis, $H_0:\sigma^2_1=\sigma^2_2$, we conclude that the variances of the two populations are not equal.
The test of the null hypothesis, $H_0: \sigma^2_1=\sigma^2_2$, using the $F$-distribution is only appropriate when it can be safely assumed that the population is normally distributed.
## Review Questions
1. We use the $F$-Max test to examine the differences in the ___ between two independent samples.
2. List two differences between the $F$-distribution and Student’s $t$-distribution.
3. When we test the differences between the variances of two independent samples, we calculate the ___.
4. When calculating the $F$-ratio, it is recommended that the sample with the ___ sample variance be placed in the numerator, and the sample with the ___ sample variance be placed in the denominator.
5. Suppose a guidance counselor tested the mean of two student achievement samples from different SAT preparatory courses. She found that the two independent samples had similar means, but also wants to test the variance associated with the samples. She collected the following data:
$\text{SAT Prep Course 1:} \quad n=31, \quad s^2=42.30$
$\text{SAT Prep Course 2:} \quad n=21, \quad s^2=18.80$
(a) What are the null and alternative hypotheses for this scenario?
(b) What is the critical value with $\alpha=0.10$?
(c) Calculate the $F$-ratio.
(d) Would you reject or fail to reject the null hypothesis? Explain your reasoning.
(e) Interpret the results and determine what the guidance counselor can conclude from this hypothesis test.
1. True or False: The test of the null hypothesis, $H_0:\sigma^2_1=\sigma^2_2$, using the $F$-distribution is only appropriate when it can be safely assumed that the population is normally distributed.
https://www.physicsforums.com/threads/easy-question-about-the-number-operator.812852/ | # Easy Question About the Number Operator
1. May 8, 2015
### metapuff
Suppose I have a system of fermions in the ground state $\Psi_0$. If I operate on this state with the number operator, I get
$$\langle \Psi_0 | c_k^{\dagger} c_k | \Psi_0 \rangle = \frac{1}{e^{(\epsilon_k - \mu)\beta} + 1}$$
which is, of course, the fermi distribution. What if I operate with $c^{\dagger}_k c_l$, where $k \neq l$? I.e, what is
$$\langle \Psi_0 | c_k^{\dagger} c_l | \Psi_0 \rangle?$$
My hunch says that this is zero, but I'm not sure. This might be obvious.
2. May 8, 2015
### fzero
You can show that this is zero by writing out the ground state $|\Psi_0\rangle$ as a product of creation operators acting on the vacuum $|0>$. Since $\{c^\dagger_k,c_l\}=0$ for $k\neq l$, we can anticommute the $c^\dagger_k$ through to the $c^\dagger_k$ appearing in the product. Then we find a factor of $(c^\dagger_k)^2=0$.
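One way to convince yourself numerically (an illustration added here, not from the original thread) is to represent two fermionic modes explicitly as 4x4 matrices via a Jordan-Wigner construction and evaluate the expectation values directly:

import numpy as np

a = np.array([[0.0, 1.0], [0.0, 0.0]])   # single-mode annihilation operator
Z = np.diag([1.0, -1.0])
I = np.eye(2)

c1 = np.kron(a, I)        # c_1
c2 = np.kron(Z, a)        # c_2; the Z string enforces anticommutation between modes

vac = np.zeros(4); vac[0] = 1.0          # |0>
psi = c1.conj().T @ vac                  # state with one particle in mode 1

print(psi @ c1.conj().T @ c1 @ psi)      # 1.0 -> occupation of mode 1
print(psi @ c2.conj().T @ c2 @ psi)      # 0.0 -> occupation of mode 2
print(psi @ c1.conj().T @ c2 @ psi)      # 0.0 -> the off-diagonal term vanishes, as argued above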
3. May 8, 2015
### metapuff
Nice! That's really clever. This seems like a trick that I'll be using a lot. :)
4. May 8, 2015
### metapuff
Okay, another question. Let $\Psi_{0,\downarrow}$ be the ground state for spin down electrons (for example we could have a partially polarized electron gas, with $\Psi_{0,\downarrow}$ representing the filled fermi sphere for down-spin electrons). If I try to act on this with the number operator for spin up particles, like
$$\langle \Psi_{0,\downarrow} | c^{\dagger}_{\uparrow} c_{\uparrow} | \Psi_{0,\downarrow} \rangle$$
do I get 0 (since there are no spin-up particles in the down-spin ground state), or can I just pull the spin-up operators out, and write
$$\langle \Psi_{0,\downarrow} | c^{\dagger}_{\uparrow} c_{\uparrow} | \Psi_{0,\downarrow} \rangle = c^{\dagger}_{\uparrow} c_{\uparrow}?$$
Again, this might be obvious, but I don't have a lot of confidence with second quantization yet and am trying to build intuition. Thanks!
5. May 9, 2015
### fzero
You will get zero because you can anticommute to get the expression
$$c^\dagger_{k\uparrow} \prod_r^\mathcal{N} c^\dagger_{r\downarrow} c_{k\uparrow} | 0 \rangle.$$
Also, you generally can't pull operators out of an expectation value. If the expectation value you're writing down is physically sensible then the operator acts on the state that you're using to compute the expectation value.
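The two results above can be checked numerically with a Jordan-Wigner matrix representation of the fermionic operators. The sketch below uses four modes; the choice of occupied modes (0 and 1 as the filled "spin-down" states, 2 and 3 left empty) is illustrative.

```python
import numpy as np

n_modes = 4
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])                 # Jordan-Wigner string factor
lower = np.array([[0.0, 1.0],
                  [0.0, 0.0]])           # |empty><occupied| on a single mode

def annihilate(j, n=n_modes):
    """Matrix of c_j on the n-mode Fock space (Jordan-Wigner construction)."""
    ops = [Z] * j + [lower] + [I2] * (n - j - 1)
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

c = [annihilate(j) for j in range(n_modes)]
cd = [op.conj().T for op in c]

# |Psi0> = c_0^dag c_1^dag |vac>: modes 0 and 1 occupied, modes 2 and 3 empty
vac = np.zeros(2**n_modes)
vac[0] = 1.0
psi0 = cd[0] @ cd[1] @ vac

print(psi0 @ cd[0] @ c[0] @ psi0)   # 1.0  (number operator of an occupied mode)
print(psi0 @ cd[0] @ c[2] @ psi0)   # 0.0  (k != l)
print(psi0 @ cd[3] @ c[3] @ psi0)   # 0.0  (number operator of an empty mode)
```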
# Angle between two skew lines
1. Oct 11, 2016
### Kernul
1. The problem statement, all variables and given/known data
The problem asks me to evaluate the angle between these two lines:
$r : \begin{cases} x - 2y - 3 = 0 \\ 3y + z = 0 \end{cases} s : \begin{cases} x = 1 + 4t \\ y = 2 - 3t \\ z = 3 \end{cases}$
both oriented to the decreasing $y$.
2. Relevant equations
3. The attempt at a solution
Having found $\vec v_r = (-2, -1, 3)$, $\vec v_s = (4, -3, 0)$, $P_r (1, -1, 3)$, and $P_s (1, 2, 3)$
I already know that the lines are skew. I then found out that in order to find the angle between the two lines, I have to first find a plane containing one of the two lines (for example $r$) that is at the same time parallel to the other one (in this example $s$). In a few words, I have to find a line parallel to $s$ that meets the line $r$ in a point that belongs to $r$.
The thing is that I don't know how to find that parallel line to $s$ that at the same time passes into a point $P_r$ belonging to the line $r$.
Should I take one of the Cartesian equations of $r$ and see the projection of $s$ on it so as to have the parallel line? And then find the intersection between this parallel line and $r$? Or should I proceed in another way?
By the way, this is the Cartesian form of the $s$ line I found:
$s : \begin{cases} \frac{3}{4}x + y - \frac{11}{4} = 0 \\ z - 3 = 0 \end{cases}$
2. Oct 11, 2016
### Staff: Mentor
Do you know the dot product = scalar product? You don't have to construct new planes and whatever, once you have vr and vs the angle can be found in a single line on paper.
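With the direction vectors from the attempt above, the dot-product computation is only a few lines of numpy (the vectors are the ones found in the post; the printed value is the angle between the oriented directions):

```python
import numpy as np

v_r = np.array([-2.0, -1.0, 3.0])
v_s = np.array([4.0, -3.0, 0.0])

cos_theta = np.dot(v_r, v_s) / (np.linalg.norm(v_r) * np.linalg.norm(v_s))
print(np.degrees(np.arccos(cos_theta)))   # angle in degrees between the oriented directions
```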
3. Oct 11, 2016
### Kernul
Ohw... I was so concentrated on the parallel line I didn't think of that...
Thank you and sorry.
# Future Constraints On The Reionization History From GRB Afterglows
Presentation #109.07 in the session “Large Scale Structure”.
Published on Jun 18, 2021
We forecast the reionization history constraints, inferred from Lyman-alpha damping wing absorption features, for a future sample of ~ 20 z > 6 gamma-ray burst (GRB) afterglows using a Fisher matrix analysis. We determine the expected constraints on the average neutral fraction after marginalizing over parameters describing the size of the ionized regions around each GRB and the column density of local damped Lyman-alpha systems associated with the GRB host galaxies. Assuming follow-up spectroscopy of the afterglows with a fiducial signal-to-noise ratio of 20 per R=3,000 resolution element at the continuum, we find that the neutral fraction may be determined to better than 10-15% (1-sigma) accuracy from this data across multiple independent redshift bins at z ~ 6–10, spanning much of the Epoch of Reionization, although the precision degrades somewhat near the end of reionization. A more futuristic survey with 80 GRB afterglows at z > 6 can improve the precision here by a factor of 2 and extend measurements out to z ~ 14.
## The Annals of Mathematical Statistics
### Sample Criteria for Testing Outlying Observations
Frank E. Grubbs
#### Abstract
The problem of testing outlying observations, although an old one, is of considerable importance in applied statistics. Many and various types of significance tests have been proposed by statisticians interested in this field of application. In this connection, we bring out in the Historical Comments notable advances toward a clear formulation of the problem and important points which should be considered in attempting a complete solution. In Section 4 we state some of the situations the experimental statistician will very likely encounter in practice, these considerations being based on experience. For testing the significance of the largest observation in a sample of size $n$ from a normal population, we propose the statistic $\frac{S^2_n}{S^2} = \frac{\sum^{n-1}_{i=1} (x_i - \bar x_n)^2}{\sum^n_{i=1} (x_i - \bar x)^2}$ where $x_1 \leq x_2 \leq \cdots \leq x_n, \bar x_n = \frac{1}{n - 1} \sum^{n-1}_{i=1} x_i$ and $\bar x = \frac{1}{n}\sum^{n}_{i=1} x_i.$ A similar statistic, $S^2_1/S^2$, can be used for testing whether the smallest observation is too low. It turns out that $\frac{S^2_n}{S^2} = 1 - \frac{1}{n - 1} \big(\frac{x_n - \bar x}{s}\big)^2 = 1 - \frac{1}{n - 1} T^2_n,$ where $s^2 = \frac{1}{n}\sum^n_{i=1}(x_i - \bar x)^2,$ and $T_n$ is the studentized extreme deviation already suggested by E. Pearson and C. Chandra Sekar [1] for testing the significance of the largest observation. Based on previous work by W. R. Thompson [12], Pearson and Chandra Sekar were able to obtain certain percentage points of $T_n$ without deriving the exact distribution of $T_n$. The exact distribution of $S^2_n/S^2$ (or $T_n$) is apparently derived for the first time by the present author. For testing whether the two largest observations are too large we propose the statistic $\frac{S^2_{n-1,n}}{S^2} = \frac{\sum^{n-2}_{i=1} (x_i - \bar x_{n-1,n})^2}{\sum^n_{i=1} (x_i - \bar x)^2},\quad\bar x_{n-1,n} = \frac{1}{n - 2} \sum^{n-2}_{i=1} x_i$ and a similar statistic, $S^2_{1,2}/S^2$, can be used to test the significance of the two smallest observations. The probability distributions of the above sample statistics $S^2 = \sum^n_{i=1} (x_i - \bar x)^2 \quad \text{where} \quad \bar x = \frac{1}{n} \sum^n_{i=1} x_i$, $S^2_n = \sum^{n-1}_{i=1} (x_i - \bar x_n)^2 \quad \text{where} \quad \bar x_n = \frac{1}{n-1} \sum^{n-1}_{i=1} x_i$, and $S^2_1 = \sum^n_{i=2} (x_i - \bar x_1)^2 \quad \text{where} \quad \bar x_1 = \frac{1}{n-1} \sum^n_{i=2} x_i$ are derived for a normal parent and tables of appropriate percentage points are given in this paper (Table I and Table V). Although the efficiencies of the above tests have not been completely investigated under various models for outlying observations, it is apparent that the proposed sample criteria have considerable intuitive appeal. In deriving the distributions of the sample statistics for testing the largest (or smallest) or the two largest (or two smallest) observations, it was first necessary to derive the distribution of the difference between the extreme observation and the sample mean in terms of the population $\sigma$ (here $x_1 \leq x_2 \leq \cdots \leq x_n$, $s^2 = \frac{1}{n} \sum^n_{i=1} (x_i - \bar x)^2$, and $\bar x = \frac{1}{n} \sum^n_{i=1} x_i$). This probability distribution was apparently derived first by A. T. McKay [11], who employed the method of characteristic functions.
The author was not aware of the work of McKay when the simplified derivation for the distribution of $\frac{x_n - \bar x}{\sigma}$ outlined in Section 5 below was worked out by him in the spring of 1945, McKay's result being called to his attention by C. C. Craig. It has been noted also that K. R. Nair [20] worked out independently and published the same derivation of the distribution of the extreme minus the mean arrived at by the present author--see Biometrika, Vol. 35, May, 1948. We nevertheless include part of this derivation in Section 5 below as it was basic to the work in connection with the derivations given in Sections 8 and 9. Our table is considerably more extensive than Nair's table of the probability integral of the extreme deviation from the sample mean in normal samples, since Nair's table runs from $n = 2$ to $n = 9,$ whereas our Table II is for $n = 2$ to $n = 25$. The present work is concluded with some examples.
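As a quick illustration of the criterion, the following sketch evaluates $S^2_n/S^2$ and the studentized extreme deviation $T_n$ for a small made-up sample and checks the identity quoted above; the data values are illustrative only.

```python
import numpy as np

x = np.sort(np.array([2.1, 2.4, 2.5, 2.6, 2.7, 2.9, 3.0, 5.8]))  # largest value suspect
n = len(x)

S2  = np.sum((x - x.mean())**2)             # S^2, deviations about the full-sample mean
S2n = np.sum((x[:-1] - x[:-1].mean())**2)   # S_n^2, largest observation excluded

s  = np.sqrt(S2 / n)                        # s^2 uses divisor n, as in the abstract
Tn = (x[-1] - x.mean()) / s                 # studentized extreme deviation

print(S2n / S2, 1 - Tn**2 / (n - 1))        # the two sides of the identity agree
```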
#### Article information
Source: Ann. Math. Statist., Volume 21, Number 1 (1950), 27-58.
First available in Project Euclid: 28 April 2007 (https://projecteuclid.org/euclid.aoms/1177729885)
Digital Object Identifier: doi:10.1214/aoms/1177729885
Mathematical Reviews number (MathSciNet): MR33993
Zentralblatt MATH identifier: 0036.21003
Circle with circumference C in black, diameter D in cyan, radius R in red, and center or origin O in magenta.
In classical geometry, a radius of a circle or sphere is any of the line segments from its center to its perimeter, and in more modern usage, it is also their length. The name comes from the Latin radius, meaning ray but also the spoke of a chariot wheel.[1] The plural of radius can be either radii (from the Latin plural) or the conventional English plural radiuses.[2] The typical abbreviation and mathematical variable name for radius is r. By extension, the diameter d is defined as twice the radius:[3]
${\displaystyle d\doteq 2r\quad \Rightarrow \quad r={\frac {d}{2}}.}$
If an object does not have a center, the term may refer to its circumradius, the radius of its circumscribed circle or circumscribed sphere. In either case, the radius may be more than half the diameter, which is usually defined as the maximum distance between any two points of the figure. The inradius of a geometric figure is usually the radius of the largest circle or sphere contained in it. The inner radius of a ring, tube or other hollow object is the radius of its cavity.
For regular polygons, the radius is the same as its circumradius.[4] The inradius of a regular polygon is also called apothem. In graph theory, the radius of a graph is the minimum over all vertices u of the maximum distance from u to any other vertex of the graph.[5]
The radius of the circle with perimeter (circumference) C is
${\displaystyle r={\frac {C}{2\pi }}.}$
## Formula
For many geometric figures, the radius has a well-defined relationship with other measures of the figure.
### Circles
The radius of a circle with area A is
${\displaystyle r={\sqrt {\frac {A}{\pi }}}.}$
The radius of the circle that passes through the three non-collinear points P1, P2, and P3 is given by
${\displaystyle r={\frac {|{\vec {OP_{1}}}-{\vec {OP_{3}}}|}{2\sin \theta }},}$
where θ is the angle P1P2P3. This formula uses the law of sines. If the three points are given by their coordinates (x1,y1), (x2,y2), and (x3,y3), the radius can be expressed as
${\displaystyle r={\frac {\sqrt {[(x_{2}-x_{1})^{2}+(y_{2}-y_{1})^{2}][(x_{2}-x_{3})^{2}+(y_{2}-y_{3})^{2}][(x_{3}-x_{1})^{2}+(y_{3}-y_{1})^{2}]}}{2|x_{1}y_{2}+x_{2}y_{3}+x_{3}y_{1}-x_{1}y_{3}-x_{2}y_{1}-x_{3}y_{2}|}}.}$
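A small numerical sketch of this circumradius, written with the equivalent form r = abc/(4A), where A is the triangle's area; the three sample points are illustrative and lie on the unit circle.

```python
import numpy as np

def circumradius(p1, p2, p3):
    a = np.linalg.norm(p2 - p1)
    b = np.linalg.norm(p2 - p3)
    c = np.linalg.norm(p3 - p1)
    # twice the triangle area, i.e. |x1 y2 + x2 y3 + x3 y1 - x1 y3 - x2 y1 - x3 y2|
    area2 = abs((p2[0] - p1[0]) * (p3[1] - p1[1]) - (p3[0] - p1[0]) * (p2[1] - p1[1]))
    return a * b * c / (2 * area2)

p1, p2, p3 = np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([-1.0, 0.0])
print(circumradius(p1, p2, p3))   # 1.0 for points on the unit circle
```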
### Regular polygons
| n  | Rn          |
|----|-------------|
| 3  | 0.577350... |
| 4  | 0.707106... |
| 5  | 0.850650... |
| 6  | 1.0         |
| 7  | 1.152382... |
| 8  | 1.306562... |
| 9  | 1.461902... |
| 10 | 1.618033... |

A square, for example (n=4)
The radius r of a regular polygon with n sides of length s is given by r = Rn s, where ${\displaystyle R_{n}=1\left/\left(2\sin {\frac {\pi }{n}}\right)\right..}$ Values of Rn for small values of n are given in the table. If s = 1 then these values are also the radii of the corresponding regular polygons.
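The tabulated values follow directly from this formula; a one-loop check:

```python
import numpy as np

# R_n = 1 / (2 sin(pi/n)) for n = 3..10, reproducing the table above
for n in range(3, 11):
    print(n, 1.0 / (2.0 * np.sin(np.pi / n)))
```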
### Hypercubes
The radius of a d-dimensional hypercube with side s is
${\displaystyle r={\frac {s}{2}}{\sqrt {d}}.}$
## Use in coordinate systems
### Polar coordinates
The polar coordinate system is a two-dimensional coordinate system in which each point on a plane is determined by a distance from a fixed point and an angle from a fixed direction.
The fixed point (analogous to the origin of a Cartesian system) is called the pole, and the ray from the pole in the fixed direction is the polar axis. The distance from the pole is called the radial coordinate or radius, and the angle is the angular coordinate, polar angle, or azimuth.[6]
### Cylindrical coordinates
In the cylindrical coordinate system, there is a chosen reference axis and a chosen reference plane perpendicular to that axis. The origin of the system is the point where all three coordinates can be given as zero. This is the intersection between the reference plane and the axis.
The axis is variously called the cylindrical or longitudinal axis, to differentiate it from the polar axis, which is the ray that lies in the reference plane, starting at the origin and pointing in the reference direction.
The distance from the axis may be called the radial distance or radius, while the angular coordinate is sometimes referred to as the angular position or as the azimuth. The radius and the azimuth are together called the polar coordinates, as they correspond to a two-dimensional polar coordinate system in the plane through the point, parallel to the reference plane. The third coordinate may be called the height or altitude (if the reference plane is considered horizontal), longitudinal position,[7] or axial position.[8]
### Spherical coordinates
In a spherical coordinate system, the radius describes the distance of a point from a fixed origin. Its position is further defined by the polar angle measured between the radial direction and a fixed zenith direction, and the azimuth angle, the angle between the orthogonal projection of the radial direction on a reference plane that passes through the origin and is orthogonal to the zenith, and a fixed reference direction in that plane.
Maple Versions
You can access the power of the Maple computation engine through a variety of user interfaces: the Standard Worksheet, the Command-line version, the Classic Worksheet, and custom-built Maplet applications.
Maple provides users with two worksheet interfaces. Both have access to the full mathematical engine of Maple and take advantage of the new functionality in Maple 2021. By default, worksheets open in the enhanced and more modern Standard Worksheet. The Classic Worksheet, available on certain platforms (see below), has the traditional Maple worksheet look and uses less memory.
Standard Worksheet
The Standard Worksheet contains the full-featured interface. To access the Standard Worksheet, click the Maple 2021 icon. For example, using Windows:
• From the Start menu, select All Programs > Maple 2021 > Maple 2021.
Command-line Version
The Command-line version does not include graphical user interfaces features, but this method is recommended when solving very large complex problems or using scripts for batch processing.
To access the Command-line version in Mac, use the maple command. In Windows, use the cmaple command. For more information on the Command-line version, including Command-line options, see maple. Alternatively, to access the Command-line version using Windows, click the Command-line version icon.
• From the Start menu, select All Programs > Maple 2021 > Command-line Maple 2021.
Classic Worksheet
The Maple Legacy Interface, Classic Worksheet, provides a basic worksheet environment for older computers with limited memory. If your system has less than the recommended amount of physical memory, it is suggested that you use the Classic Worksheet version. For system requirements, refer to the Install.html file in your Maple installation directory, or visit the webpage https://www.maplesoft.com/products/system_requirements.aspx.
Note: The Classic Worksheet is available on Windows platforms. To access the Classic Worksheet:
• From the bin.X86_64_WINDOWS subfolder of your Maple installation directory, launch cwmaple.exe
A typical location for your Maple installation directory is C:\Program Files\Maple 2021
Maplet Applications
The Maplet User Interface Customization system allows you to create windows, dialogs, and other visual interfaces that interact with a user to provide the power of Maple. Users can perform calculations or plot functions without using the Standard Worksheet interface. A Maplet application is a collection of elements including, but not limited to, windows and their associated layouts, dialogs, and actions. To create Maplet applications, you can use the Maplets package or the Maplet Builder.
For information and examples, see the Maplets and MapletViewer help pages.
To view examples of built-in Maplet applications, see the interactive tutors in the Student[Calculus1] package.
Worksheet Compatibility Issues
For worksheet compatibility issues, see the Worksheet Compatibility help page.
Authors
Daniel Kammen, Christof Koch, Philip Holmes
Abstract
The firing patterns of populations of cells in the cat visual cortex can exhibit oscillatory responses in the range of 35 - 85 Hz. Furthermore, groups of neurons many mm's apart can be highly synchronized as long as the cells have similar orientation tuning. We investigate two basic network architectures that incorporate either nearest-neighbor or global feedback interactions and conclude that non-local feedback plays a fundamental role in the initial synchronization and dynamic stability of the oscillations.
# Extract all citations from .tex file
Is there a fool-proof way to extract all bibtex citation-keys that are cited in a .tex file?
I do not mean regular-expression magic on the .tex-file because this is bound to cause problems when switching between natbib, apacite etc. which all use different citation commands. Also, citations made using \nocite{*} will not be included ...
I thought about looking into the .bbl file, which does contain all references included in the final document, but the format of the .bbl file differs vastly between packages as well, such that the key extraction is difficult.
The citations are contained in the .aux file.
\usepackage{atveryend}
\makeatletter
\let\origcitation\citation
\AtEndDocument{\def\mycites{\@gobble}%
  \def\citation#1{\origcitation{#1}\g@addto@macro\mycites{,#1}}}
\AtVeryEndDocument{\typeout{***^^JCited keys: \mycites^^J***}}
\makeatother
This will show on screen and in the .log file, at the end of the LaTeX run, a message such as
***
Cited keys: xxx,yyy,*
***
It would be possible to avoid the appearance of *, but I don't think it's worthy the trouble. Only actually cited keys will appear (BibTeX uses \citation{*} as a signal for including the whole database).
One can output the citations to an auxiliary file, instead:
\makeatletter
\let\origcitation\citation
\AtEndDocument{\def\mycites{}%
  \def\citation#1{\origcitation{#1}\g@addto@macro\mycites{#1^^J}}}
\AtVeryEndDocument{\newwrite\citeout\immediate\openout\citeout=\jobname.cit
  \immediate\write\citeout{\mycites}\immediate\closeout\citeout}
\makeatother
Then, if the file is test.tex, the citation keys will be saved in the file test.cit one per line.
Yes, I noted that \nocite{*} citations do not appear. However, in the .aux file, all items appear in \bibcite{} commands (at least in my current test-setup)... – thias Oct 19 '11 at 13:34
\bibcite entries are written when reading the thebibliography environment, so also keys coming from \nocite{*} will be there. The right entries are the \citation ones. – egreg Oct 19 '11 at 14:39
so, replacing \citation with \bibcite everywhere in your code will output all the used citation keys? That would be exactly what I need... – thias Oct 19 '11 at 14:58
I tested it and it works. One minor issue: latex breaks the output at (probably) exactly 80 characters such that the list is broken at weird places. Since I want to use the output in a script, is there a way to output it without line-breaks? – thias Oct 19 '11 at 15:00
In unix-like systems, one can write cat myfile.aux | grep "\\\\citation" | sed 's/\\citation{\(.*\)}/\1/g' | sort | uniq > myfile.cit and the resulting file contains all used citations, and each of them exactly once. – tohecz Feb 3 '12 at 13:35
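As an alternative to the shell one-liner in the comment above, here is a small Python sketch that pulls the cited keys out of the .aux file; it assumes the keys appear in \citation{...} lines, possibly several keys per command.

```python
import re
import sys

def cited_keys(aux_path):
    r"""Return the citation keys found in \citation{...} lines, in first-seen order."""
    keys = []
    pattern = re.compile(r'\\citation\{([^}]*)\}')
    with open(aux_path) as aux:
        for line in aux:
            for group in pattern.findall(line):
                keys.extend(k.strip() for k in group.split(',') if k.strip())
    return list(dict.fromkeys(keys))   # drop duplicates, keep order

if __name__ == '__main__':
    print('\n'.join(cited_keys(sys.argv[1])))
```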
With bibtool you can do as follows:
bibtool -x file.aux -o bibliography.bib
This extracts your cited bibliography. Now if you just grep the file for lines with @ in them, you get fairly close to a list of keys...
nice! However, I want to get along without bibtool. I found that it destroys some of my entries. For example, it decapitalizes my keys all the time... – thias Oct 19 '11 at 13:37
Various TeX-aware programming editors have macros to achieve this. For instance, there's a package called bibmacros for use with winedt which (inter alia) does the job you describe. It works on the .aux file created by latex and BibTeX, and creates a new bib file called jobname-minimal.bib, where jobname is the name of the aux file (without the "aux" extension, of course). Other editors must have similar macros, either built-in or accessible as extra packages.
Hey, that's right! For emacs, I found M-x reftex-create-bibtex-file... – thias Oct 19 '11 at 15:05
@thias --- Really? I don't see that option in reftex 4.31. Could you elaborate? (I usually use bibtool, but it doesn't handle cross-references in the .bib file very well, so an emacs solution would be great.) – jon Feb 3 '12 at 14:55
Found in menu [Ref] -> [Global Actions] -> [Create BibTeX File] – thisirs Nov 14 '12 at 9:15
# State the advantages of ac over dc. - Science
Short Note
State the advantages of ac over dc.
#### Solution
• The voltage of AC can be varied easily using a device called a transformer. AC can be carried over long distances using step-up transformers. The loss of energy while distributing current in the form of AC is negligible.
• Direct current cannot be transmitted as such. AC can be easily converted into DC. Generating AC is easier than generating DC. AC can produce electromagnetic induction, which is useful in several ways.
Concept: Distinction Between an A.C. Generator and D.C. Motor
## The Mistakes by Bell and von Neumann are Identical
Foundations of physics and/or philosophy of physics, and in particular, posts on unresolved or controversial issues
### Re: The Mistakes by Bell and von Neumann are Identical
Joy Christian wrote:***
I have worked out the correct eigenvalue of the operator (R + S + T - U) relevant for Bell's implicit assumption [cf. eq. (16) or (29) of this paper: https://arxiv.org/abs/1704.02876].
The correct eigenvalue of the operator (R + S + T - U) is
(1) $\sqrt{(r + s + t - u)^2 + z\;}$,
where $z = l + m + n - o - p - q$, and l, m, n, o, p, and q are the eigenvalues of the operators L = RS - SR, M = RT - TR, N = TS - ST, O = US - SU, P = UR - RU, and Q = UT - TU, respectively.
Now, implementing what they think is the demand of local realism, Bell and his followers assume that the eigenvalue of the operator (R + S + T - U) is (r + s + t - u). But that is true if and only if the operators R, S, T, and U commute with each other. This is easy to see from the above eq. (1). When R, S, T, and U all commute with each other, then z = 0 and the eigenvalue reduces to (r + s + t - u). But in the Bell-test experiments the operators R, S, T, and U do not commute with each other because they correspond to different detections made at mutually exclusive measurement directions. So Bell and his followers assume a wrong eigenvalue of the operator (R + S + T - U) and thus incorrectly implement Einstein's notion of local realism. It is a simple mathematical mistake. And it invalidates the bounds of -2 and +2 on the CHSH correlator. The correct bounds follow if we use the correct eigenvalue (1) worked out above. The correct bounds work out to be $-2\sqrt{2}$ and $+2\sqrt{2}$, exactly as those predicted by quantum mechanics. Thus there is no incompatibility between quantum mechanics and local realism.
There is a dishonest attempt by a Bell-believer to deflect from the above calculation. Note to other readers: Don't fall for that deflection. Concentrate on what I have presented above.
***
Joy Christian
Research Physicist
Posts: 2241
Joined: Wed Feb 05, 2014 4:49 am
Location: Oxford, United Kingdom
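For reference on the bounds being debated, the eigenvalues of the standard CHSH (Bell) operator for spin-1/2 measurements can be computed directly. The sketch below uses the textbook angle settings, which are an illustrative choice and not taken from either paper; its extreme eigenvalues come out as ±2√2, the quantum bounds quoted above.

```python
import numpy as np

# Pauli matrices; spin measurement along angle t in the x-z plane
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin(t):
    return np.cos(t) * sz + np.sin(t) * sx

# CHSH settings a = 0, a' = pi/2, b = pi/4, b' = -pi/4
A, Ap = spin(0.0), spin(np.pi / 2)
B, Bp = spin(np.pi / 4), spin(-np.pi / 4)

# Bell-CHSH operator  A x B + A x B' + A' x B - A' x B'
bell = np.kron(A, B) + np.kron(A, Bp) + np.kron(Ap, B) - np.kron(Ap, Bp)

print(np.round(np.linalg.eigvalsh(bell), 6))   # extreme eigenvalues are +/- 2*sqrt(2)
```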
### Re: The Mistakes by Bell and von Neumann are Identical
Joy Christian wrote:[...]
There is a dishonest attempt by a Bell-believer to deflect from the above calculation. Note to other readers: Don't fall for that deflection. Concentrate on what I have presented above.
Could you please be more specific why you think this is dishonest? Did you not write the paper I cited? Does it not describe an experiment where the operators R, S, T, and U do indeed commute?
Heinera
Posts: 714
Joined: Thu Feb 06, 2014 1:50 am
### Re: The Mistakes by Bell and von Neumann are Identical
Heinera wrote:
Joy Christian wrote:[...]
There is a dishonest attempt by a Bell-believer to deflect from the above calculation. Note to other readers: Don't fall for that deflection. Concentrate on what I have presented above.
Could you please be more specific why you think this is dishonest? Did you not write the paper I cited? Does it not describe an experiment where the operators R, S, T, and U do indeed commute?
I do not see any self-adjoint operators on a complex Hilbert space in the description of the experiment proposed in my paper, let alone a specific linear combination like (R + S + T - U) of self-adjoint operators. Therefore, the question of their commutation or non-commutation does not even arise. As you always do, you are simply trying to deflect from the major mistake in Bell's argument I have identified. It is a pathetic attempt to deflect. Moreover, the discussion of my experiment is off-topic for this thread. It has been discussed elsewhere in this forum.
***
Joy Christian
Research Physicist
Posts: 2241
Joined: Wed Feb 05, 2014 4:49 am
Location: Oxford, United Kingdom
### Re: The Mistakes by Bell and von Neumann are Identical
Joy Christian wrote:I do not see any self-adjoint operators on a complex Hilbert space in the description of the experiment proposed in my paper, let alone a specific linear combination like (R + S + T - U) of self-adjoint operators. Therefore, the question of their commutation or non-commutation does not even arise.
***
Do you see any self-adjoint operators on a complex Hilbert space in the proof of Bell's theorem, as laid out in this paper, let alone a specific linear combination like (R + S + T - U) of self-adjoint operators?
https://journals.aps.org/ppf/pdf/10.110 ... zika.1.195
Heinera
Posts: 714
Joined: Thu Feb 06, 2014 1:50 am
### Re: The Mistakes by Bell and von Neumann are Identical
Heinera wrote:
Joy Christian wrote:I do not see any self-adjoint operators on a complex Hilbert space in the description of the experiment proposed in my paper, let alone a specific linear combination like (R + S + T - U) of self-adjoint operators. Therefore, the question of their commutation or non-commutation does not even arise.
***
Do you see any self-adjoint operators on a complex Hilbert space in the proof of Bell's theorem, as laid out in this paper, let alone a specific linear combination like (R + S + T - U) of self-adjoint operators?
https://journals.aps.org/ppf/pdf/10.110 ... zika.1.195
Yes, I do.
The fact that you don't see any says a lot about your knowledge and understanding of Bell's theorem. I bet you have never read Bell's 1966 paper to understand his 1964 paper. Sad, really.
PS: Bell's 1966 paper was written before his 1964 paper. The dates have to do with an editorial mistake. His 1964 paper is effectively just an appendix to his 1966 paper (see last footnote).
***
Joy Christian
Research Physicist
Posts: 2241
Joined: Wed Feb 05, 2014 4:49 am
Location: Oxford, United Kingdom
### Re: The Mistakes by Bell and von Neumann are Identical
Heinera wrote:
Joy Christian wrote:I do not see any self-adjoint operators on a complex Hilbert space in the description of the experiment proposed in my paper, let alone a specific linear combination like (R + S + T - U) of self-adjoint operators. Therefore, the question of their commutation or non-commutation does not even arise.
***
Do you see any self-adjoint operators on a complex Hilbert space in the proof of Bell's theorem, as laid out in this paper, let alone a specific linear combination like (R + S + T - U) of self-adjoint operators?
https://journals.aps.org/ppf/pdf/10.110 ... zika.1.195
Oh for heavens sake, there is no proof of Bell's junk physics theory. The inequalities prove themselves.
.
FrediFizzx
Independent Physics Researcher
Posts: 1781
Joined: Tue Mar 19, 2013 7:12 pm
Location: N. California, USA
### Re: The Mistakes by Bell and von Neumann are Identical
FrediFizzx wrote:
Heinera wrote:
Joy Christian wrote:I do not see any self-adjoint operators on a complex Hilbert space in the description of the experiment proposed in my paper, let alone a specific linear combination like (R + S + T - U) of self-adjoint operators. Therefore, the question of their commutation or non-commutation does not even arise.
***
Do you see any self-adjoint operators on a complex Hilbert space in the proof of Bell's theorem, as laid out in this paper, let alone a specific linear combination like (R + S + T - U) of self-adjoint operators?
https://journals.aps.org/ppf/pdf/10.110 ... zika.1.195
Oh for heavens sake, there is no proof of Bell's junk physics theory. The inequalities prove themselves.
And even the inequalities were proven by George Boole 100 years before Bell.
***
Joy Christian
Research Physicist
Posts: 2241
Joined: Wed Feb 05, 2014 4:49 am
Location: Oxford, United Kingdom
### Re: The Mistakes by Bell and von Neumann are Identical
Joy Christian wrote:And even the inequalities were proven by George Boole 100 years before Bell.
They are utterly elementary. He did not prove them in his book. He gave them in his book as a simple exercise for schoolboys.
Of course, Bell's theorem (as opposed to Bell's inequality) - QM is incompatible with locality+realism+freedom - is really just a simple variation of EPR. You could say, it's obtained by a rotation of Bob's lab 45 degrees in the Q-P plane. Together with the switch to EPR-B ... due to Bohm and Aharonov.
gill1109
Mathematical Statistician
Posts: 1738
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden
### Re: The Mistakes by Bell and von Neumann are Identical
gill1109 wrote:
Of course, Bell's theorem (as opposed to Bell's inequality) - QM is incompatible with locality+realism+freedom - is really just a simple variation of EPR. You could say, it's obtained by a rotating Bob's lab 45 degrees in the P-Q plane.
This is just plain wrong. You keep repeating this mantra because you have no understanding of what Einstein's (or EPR's) notion of local realism actually is. As I have shown in my paper, quantum mechanics is perfectly compatible with locality+realism+freedom (see below). Einstein's notion of local realism has to do with the simultaneous contextual assignment of specific eigenvalues of the respective operators to all observables of a given physical system. But Bell and his followers have botched the implementation of this idea in the proof of Bell's so-called theorem. And when their mistake is pointed out to them, they turn their "theorem" into an article of faith, and repeat the mantra that "QM is incompatible with locality+realism+freedom."
Joy Christian wrote:
I have worked out the correct eigenvalue of the operator (R + S + T - U) relevant for the Bell's implicit assumption [cf. eq. (16) or (29) of this paper: https://arxiv.org/abs/1704.02876].
The correct eigenvalue of the operator (R + S + T - U) is
(1) $\sqrt{(r + s + t - u)^2 + z\;}$,
where $z = l + m + n - o - p - q$, and l, m, n, o, p, and q are the eigenvalues of the operators L = RS - SR, M = RT - TR, N = TS - ST, O = US - SU, P = UR - RU, and Q = UT - TU, respectively.
Now, implementing what they think is the demand of local realism, Bell and his followers assume that the eigenvalue of the operator (R + S + T - U) is (r + s + t - u). But that is true if and only if the operators R, S, T, and U commute with each other. This is easy to see from the above eq. (1). When R, S, T, and U all commute with each other, then z = 0 and the eigenvalue reduces to (r + s + t - u). But in the Bell-test experiments the operators R, S, T, and U do not commute with each other because they correspond to different detections made at mutually exclusive measurement directions. So Bell and his followers assume a wrong eigenvalue of the operator (R + S + T - U) and thus incorrectly implement Einstein's notion of local realism. It is a simple mathematical mistake. And it invalidates the bounds of -2 and +2 on the CHSH correlator. The correct bounds follow if we use the correct eigenvalue (1) worked out above. The correct bounds work out to be $-2\sqrt{2}$ and $+2\sqrt{2}$, exactly as those predicted by quantum mechanics. Thus there is no incompatibility between quantum mechanics and local realism.
***
Joy Christian
Research Physicist
Posts: 2241
Joined: Wed Feb 05, 2014 4:49 am
Location: Oxford, United Kingdom
### Re: The Mistakes by Bell and von Neumann are Identical
Joy Christian wrote:Einstein's notion of local realism has to do with the simultaneous contextual assignment of specific eigenvalues of the respective operators to all observables of a given physical system.
Well, I could agree with one sentence in your post. The two named topics certainly do have quite a lot to do with one another. John Conway and Simon Kochen, Eric Cator and Klaas Landsman, and many other notable mathematicians would also agree and have proved very interesting theorems on the relationship.
gill1109
Mathematical Statistician
Posts: 1738
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden
### Re: The Mistakes by Bell and von Neumann are Identical
gill1109 wrote:
Joy Christian wrote:
Einstein's notion of local realism has to do with the simultaneous contextual assignment of specific eigenvalues of the respective operators to all observables of a given physical system.
Well, I could agree with one sentence in your post.
If you agree with the definition of Einstein's notion of local realism I have stated above in my sentence, then the conclusion of my paper is inescapable. The abstract of my paper reads:
***
Joy Christian
Research Physicist
Posts: 2241
Joined: Wed Feb 05, 2014 4:49 am
Location: Oxford, United Kingdom
### Re: The Mistakes by Bell and von Neumann are Identical
Joy Christian wrote:If you agree with the definition of Einstein's notion of local realism I have stated above in my sentence ...
What you stated was not a definition and anyway, nothing like Einstein's notion. BTW, EPR carefully avoided *defining* local realism. They just gave a sufficient condition for something to be considered an element of reality. Then, in your paper, you write
within quantum theory, using normalized state vectors $|\psi_i\rangle$, his [Bell's] assumption is: Given the observables R, S, T, and U, there exists an observable
a R + b S + c T + d U such that...
But Bell was at this point in his paper explicitly *not* working within quantum theory. So to say that this is Bell's assumption is grossly unfair to the person you seem to be accusing of having perverted the course of science.
gill1109
Mathematical Statistician
Posts: 1738
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden
### Re: The Mistakes by Bell and von Neumann are Identical
gill1109 wrote:
But Bell was at this point in his paper explicitly *not* working within quantum theory.
No, he was not. But he was certainly working within von Neumann's framework of hidden variable theories. His 1964 paper is an "appendix" to his 1966 paper on hidden variable theories. See the last footnote of Bell's 1966 paper. Thus, he was very much concerned about reproducing quantum mechanical expectation values as an average of specific eigenvalues of quantum mechanical operators. But he blundered in doing so, as I explain in my paper: https://arxiv.org/abs/1704.02876.
***
Joy Christian
Research Physicist
Posts: 2241
Joined: Wed Feb 05, 2014 4:49 am
Location: Oxford, United Kingdom
### Re: The Mistakes by Bell and von Neumann are Identical
gill1109 wrote:
Joy Christian wrote:
Einstein's notion of local realism has to do with the simultaneous contextual assignment of specific eigenvalues of the respective operators to all observables of a given physical system.
Well, I could agree with one sentence in your post.
Good. So you agree with my sentence. You agree with the fact that "Einstein's notion of local realism has to do with the simultaneous contextual assignment of specific eigenvalues of the respective operators to all observables of a given physical system." It is then inescapable that Bell's assumption, eq. (16) of my paper, is wrong. Thus the rest of Bell's argument is wrong too.
***
Joy Christian
Research Physicist
Posts: 2241
Joined: Wed Feb 05, 2014 4:49 am
Location: Oxford, United Kingdom
### Re: The Mistakes by Bell and von Neumann are Identical
***
John S. Bell was singularly well-equipped to spot the mistake in his theorem but failed to do so. That is the vital lesson one can learn from my paper: https://arxiv.org/abs/1704.02876.
***
Joy Christian
Research Physicist
Posts: 2241
Joined: Wed Feb 05, 2014 4:49 am
Location: Oxford, United Kingdom
### Re: The Mistakes by Bell and von Neumann are Identical
Joy Christian wrote:
gill1109 wrote:
Joy Christian wrote:
Einstein's notion of local realism has to do with the simultaneous contextual assignment of specific eigenvalues of the respective operators to all observables of a given physical system.
Well, I could agree with one sentence in your post.
Good. So you agree with my sentence. You agree with the fact that "Einstein's notion of local realism has to do with the simultaneous contextual assignment of specific eigenvalues of the respective operators to all observables of a given physical system." It is then inescapable that Bell's assumption, eq. (16) of my paper, is wrong. Thus the rest of Bell's argument is wrong too.
***
I disagree with your deduction. You say “it is then inescapable...”. I think that your physical intuition is brilliant. I cannot follow your “logic”. But then I’m merely a statistician
By the way: to follow the lead of Jarek Duda in another internet discussion group: is anyone interested in a group video meeting? Of course, time zones are a big issue to truly international internet communities. And who knows how long internet will keep working.
gill1109
Mathematical Statistician
Posts: 1738
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden
### Re: The Mistakes by Bell and von Neumann are Identical
gill1109 wrote:
Joy Christian wrote:
gill1109 wrote:
Joy Christian wrote:
Einstein's notion of local realism has to do with the simultaneous contextual assignment of specific eigenvalues of the respective operators to all observables of a given physical system.
Well, I could agree with one sentence in your post.
Good. So you agree with my sentence. You agree with the fact that "Einstein's notion of local realism has to do with the simultaneous contextual assignment of specific eigenvalues of the respective operators to all observables of a given physical system." It is then inescapable that Bell's assumption, eq. (16) of my paper, is wrong. Thus the rest of Bell's argument is wrong too.
***
I disagree with your deduction. You say “it is then inescapable...”. I think that your physical intuition is brilliant. I cannot follow your “logic”. But then I’m merely a statistician
Ok. I am happy to spell out my logic. Or better still, spell out the logical fallacy in Bell's argument. It is not even a deep fallacy. Simply put, Bell's argument is circular. He assumes what he wants to prove. In his proof, Bell wants to conclude that the bounds on the CHSH correlator are -2 and +2. But in doing so he has smuggled-in that conclusion in one of his unacknowledged assumptions, namely, in his assumption I have written as eq. (16) of my paper. Even though mathematically my eq. (16) is a trivial identity, physically it is an unjustifiable assumption, as Bell himself has pointed out in the context of von Neumann's theorem against hidden variable theories. Thus Bell has fallen for the same circular argument he had ridiculed von Neumann for using. In other words, in the "proof" of his "theorem", he has assumed what he wants to prove. As I said, it is not even a deep fallacy.
***
Joy Christian
Research Physicist
Posts: 2241
Joined: Wed Feb 05, 2014 4:49 am
Location: Oxford, United Kingdom
# Ridge Coefficients as a Function of the L2 Regularization in Scikit-learn
Ridge Regression is the estimator used in this example. Each color in the left plot represents a different dimension of the coefficient vector, and this is displayed as a function of the regularization parameter. The right plot shows how exact the solution is. This example illustrates how a well-defined solution is found by Ridge regression and how regularization affects the coefficients and their values. The plot on the right shows how the difference of the coefficients from the estimator changes as a function of regularization.
In this example the dependent variable Y is set as a function of the input features: y = X*w + c. The coefficient vector w is randomly sampled from a normal distribution, whereas the bias term c is set to a constant. As alpha tends toward zero the coefficients found by Ridge regression stabilize towards the randomly sampled vector w. For big alpha (strong regularisation) the coefficients are smaller (eventually converging at 0) leading to a simpler and biased solution. These dependencies can be observed on the left plot.
The right plot shows the mean squared error between the coefficients found by the model and the chosen vector w. Less regularised models retrieve the exact coefficients (error is equal to 0), stronger regularised models increase the error.
Please note that in this example the data is non-noisy, hence it is possible to extract the exact coefficients.
### Version
In [1]:
import sklearn
sklearn.__version__
Out[1]:
'0.18.1'
### Imports
This tutorial imports make_regression, Ridge and mean_squared_error.
In [2]:
import plotly.plotly as py
import plotly.graph_objs as go
from plotly import tools
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
### Calculations
In [3]:
clf = Ridge()
X, y, w = make_regression(n_samples=10, n_features=10, coef=True,
                          random_state=1, bias=3.5)
coefs = []
errors = []
alphas = np.logspace(-6, 6, 200)

# Train the model with different regularisation strengths
for a in alphas:
    clf.set_params(alpha=a)
    clf.fit(X, y)
    coefs.append(clf.coef_)
    errors.append(mean_squared_error(clf.coef_, w))
### Plot Results
In [4]:
fig = tools.make_subplots(rows=1, cols=2,
                          print_grid=False,
                          subplot_titles=('Ridge coefficients as a function of the regularization',
                                          'Coefficient error as a function of the regularization')
                          )
Ridge coefficients as a function of the regularization
In [5]:
y_ = []
for col in range(0, len(coefs[0])):
    y_.append([])
    for row in range(0, len(coefs)):
        y_[col].append(coefs[row][col])

for i in range(0, len(y_)):
    trace = go.Scatter(x=alphas, y=y_[i],
                       showlegend=False)
    fig.append_trace(trace, 1, 1)

fig['layout']['xaxis1'].update(title='alpha', type='log')
fig['layout']['yaxis1'].update(title='weights')
Coefficient error as a function of the regularization
In [6]:
trace = go.Scatter(x=alphas, y=errors,
                   showlegend=False)
fig.append_trace(trace, 1, 2)
fig['layout']['xaxis2'].update(title='alpha', type='log')
fig['layout']['yaxis2'].update(title='errors')
In [7]:
py.iplot(fig)
Out[7]:
Author:
Kornel Kielczewski -- <[email protected]>
https://www.physicsforums.com/threads/am-i-approach-this-question-right-finding-acceleration.225604/ | # Am I approach this question right? finding acceleration
1. Mar 31, 2008
### viet_jon
Am I approach this question right?......finding acceleration
1. The problem statement, all variables and given/known data
A tractor applies a force of 1.3 kN to the sled, which has a mass of 1.1x10^4 kg. At that point, the coefficient of kinetic friction between the sled and the ground has increased to 0.8. What is the acceleration of the sled?
2. Relevant equations
3. The attempt at a solution
Given
M = 11,000kg
Applied Force = 1300 N, let it be Fa
Kinetic Friction co-efficient = 0.8 , let it be uK
Legend
let Fa = applied force
let Ff = force of friction
m = mass
a = acceleration
Finding force of friction
Ff = uk*Fn
= (0.8)* (11000 kg x 9.81m/s^2)
= 86 328 N
Calculating Acceleration
F=ma
Fa + Ff = ma
a = ( Fa + Ff ) / m
= ( 1300 N + 86 328 N) / 11 000 kg
= 7.9 m/s^2
I tried to do this question literally 10 times now. I keep getting this answer, but I know it's wrong. Maybe my approach isn't correct, or I am missing a negative sign somewhere. Please help.
2. Mar 31, 2008
### viet_jon
so it should be
Fa - Ff = ma
?
3. Mar 31, 2008
### YellowTaxi
yes
Applied force(Fa) - frictional force(Ff) = mass times acceleration
but don't confuse the 'a' in 'Fa' with acceleration
- it's just being used to denote 'applied'; the symbols used are a little confusing and they should have made more effort to avoid that.
4. Mar 31, 2008
### viet_jon
Fa - Ff = ma
1300 N - 86328 N = m * a
therefore acceleration = (1300N - 86328N) / 11 000 kg
= -7.729 m/s^2
but in the book, the answer is 0.61 m/s^2
5. Mar 31, 2008
### viet_jon
anybody?
this question is really bothering me............ it's intro physics, and I'm struggling already.
6. Mar 31, 2008
### dranseth
Perhaps the book is wrong, because I obtained the same answer without reading the thread until now.
7. Mar 31, 2008
### viet_jon
lol....wow
I've been beating myself up all day over this question.
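For anyone double-checking the arithmetic, here is a quick sketch (not from the original posts) of the net-force calculation the thread converges on; it assumes a level surface, so the normal force is m*g:

m, g = 11000.0, 9.81         # kg, m/s^2
F_applied = 1300.0           # N
mu_k = 0.8

F_friction = mu_k * m * g              # kinetic friction from Ff = uk * m * g
a = (F_applied - F_friction) / m       # Fa - Ff = m * a
print(F_friction)                      # ~86,328 N
print(a)                               # ~ -7.73 m/s^2, i.e. the sled decelerates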
https://stats.stackexchange.com/questions/198256/understanding-the-constraints-of-svms-in-the-non-separable-case | # Understanding the constraints of SVMs in the non-separable case?
In Pattern Recognition and Machine Learning Section 7.1:
Based on what I have understood so far, the slack variable $\xi$ is defined as $\max(0, 1-t_ny(x_n))$ and is associated with the hinge loss.
However, it seems to me that the two constraints $t_ny(x_n)\geq1-\xi_n$ and $\xi_n\geq0$ are just two properties that follow from how $\xi$ is defined, and without them it is still a valid optimization problem to solve (hinge loss + regularizer).
Why do we want to use them as the constraints again?
Or is the slack variable not explicitly defined as $\max(0, 1-t_ny(x_n))$, but only defined by the constraints?
Please correct me where I'm wrong.
So I have found in another book that, introducing the slack variable
$\min_{w,b,\xi} \frac{1}{2}||w||^2+C\sum^N_{i=1}\xi_i$
s.t.
$t_iy(x_i)\geq1-\xi_i$ and $\xi_i\geq0$
is essentially a rewrite of the hinge+regularizer loss,
$\min_{w,b} \frac{1}{2}||w||^2+C\sum^N_{i=1}\max(0, 1-t_iy(x_i))$.
So the slack variable $\xi$ is implicitly defined as $\max(0, 1-t_iy(x_i))$ by the constraints.
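One way to make the equivalence explicit (my own summary of the argument, not a quote from either book): for fixed $w$ and $b$ the objective $\frac{1}{2}||w||^2 + C\sum_i \xi_i$ is increasing in each $\xi_i$, so at the optimum every slack is pushed down to the smallest value the two constraints allow,

$$\xi_i^\star = \max\big(0,\; 1 - t_i y(x_i)\big),$$

and substituting $\xi_i^\star$ back into the objective recovers the hinge-loss form. So the constraints alone define the slack variables; it is optimality that forces them onto the hinge loss.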
http://www.global-sci.org/intro/article_detail/cicp/11068.html | Volume 18, Issue 5
High Order Numerical Simulation of Detonation Wave Propagation Through Complex Obstacles with the Inverse Lax-Wendroff Treatment
Commun. Comput. Phys., 18 (2015), pp. 1264-1281.
Published online: 2018-04
• Abstract
The high order inverse Lax-Wendroff (ILW) procedure is extended to boundary treatment involving complex geometries on a Cartesian mesh. Our method ensures that the numerical resolution at the vicinity of the boundary and the inner domain keeps the fifth order accuracy for the system of the reactive Euler equations with the two-step reaction model. Shock wave propagation in a tube with an array of rectangular grooves is first numerically simulated by combining a fifth order weighted essentially non-oscillatory (WENO) scheme and the ILW boundary treatment. Compared with the experimental results, the ILW treatment accurately captures the evolution of shock wave during the interactions of the shock waves with the complex obstacles. Excellent agreement between our numerical results and the experimental ones further demonstrates the reliability and accuracy of the ILW treatment. Compared with the immersed boundary method (IBM), it is clear that the influence on pressure peaks in the reflected zone is obviously bigger than that in the diffracted zone. Furthermore, we also simulate the propagation process of detonation wave in a tube with three different widths of wall-mounted rectangular obstacles located on the lower wall. It is shown that the shock pressure along a horizontal line near the rectangular obstacles gradually decreases, and the detonation cellular size becomes large and irregular with the decrease of the obstacle width.
Cheng Wang, Jianxu Ding, Sirui Tan & Wenhu Han. (2020). High Order Numerical Simulation of Detonation Wave Propagation Through Complex Obstacles with the Inverse Lax-Wendroff Treatment. Communications in Computational Physics. 18 (5). 1264-1281. doi:10.4208/cicp.160115.150915a
https://maker.pro/forums/threads/magnetizing-energy-in-flyback-converter.73804/ | Login Join Maker Pro
Or sign in with
# magnetizing energy in Flyback converter
#### webber
I have a question about how the magnetizing energy changes during the transition between the turn-off of the MOSFET and the turn-on of the output rectifier.
When the power MOS is turned on, the input energy is stored in the magnetizing inductance, and the current in the inductor increases linearly. After the MOS is turned off, the inductor current starts to charge the output capacitance of the MOS (Coss), and the Vds voltage starts to increase. Once Vds is high enough for the secondary rectifier to become forward biased, energy starts to transfer to the secondary. What keeps happening on the primary side is that the energy in the leakage inductor continues to transfer to the Coss, and Vds keeps rising until the primary snubber diode turns on.
The above is my understanding of the flyback converter.
Based on this understanding, the difference between the energy stored in the magnetizing inductor and the energy left in this inductor, over the period between the MOS turning off and the secondary rectifier turning on, is what charges the Coss to the point where the secondary rectifier becomes forward biased.
But my measured data is quite different.
The energy difference on the inductor (5.7W) during this small period is much larger than the energy transferred to the Coss (0.36W).
Can anyone tell me what's wrong?
Thanks a lot.
Webber.
#### John Popelish
webber said:
I have a question about the magnetizing energy changing during the
transition between the turn off of MOSFET and the turn on of the
output rectifier.
When the power MOS is turned on, the input energy is stored into the
magnetizing inductance, and the current on the inductor increases
linearly. After the MOS is turned off, the inductor current starts to
charge the output capacitance of the MOS (Coss), and the Vds voltage
starts to increase.
The energy stored in the inductor charges all primary and
secondary capacitances. These include the inter-winding
capacitances of the coils, as well as the mosfet and diode
capacitances. The MOSFET output capacitance may start out
as the largest component of the total, but it falls as
voltage rises, so at peak voltage, it may no longer be dominant.
Once the voltage on the Vds is high enough to let
the secondary rectifier becomes forward biased, the energy starts to
transfer to the secondary. And, thing keep happens on the primary side
is that the energy on the leakage inductor continuous to transfer to
the Coss and the Vds will increase to high level until the primary
snubber diode is on.
The above is my understanding of the flyback converter.
Based on this understanding, the difference between the energy stored
in the magnetizing inductor and the left energy in this inductor
during the period after the MOS is off and the secondary rectifier is
on, is used to charge the Coss to the point to let the secondary
rectifier becomes forward biased.
But, my measured data is quite different.
The energy difference on the inductor (5.7W) during this small period
is much larger than the energy transfer to the Coss (0.36W).
Can anyone tell me what's wrong ??
Be careful with your units: watts are units of power, joules
are units of energy. I think you need to find out what the
other capacitances in the circuit are. And the nonlinearity
of Coss makes it a bit rough to come up with a precise
energy stored at any given voltage rise.
#### Andrew Edge
I have a question about the magnetizing energy changing during the
transition between the turn off of MOSFET and the turn on of the
output rectifier.
When the power MOS is turned on, the input energy is stored into the
magnetizing inductance, and the current on the inductor increases
linearly. After the MOS is turned off, the inductor current starts to
charge the output capacitance of the MOS (Coss), and the Vds voltage
starts to increase. Once the voltage on the Vds is high enough to let
the secondary rectifier becomes forward biased, the energy starts to
transfer to the secondary. And, thing keep happens on the primary side
is that the energy on the leakage inductor continuous to transfer to
the Coss and the Vds will increase to high level until the primary
snubber diode is on.
The above is my understanding of the flyback converter.
Based on this understanding, the difference between the energy stored
in the magnetizing inductor and the left energy in this inductor
during the period after the MOS is off and the secondary rectifier is
on, is used to charge the Coss to the point to let the secondary
rectifier becomes forward biased.
The secondary rectifier becomes forward biased because the MOS is off
which reverses all the voltages on the transformer windings. It is the
reversed voltage that forward biases the diode. Of course the
secondary voltage has to be higher than the value on the output
capacitor for the diode to be forward biased.
But, my measured data is quite different.
The energy difference on the inductor (5.7W) during this small period
is much larger than the energy transfer to the Coss (0.36W).
Can anyone tell me what's wrong ??
Tell us how did you measure those values .
Thanks a lot.
Webber.
Andy
#### legg
I have a question about the magnetizing energy changing during the
transition between the turn off of MOSFET and the turn on of the
output rectifier.
When the power MOS is turned on, the input energy is stored into the
magnetizing inductance, and the current on the inductor increases
linearly. After the MOS is turned off, the inductor current starts to
charge the output capacitance of the MOS (Coss), and the Vds voltage
starts to increase. Once the voltage on the Vds is high enough to let
the secondary rectifier becomes forward biased, the energy starts to
transfer to the secondary. And, thing keep happens on the primary side
is that the energy on the leakage inductor continuous to transfer to
the Coss and the Vds will increase to high level until the primary
snubber diode is on.
The above is my understanding of the flyback converter.
Based on this understanding, the difference between the energy stored
in the magnetizing inductor and the left energy in this inductor
during the period after the MOS is off and the secondary rectifier is
on, is used to charge the Coss to the point to let the secondary
rectifier becomes forward biased.
But, my measured data is quite different.
The energy difference on the inductor (5.7W) during this small period
is much larger than the energy transfer to the Coss (0.36W).
Can anyone tell me what's wrong ??
As stated elsewhere, there are other capacitances. Note also that the
voltage rises as the mosfet current falls. If the transition period is
long, the fet will be absorbing energy during that time.
The more common complaint is that leakage energy, that is not
transferable has significant effects in clamp overshoot and
dissipation. They're both LI^2 /2, in joules.
RL
#### Genome
webber said:
I have a question about the magnetizing energy changing during the
transition between the turn off of MOSFET and the turn on of the
output rectifier.
When the power MOS is turned on, the input energy is stored into the
magnetizing inductance, and the current on the inductor increases
linearly. After the MOS is turned off, the inductor current starts to
charge the output capacitance of the MOS (Coss), and the Vds voltage
starts to increase. Once the voltage on the Vds is high enough to let
the secondary rectifier becomes forward biased, the energy starts to
transfer to the secondary. And, thing keep happens on the primary side
is that the energy on the leakage inductor continuous to transfer to
the Coss and the Vds will increase to high level until the primary
snubber diode is on.
The above is my understanding of the flyback converter.
Based on this understanding, the difference between the energy stored
in the magnetizing inductor and the left energy in this inductor
during the period after the MOS is off and the secondary rectifier is
on, is used to charge the Coss to the point to let the secondary
rectifier becomes forward biased.
But, my measured data is quite different.
The energy difference on the inductor (5.7W) during this small period
is much larger than the energy transfer to the Coss (0.36W).
Can anyone tell me what's wrong ??
Thanks a lot.
Webber.
I'm not certain about your explanation of what you are missing in terms of
where the extra (loss) is from. However Mr Legg mentions energy that is
stored in the leakage inductance, as LleakIpk^2/2, which ends up being lost
to the clamp (snubber).
It is really a clamp because it limits the overvoltage from leakage
inductance during flyback.
There is more to it than that though and it's a bit subtle.
As the leakage inductance is being reset it is sitting on top of the
reflected secondary voltage. Not only do you get the energy from
LleakIpk^2/2 dumped into the snubber energy is also removed from that
reflected voltage.
Let's say you have a 100uH primary with 10uH leakage inductance and your
peak primary current is 5 amps. The energy stored in the leakage inductance
is 125uJ. If your supply is operating at 100KHz then the power in the clamp
is 12.5W.
BUT.... If the reflected secondary voltage is 200V and your clamp voltage is
300V then the leakage inductance is being reset through 100V. That takes
500 ns (t = L*dI/V). The current waveform is triangular and you can work out the
associated charge from its area as being 1.25uC.
With your supply operating at 100KHz the average current recovered from the
reflected secondary voltage is that charge multiplied by the frequency or
125mA. Multiply that by the reflected 200V and you get the additional power
in the clamp as 25W......... !!!!!!!!!!
So the actual loss that the clamp has to deal with is not 12.5W it is 37.5W.
If you don't know about it then you can spend a lot of time scratching your
head wondering why the resistor in your clamp/snubber is smoking. If you do
then you can design for it.
Having minimised leakage inductance the next thing you should do is maximise
the clamp voltage. That reduces the time taken to reset the leakage
inductance and the energy lost.
DNA
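To make that bookkeeping easy to replay, here is a minimal sketch using the example numbers in the post above (10 uH leakage, 5 A peak, 100 kHz, 200 V reflected, 300 V clamp); the variable names are mine:

L_leak  = 10e-6      # H, leakage inductance
I_pk    = 5.0        # A, peak primary current
f_sw    = 100e3      # Hz, switching frequency
V_refl  = 200.0      # V, reflected secondary voltage
V_clamp = 300.0      # V, clamp voltage

P_leak  = 0.5 * L_leak * I_pk**2 * f_sw          # 12.5 W from L*I^2/2 every cycle
t_reset = L_leak * I_pk / (V_clamp - V_refl)     # 500 ns to reset the leakage inductance
Q       = 0.5 * I_pk * t_reset                   # 1.25 uC, area of the triangular current
P_refl  = Q * f_sw * V_refl                      # 25 W pulled from the reflected voltage
print(P_leak, P_refl, P_leak + P_refl)           # 12.5, 25.0, 37.5 W total into the clamp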
#### webber
The secondary rectifier becomes forward biased because the MOS is off
which reverses all the voltages on the transformer windings. It is the
reversed voltage that forward biases the diode. Of course the
secondary voltage has to be higher then the value on the output
capacitor for the diode to be forward biased.
Tell us how did you measure those values .
Andy
Thanks for all your valued input.
I measure the primary current by a current probe between the Source of
the MOSFET and the series current sensing resistor.
When the Vds starts to increase, the primary current is 1.06A. And it
reduces to 0.83A when Vds reaches 302V to let the secondary rectifier
start conducting current.
The magnetizing inductance is 600uH and the leakage inductance is 20uH
(both measured by LCR meter). And the operating freq. is 60KHz.
Therefore, the power difference during this period is:
1/2 * (600+20)*10^(-6)*(1.06^2-0.83^2)*60*10^3=8.09W---(1)
The Coss of MOSFET is 135pF, which is measured at VDS=25V and
freq=1MHz. Can anyone tell me how to transfer this data to meet the
condition that my circuit actually works?
If I still use this 135pF to calculate, the power would be:
1/2*135*10^(-12)*302^2*60*10^3=0.37W---(2)
And the conduction loss during this period (around 400nsec):
((1.06 + 0.83)/2) * (302/2) * 40*10^(-9)*60*10^3=0.34W---(3)
So, where is the difference between (1) - ( (2) + (3) ) = 7.38W
Thanks!!
Webber.
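For reference, here is a small sketch (mine, not webber's) that reproduces the three terms above as posted; it takes the 40 ns figure actually used in term (3) at face value:

L_m, L_leak = 600e-6, 20e-6      # H, magnetizing and leakage inductance
I1, I2      = 1.06, 0.83         # A, at the start and end of the Vds rise
f_sw        = 60e3               # Hz
C_oss       = 135e-12            # F, datasheet value at 25 V
V_ds        = 302.0              # V
t_tr        = 40e-9              # s, transition time used in term (3)

P1 = 0.5 * (L_m + L_leak) * (I1**2 - I2**2) * f_sw     # ~8.09 W given up by the inductor
P2 = 0.5 * C_oss * V_ds**2 * f_sw                      # ~0.37 W into Coss
P3 = ((I1 + I2) / 2) * (V_ds / 2) * t_tr * f_sw        # ~0.34 W conducted during the ramp
print(P1, P2, P3, P1 - (P2 + P3))                      # the ~7.4 W gap being asked about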
#### Andrew Edge
Thanks for all your valued input.
I measure the primary current by a current probe between the Source of
the MOSFET and the series current sensing resistor.
When the Vds starts to increase, the primary current is 1.06A. And it
reduces to 0.83A when Vds reaches 302V to let the secondary rectifier
start conducting current.
The magnetizing inductance is 600uH and the leakage inductance is 20uH
(both measured by LCR meter). And the operating freq. is 60KHz.
Therefore, the power difference during this period is:
1/2 * (600+20)*10^(-6)*(1.06^2-0.83^2)*60*10^3=8.09W---(1)
The Coss of MOSFET is 135pF, which is measured at VDS=25V and
freq=1MHz. Can anyone tell me how to transfer this data to meet the
condition that my circuit actually works?
If I still use this 135pF to calculate, the power would be:
1/2*135*10^(-12)*302^2*60*10^3=0.37W---(2)
And the conduction loss during this period (around 400nsec):
((1.06 + 0.83)/2) * (302/2)* 40*10^(-9)*60*10*3=0.34W-(3)
So, where is the difference between (1) - ( (2) + (3) ) = 7.38W
Thanks!!
Webber.
I assume you have no load on; otherwise you'd be losing it there.
You should keep in account your measured values don't take into
account the spikes in currents and voltages which add considerably to
Power losses.
Just taking a rapid look at your calculations. I would think
subtracting 2*1.06*0.83W in the power expression for primary
inductance loss would be more accurate as energy remains stored in the
primary inductances, due to the fact that the current does not drop to
zero during the transition period.
What is the gate drive impedance on the MOS? If too high you may have
a bounce on-off effect during transitions.
Most losses, generally more than half, are in the diode, so check on
that too.
Core losses in the transformer windings, capacitor, error control
and switching circuitry could account for the difference.
Andy
#### Winfield Hill
webber said:
Thanks for all your valued input.
I measure the primary current by a current probe between the Source of
the MOSFET and the series current sensing resistor.
When the Vds starts to increase, the primary current is 1.06A. And it
reduces to 0.83A when Vds reaches 302V to let the secondary rectifier
start conducting current.
The magnetizing inductance is 600uH and the leakage inductance is 20uH
(both measured by LCR meter). And the operating freq. is 60KHz.
Therefore, the power difference during this period is:
1/2 * (600+20)*10^(-6)*(1.06^2-0.83^2)*60*10^3=8.09W---(1)
The Coss of MOSFET is 135pF, which is measured at VDS=25V and
freq=1MHz. Can anyone tell me how to transfer this data to meet the
condition that my circuit actually works?
If I still use this 135pF to calculate, the power would be:
1/2*135*10^(-12)*302^2*60*10^3=0.37W---(2)
And the conduction loss during this period (around 400nsec):
((1.06 + 0.83)/2) * (302/2)* 40*10^(-9)*60*10*3=0.34W-(3)
So, where is the difference between (1) - ( (2) + (3) ) = 7.38W
Thanks!!
Webber.
What MOSFET are you using?
I haven't followed the convoluted arguments being put forth in
this thread, but I can say I'm uncomfortable with the various
assumptions being made to allow cranking through some perhaps
oversimplified formulas. Completely missing circuit elements
(e.g., core losses, winding capacitances, and continuing
MOSFET drain-source conduction after supposed "turnoff"), is
one serious issue. Misuse of "known" parameters is another.
For example, power MOSFET capacitances are certainly not fixed
values, as John has pointed out, but instead vary by 10 to 40x
over the full operating range. The datasheet specs and plots
are nice to have, but I've found substantial variation in bench
measurements of actual parts against the datasheet values,
which can exceed 3x under some circumstances. I'm convinced
that datasheet plots are sometimes either oversimplifications
intended to convey a concept rather reality, or figments of a
draftsman's mind. For example, actual MOSFETs often show a
dramatic change in capacitance, 2x to even 5x over a few volts,
somewhere in the 7 to 12V region. This effect is completely
absent from the manufacturer's plots for the same part. My
favorite RCA engineering MOSFET model (a low-voltage MOSFET
in cascode with a high-voltage depletion-mode JFET) accurately
handles this dramatic condition, but that model does not lend
itself well to the classic parameters we see on the datasheet.
My suggestion is that you take a suite of bench measurements
on your components before attempting to accurately model and
calculate the power losses. You'll need specialized test jigs
to accurately measure some of the more difficult things, like
delayed incomplete MOSFET turnoff as a function of say gate
drive, drain current and time.
Good luck. Let us know what you learn.
#### legg
I'm not certain about your explanation of what you are missing in terms of
where the extra (loss) is from. However Mr Legg mentions energy that is
stored in the leakage inductance, as LleakIpk^2/2, which ends up being lost
to the clamp (snubber).
It is really a clamp because it limits the overvoltage from leakage
inductance during flyback.
There is more to it than that though and it's a bit subtle.
As the leakage inductance is being reset it is sitting on top of the
reflected secondary voltage. Not only do you get the energy from
LleakIpk^2/2 dumped into the snubber energy is also removed from that
reflected voltage.
Let's say you have a 100uH primary with 10uH leakage inductance and your
peak primary current is 5 amps. The energy stored in the leakage inductance
is 125uJ. If your supply is operating at 100KHz then the power in the clamp
is 12.5W.
BUT.... If the reflected secondary voltage is 200V and your clamp voltage is
300V then the leakage inductance is being reset through 100V. That takes
500nS, T=dIL/V. The current waveform is triangular and you can work out the
associated charge from its area as being 1.25uC.
With your supply operating at 100KHz the average current recovered from the
reflected secondary voltage is that charge multiplied by the frequency or
125mA. Multiply that by the reflected 200V and you get the additional power
in the clamp as 25W......... !!!!!!!!!!
So the actual loss that the clamp has to deal with is not 12.5W it is 37.5W.
If you don't know about it then you can spend a lot of time scratching your
head wondering why the resistor in your clamp/snubber is smoking. If you do
then you can design for it.
Having minimised leakage inductance the next thing you should do is maximise
the clamp voltage. That reduces the time taken to reset the leakage
inductance and the energy lost.
'reset'ing the leakage inductance is an interesting use of terms.
The dI/dt in the leakage inductance is seen by the clamp rectifier.
The higher the dI/dt in this part, the higher the peak reverse current
spike in this rectifier before it turns off. As the total reverse
charge increases with both peak forward current and reversing dI/dT,
you are working with some interesting relationships which can result
in more reverse recovery charge than initial forward charge transfer -
effectively 'regulating' the clamp voltage without appreciable
dissipation in the clamp bleeder.
This is possibly why Power Integrations specifies an ancient and only
moderately fast rectifier in this position, in most of their
application circuits. Crunching the numbers using their recommended
bleeder resistor values reveals quite low power levels being
anticipated for dissipation, despite calculably higher potential
losses when the 'ideal' parasite leakage term gets to work.
If you ignore the noise consequences, and the intentional stress of
the appreciably higher clamp voltages (ameliorated somewhat by PI's
intentional use of IET continuous current topologies), it's an
interesting and cheap technique - but only, in my opinion, if you know
you're doing it. Otherwise it's just plain fool's luck.
RL
#### legg
Thanks for all your valued input.
I measure the primary current by a current probe between the Source of
the MOSFET and the series current sensing resistor.
When the Vds starts to increase, the primary current is 1.06A. And it
reduces to 0.83A when Vds reaches 302V to let the secondary rectifier
start conducting current.
The magnetizing inductance is 600uH and the leakage inductance is 20uH
(both measured by LCR meter). And the operating freq. is 60KHz.
Therefore, the power difference during this period is:
1/2 * (600+20)*10^(-6)*(1.06^2-0.83^2)*60*10^3=8.09W---(1)
The Coss of MOSFET is 135pF, which is measured at VDS=25V and
freq=1MHz. Can anyone tell me how to transfer this data to meet the
condition that my circuit actually works?
Sir,
Your circuit will either work, or not, in spite of the accuracy of
your calculations - not because of it.
RL
#### Terry Given
legg said:
Sir,
Your circuit will either work, or not, in spite of the accuracy of
your calculations - not because of it.
RL
LOL!
well put.
Then suitable measurements can tell you what's really happening, and can
be used to verify (or falsify) your calculations.
Cheers
Terry
http://mathhelpforum.com/differential-equations/186209-finding-particular-solution-differential-equations.html | # Math Help - Finding particular solution from differential equations
1. ## Finding particular solution from differential equations
I am stuck on how to solve the following; I don't know whether my arithmetic is wrong or what I am doing wrong. So could someone please show each step.
Find the particular solution to
a) dh/dt=2h(3-h) where h=1 when t=0
b) dy/dx= e^y +1/e^y where y=0 when x=0.
2. ## Re: Finding particular solution from differential equations
First of all try to find the general solution of the DE (it's a separable one), for example the first one:
$\frac{dh}{dt}=2h(3-h)$
Calculating general solution, first rewrite the DE:
$\frac{dh}{2h(3-h)}=dt$
Take the integral of both sides:
$\int \frac{dh}{2h(3-h)}=\int dt$
If you split the integrand in partial fractions (in LHS) you'll get:
$\frac{1}{6}\int \frac{dh}{h} - \frac{1}{6}\int \frac{dh}{h-3}=t+C$
$=\frac{1}{6}\ln|h|-\frac{1}{6}\ln|h-3|=t+C$
$\Leftrightarrow \ln\left|\frac{h}{h-3}\right|=6t+6C$
$\Leftrightarrow e^{6t+6C}=\left|\frac{h}{h-3}\right|$
Now try to continue and try to get an expression $h(t)=...$ and afterwards substitute $h=1,t=0$ to find $C$.
(Note: $\ln$ is the natural logarithm)
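As a quick cross-check of part (a) (an addition, not part of the original reply), SymPy can produce the particular solution directly, assuming the initial condition is meant as $h(0)=1$ (recent SymPy versions accept the ics argument):

from sympy import Function, Eq, dsolve, symbols, simplify

t = symbols('t')
h = Function('h')
ode = Eq(h(t).diff(t), 2*h(t)*(3 - h(t)))
sol = dsolve(ode, h(t), ics={h(0): 1})
print(sol)                                                   # h(t) = 3*exp(6*t)/(exp(6*t) + 2), up to equivalent forms
print(simplify(sol.rhs.diff(t) - 2*sol.rhs*(3 - sol.rhs)))   # 0: it satisfies the ODE
print(sol.rhs.subs(t, 0))                                    # 1: it satisfies the initial condition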
3. ## Re: Finding particular solution from differential equations
Originally Posted by johnsy123
I am stuck on how to solve the following, i don't weather my arithmetic is wrong or what i am doing. So could someone please do each step.
Find the particular solution to
a) dh/dt=2h(3-h) where y=1 and t=0
b) dy/dx= e^y +1/e^y where y=0 when x=0.
Hiya Johnsy, is the second one $\displaystyle \frac{dy}{dx} = e^y + \frac{1}{e^y}$ or $\displaystyle \frac{dy}{dx} = \frac{e^y + 1}{e^y}$?
For the first...
\displaystyle \begin{align*} \frac{dh}{dt} &= 2h(3 - h) \\ \frac{dt}{dh} &= \frac{1}{2h(3-h)} \\ t &= \int{\frac{1}{2h(3 -h)}\,dh} \end{align*}
You will now need to apply partial fractions.
4. ## Re: Finding particular solution from differential equations
It is (e^y +1)/(e^y)
5. ## Re: Finding particular solution from differential equations
Originally Posted by johnsy123
It is (e^y +1)/(e^y)
OK, so
\displaystyle \begin{align*} \frac{dy}{dx} &= \frac{e^y + 1}{e^y} \\ \frac{dx}{dy} &= \frac{e^y}{e^y + 1} \\ x &= \int{\frac{e^y}{e^y + 1}\,dy} \end{align*}
Now make the substitution $\displaystyle u = e^y + 1 \implies du = e^y\,dy$ and the integral becomes
$\displaystyle x = \int{\frac{1}{u}\,du}$.
I'm sure you can go from here
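For completeness, the integral that remains can be checked with SymPy (again an addition of mine, leaving the initial condition to the reader):

from sympy import symbols, exp, integrate, simplify

y = symbols('y')
print(simplify(integrate(exp(y) / (exp(y) + 1), y)))   # log(exp(y) + 1), so x = ln(e^y + 1) + C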
6. ## Be careful using the results of Wolfram!...
Originally Posted by johnsy123
It is (e^y +1)/(e^y)
... so that the DE is...
$y^{'}= 1+e^{-y}\ ,\ y(0)=0$ (1)
In (1) the variables are separable and the solution is found with the standard approach...
$\int \frac{d y}{1+e^{-y}} = \int dx \implies y+ \ln (1+e^{-y}) = x + c$ (2)
... and the 'initial condition' fixes the constant as $c=\ln 2$. All right?... yes, of course, but it is interesting to spend a little time on the details of the integration performed in (2). The integral is...
$\int \frac{d y}{1+e^{-y}} = \int dy - \int \frac{e^{-y}}{1+e^{-y}}\ dy= y+\ln (1+e^{-y}) + c$ (3)
The integral (3) is not 'transcendental', but maybe someone prefers to use Wolfram because 'it saves time'... well!... please observe the result supplied by Wolfram ...
Wolfram Mathematica Online Integrator
Kind regards
$\chi$ $\sigma$
7. ## Re: Be careful using the results of Wolfram!...
Just so everyone knows, in Australia Separation of Variables isn't taught until university. The OP's question is from Year 12 Specialist Mathematics, so the method I gave is the method that is expected.
8. ## Re: Be careful using the results of Wolfram!...
Originally Posted by Prove It
Just so everyone knows, in Australia Separation of Variables isn't taught until university. The OP's question is from Year 12 Specialist Mathematics, so the method I gave is the method that is expected.
Of course different methods can give the same result... in this case the 'Year 12 Specialist Mathematics method' gives the solution...
$x= \ln (1+e^{y}) + c$ (1)
... and the 'chisigma method' gives the solution...
$x= y + \ln (1+e^{-y}) + c$ (2)
... but considering the 'identity'...
$\ln (1+e^{y}) = \ln e^{y} + \ln (1+e^{-y})= y + \ln (1+e^{-y})$ (3)
... it is obvious that (1) and (2) are exactly the same expression. Scope [a little polemical may be...] of my post was to do some considerations about the 'reliability' of some 'very popular tools' ...
Kind regards
$\chi$ $\sigma$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 28, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9976891875267029, "perplexity": 2508.951937017038}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736673081.9/warc/CC-MAIN-20151001215753-00270-ip-10-137-6-227.ec2.internal.warc.gz"} |
http://mymathforum.com/probability-statistics/345153-fortnite-math-problem-average-win-rate-exactly-1-a.html | My Math Forum Fortnite math problem: is the average win rate exactly 1%?
October 17th, 2018, 07:26 AM, post #1, by bigchest01 (Newbie, United Kingdom):
At first I thought for sure it's 1% exactly, but then I realized I was wrong. Consider two cases:
Suppose there are exactly the same 100 players in each game. In this case, since 1 player wins and the rest lose, the average win rate after n games is
$$\frac{1}{100}\sum_{i=1}^{100} \frac{\text{games won by person } i}{\text{games played by person } i},$$
and since every person plays all n games and the total number of wins is n, this reduces to n/(100n) = 1%.
Suppose instead there are more than 100 people, and people play an unequal number of games (as in real Fortnite); then it is not true. For a counterexample, suppose there are 101 people. Suppose the first game has players 1,...,100, and player 1 wins. Suppose the second game has players 2,...,101, and player 101 wins. Then the average win rate after these two games is (100% + 100% + 98 * 0%) / 101 = 1.98%. Idk if anyone cares lol.
October 17th, 2018, 02:43 PM, post #2, by a Global Moderator:
Please clarify the description and the question.
October 20th, 2018, 11:46 AM, post #3, by Arisktotle (Member, Netherlands):
OK, bigchest01 (though it is 99*0%, the outcome is the same). Now assume that you played the same games and players #50 and #51 won instead of #1 and #101. Then the average win rate would have been (50% + 50% + 99*0%)/101 ≈ 0.99%. Clearly, average win rates are not a constant factor in the group performance for this game (and games with a similar formula). Low-participation winners raise the group average; high-participation winners do the opposite.
Last edited by Arisktotle; October 20th, 2018 at 12:08 PM.
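A small script (the helper name and player IDs are just for illustration) reproduces both toy scenarios:

def average_win_rate(games):
    """games: list of (players, winner) pairs; returns the mean of per-player win rates."""
    played, won = {}, {}
    for players, winner in games:
        for p in players:
            played[p] = played.get(p, 0) + 1
        won[winner] = won.get(winner, 0) + 1
    return sum(won.get(p, 0) / played[p] for p in played) / len(played)

# two games, 101 players, winners with low participation (players 1 and 101):
print(average_win_rate([(range(1, 101), 1), (range(2, 102), 101)]))    # ~0.0198, i.e. 1.98%
# the same games, but the winners are high-participation players 50 and 51:
print(average_win_rate([(range(1, 101), 50), (range(2, 102), 51)]))    # ~0.0099, i.e. 0.99%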
https://www.physicsforums.com/threads/finding-electric-field-intensity.416594/ | # Homework Help: Finding Electric Field intensity
1. Jul 17, 2010
### pat666
1. The problem statement, all variables and given/known data
A proton travelling along the x-axis is slowed by a uniform electric field E. At x = 20 cm, the proton has a speed of 3.5 x 10^6 m/s and at x = 80 cm, its speed is zero. Find the magnitude and direction of the electric field intensity.
2. Relevant equations
W=Fx
W=Eqx
W=ΔK_E
ΔK_E=1/2 mv^2-1/2 mu^2
3. The attempt at a solution
First I found the change in energy (work) needed to slow the proton to a stop; that is equal to Eqx. I found E to be 1.06*10^5 N/C (positive direction). Please check my answer if you have time - it's worth marks.
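A quick numeric check of that energy-balance approach (standard values assumed for the proton mass and charge):

m_p = 1.673e-27     # kg, proton mass
q_p = 1.602e-19     # C, proton charge
v0  = 3.5e6         # m/s at x = 0.20 m
d   = 0.80 - 0.20   # m, distance over which the proton stops

delta_KE = 0.5 * m_p * v0**2      # J, all of the kinetic energy is removed
E = delta_KE / (q_p * d)          # N/C, from W = qEd
print(E)                          # ~1.07e5 N/C, consistent with the value quoted above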
2. Jul 17, 2010
### kuruman
The magnitude of the electric field is correct. In what direction should the force on the proton be and why?
3. Jul 17, 2010
### pat666
thanks kuruman - I thought positive (is that what's meant by direction??) because they need to be "like" to repel??
4. Jul 17, 2010
### kuruman
I am talking about the force on the proton. If it moves left to right (x is increasing), in what direction should the force be to stop it?
5. Jul 17, 2010
### pat666
towards the origin - but left or right would depend on the way the coordinate axes were defined???? +x could be towards the left or right, couldn't it???
6. Jul 17, 2010
### kuruman
Yes it could, so let me rephrase my question. Relative to the direction of the proton's velocity, in what direction should the force on it point? Same or opposite direction? The answer to this question has nothing to do with which way is positive and which way is negative.
7. Jul 17, 2010
### pat666
opposite to slow it down??
8. Jul 17, 2010
### kuruman
Yes. Now how is the direction of the force on a positively charged particle related to the direction of the electric field? Same or opposite?
9. Jul 17, 2010
### pat666
same- the direction of an electric field is the way a positively charged test particle would go
10. Jul 17, 2010
### kuruman
So put it together. Is the direction of the electric field the same as or opposite to the direction of the proton's velocity?
11. Jul 17, 2010
### pat666
so the full answer is 1.06*10^5 N/C in the opposite direction of the proton's velocity.
12. Jul 17, 2010
### kuruman
That is correct.
13. Jul 17, 2010
### pat666
thanks kuruman!!!!!!!!!!!!!!!!! | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9070519804954529, "perplexity": 1066.512838536287}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267163146.90/warc/CC-MAIN-20180926022052-20180926042452-00126.warc.gz"} |
http://swmath.org/software/11497 | # TSRK
Subquadrature expansions for TSRK methods. There are several ways to derive convergent two-step Runge-Kutta (TSRK) methods. In this paper, the authors investigate B-series to derive the necessary order conditions up to order six and higher as polynomials of quadratures and subquadrature expressions. Additionally, the authors compute expressions for the error coefficients of a given method up to order six. In the paper, only Runge-Kutta methods are covered, but the approach can also be extended to other classes of integrators. In addition to the theoretical findings, the authors provide and explain a MAPLE code to generate forms of the derived order conditions and the error coefficients. Hence, this MAPLE code could be used to compare different Runge-Kutta methods. (netlib numeralgo na32)
http://math.stackexchange.com/questions/112938/when-is-sin-x-an-algebraic-number-and-when-is-it-non-algebraic/112947 | # When is $\sin x$ an algebraic number and when is it non-algebraic?
Show that if $x$ is rational, then $\sin x$ is an algebraic number when $x$ is in degrees and $\sin x$ is non-algebraic when $x$ is in radians.
Details: so we have that $\sin(p/q)$ is algebraic when $p/q$ is in degrees; that is what my book says. Of course $\sin (30^{\circ})$, $\sin 45^{\circ}$, $\sin 90^{\circ}$, and halves of them are algebraic, but I'm not so sure about $\sin(1^{\circ})$.
Also, is this an existence proof, or is there actually a way to exhibit the full radical expression?
One way to get this started is change degrees to radians. x deg = pi/180 * x radian. So if x = p/q, then sin (p/q deg) = sin ( pi/180 * p/q rad). Therefore without loss of generality the question is show sin (pi*m/n rad) is algebraic. and then show sin (m/n rad) is non-algebraic.
-
For the second part you'll also need to assume $x\ne0$. – Henning Makholm Feb 24 '12 at 17:28
Claim is false: $\sin 0^{\circ}$ and $\sin 0$ (in radians) are both algebraic. – Arturo Magidin Feb 24 '12 at 17:29
Hint for the first part: Instead of considering $\sin(\frac{\pi}{180}x)$, view it as the real part of $z=-ie^{\frac{\pi}{180}xi}$ and consider $z^{180q}$ to see that $z$ is algebraic. Then the sine, being $\frac{z}{2}+\frac{\bar z}{2}$, is also algebraic, because the algebraic numbers are closed under addition. – Henning Makholm Feb 24 '12 at 17:31
You may be interested in this and Hardy's comment there about Niven's Theorems and some links. Ofcourse, this comes for free with enlightening answers from various others. – user21436 Feb 24 '12 at 17:45
$\sin\left(\frac{p}{q}\pi\right)=\sin\left(\frac{p}{q}180^\circ\right)$ is always algebraic for $\frac{p}{q}\in\mathbb{Q}$: Let $$\alpha=e^{\frac{i\pi}{q}}=\cos\frac{\pi}{q}+i\sin\frac{\pi}{q}.$$ Then $\alpha^q+1=0$, i.e. $\alpha$ is an (algebraic) $2q^\text{th}$ root of unity, i.e. it is a root of $x^{2q}-1$. Hence, so are its power $\alpha^p$ and reciprocal/conjugate power, which for $p$ and $q$ in lowest terms are roots of $x^q-(-1)^p=0$. Therefore, so too are $$\cos\frac{p\pi}{q}=\frac{\alpha^p+\alpha^{-p}}{2} \qquad\text{and}\qquad \sin\frac{p\pi}{q}=\frac{\alpha^p-\alpha^{-p}}{2i},$$ by the closure of the algebraic numbers as a field.
Ivan Niven gives a nice proof at least that $\sin x$ is irrational for (nonzero) rational $x$. As @Aryabhata points out, the Lindemann-Weierstrass theorem gives us that these values of $\sin$ and $\cos$ are transcendental (non-algebraic), by using the fact that the field extension $L/K$ of $L=\mathbb{Q}(\alpha)$ over $K=\mathbb{Q}$ has transcendence degree 1.
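As a quick illustration of the first statement (an addition of mine, not part of the original answer), SymPy can compute the minimal polynomial of such a sine, e.g. for $\sin 15^{\circ}=\sin\frac{\pi}{12}$:

from sympy import sin, pi, Symbol, minimal_polynomial

x = Symbol('x')
s15 = sin(pi/12)                    # evaluates to (sqrt(6) - sqrt(2))/4; if your version does not
                                    # auto-evaluate it, substitute that radical form directly
print(minimal_polynomial(s15, x))   # 16*x**4 - 16*x**2 + 1, so sin(15 degrees) is algebraic

The same holds in principle for $\sin 1^{\circ}$, whose minimal polynomial has degree 48, though that computation is much heavier.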
-
Why is a^q + 1 = 0 algebraic? I thought an algebraic number means it is the root of a polynomial with integer coefficients. This might be beyond the scope of my knowledge. But your alpha is a root of unity, and is a complex number. – bob thornton Feb 24 '12 at 18:19
I edited my post. Is it clear now? $\alpha$ is a root of $x^q+1=0$ and hence also $x^{2q}-1=0$. – bgins Feb 24 '12 at 18:23
@bob: The relevant polynomial is $x^q + 1$, not $\alpha^q + 1$. As $\alpha$ is a root, and $sin$ can be expressed linearly in $\alpha$, we have that $sin$ is algebraic at that value too. – mixedmath Feb 24 '12 at 18:23
Lindemann-Weierstrass theorem implies that for $\alpha$ non-zero algebraic, $\sin \alpha$ is transcendental.
-
well sin (pix) , pix is not algebraic since pi is not algebraic. That is sufficient to prove sin(pi*x) is algebraic? Are you saying that for any 'a' transcendental sin(a) is algebraic? I should qualify 'a' to be a real transcendental. so is sin(e) algebraic? – bob thornton Feb 24 '12 at 18:48
@bobthornton: No, I am saying for any $a$ algebraic (and non-zero), $\sin a$ is transcendental. The other portion of your question was answered by bgins. No clue when $a$ is transcendental (except rational multiples of $\pi$). – Aryabhata Feb 24 '12 at 18:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.934521496295929, "perplexity": 332.2490020161233}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701159031.19/warc/CC-MAIN-20160205193919-00021-ip-10-236-182-209.ec2.internal.warc.gz"} |
https://brilliant.org/problems/functional-equation-6/ | # Functional Equation!
Algebra Level 5
$\large{f \left( x + \dfrac{y}{x} \right ) = f(x) + \dfrac{f(y)}{f(x)} + 2y}$
Let $f(x): \mathbb{Q}^+ \rightarrow \mathbb{R}$ be a function such that it satisfies the above functional equation for every $x,y$ belonging to the set of positive rational numbers. Then find the value of $f(2015)$.
https://www.physicsforums.com/threads/in-many-statements-in-probability-there-is-an-assumption-like-bounded.367704/ | # In many statements in probability, there is an assumption like bounded
1. Jan 7, 2010
### forumfann
In many statements in probability, there is an assumption like bounded fourth moment, so is there any random variable which has unbounded fourth moment?
2. Jan 7, 2010
### bpet
Re: moment
Some heavy-tailed distributions such as Cauchy & Pareto distributions have infinite moments. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9648625254631042, "perplexity": 1333.2411965477045}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583511806.8/warc/CC-MAIN-20181018105742-20181018131242-00048.warc.gz"} |
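A tiny numerical illustration (the sample sizes are arbitrary): the fourth sample moment of standard Cauchy draws does not settle down and typically grows with the sample size, while for a standard normal it settles near 3:

import numpy as np

rng = np.random.default_rng(0)
for n in (10**3, 10**5, 10**7):
    cauchy = rng.standard_cauchy(n)
    normal = rng.standard_normal(n)
    print(n, np.mean(cauchy**4), np.mean(normal**4))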
http://new.math.uiuc.edu/public403/isometries/distance.html | ## 1. Synthetic and Analytic Geometry
In his Erlangen Program, the great 19th century German Geometer, Felix Klein, proposed that the basis of geometry should be groups of point transformations. Euclid’s geometry was based on axioms that made assertions about primitive geometrical concepts, like point, line, angle etc which were considered to need no definition since "everybody" agreed on what they were anyway. All subsequent geometrical knowledge had to be derived logically from the axioms. We call this an axiomatic system.
Euclid’s program had flaws that were not completely resolved until the end of the 19th century. Applied to geometry, the axiomatic method developed into what is called synthetic geometry, where the "undefined terms" became abstractions, the axioms became sentences relating the undefined terms. As is taught in any course on the subject (such as MA 402 at Illinois), things become interesting only if the undefined terms and their relations are interpreted in a familiar setting, in which the axioms can be checked to be true or not.
In contrast to synthetic geometry, analytic geometry, which derives from the work of Descartes and Fermat, is based on the properties of numbers. You can, if you wish, build up the number system axiomatically (as is done in pure mathematics) or you can just accept numbers as you learned about them in school. Now, geometric objects are sets of numbers, and their relations are expressed in terms of set theory. Of course we still think geometrically about them, but consistency is derived ultimately from the number systems and their algebra. This is the geometry you learned about first in high school, and later added the knowledge of vectors and their properties in calculus.
The dispute as to which approach is "better", the synthetic or the analytic, was resolved by David Hilbert (1900), who proved through logic, that each can be derived from the other. Thus, if the number systems have logical errors, then these will show up in the geometry too, and vice versa. Few mathematicians worry about these issues nowadays, since there are more important things to discover and deeper controversies to resolve.
One of Euclid’s primitives was the idea of congruence. When are two figures the "same" in a geometrical sense? You should recall such criteria as side-angle-side (SAS), which were theorems for Euclid, but his "proof" was not correct. In synthetic geometry, congruence remains undefined, and SAS becomes an axiom.
In analytic geometry, as we shall see below, a congruence is a point transformation (points go to points) which preserves length. More precisely, the distance between any two points remains the same after the transformation as it was before. We define distance in terms of the familiar formula of the Pythagorean theorem, which we express below in coordinate and in vector form.
For $X = (x,y)$ and $W =(u,v)$:

$\mathrm{dist}( X, W) = \sqrt{(x-u)^2 + (y-v)^2}$

$\mathrm{dist}( X , W ) = | X - W | \qquad (1)$
Question 1.
How are these two definitions related by the dot product?
Recall the dot product, $X \circ W = xu + yv$, permits us to write $|X| = \sqrt{x^2 + y^2} = \sqrt{ X \circ X }$. Now expand the RHS of the second line of (1) to obtain the first.
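As a quick numerical illustration of this relation (added here, not part of the original text), the coordinate form and the dot-product form of the distance agree:

```python
import numpy as np

# dist(X, W)^2 = (X - W) . (X - W): coordinate form vs dot-product form.
X = np.array([3.0, -1.0])
W = np.array([0.5, 2.0])

d_coord = np.sqrt((X[0] - W[0])**2 + (X[1] - W[1])**2)   # Pythagorean formula
d_dot = np.sqrt(np.dot(X - W, X - W))                     # |X - W| via dot product
print(d_coord, d_dot)   # identical values
```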
## 2. Background
Recall that translations and dilatations are point transformations of the Euclidean plane with several important properties. In particular, displacement vectors are preserved by translations and scaled by dilatations. That is, if $\tau = \tau_A$ and $\delta=\delta_{Q,r}$, and we write the image of a point under a transformation by decorating its name with a superscript, then

$X^\tau - W^\tau = X - W \qquad X^\delta - W^\delta = r(X-W).$
Question 2.
Do you recall the formulas for a translation and a dilatation? If you do, or have to look them up, then prove this assertion on your scratch pad by the side of your computer!
Do some algebra to see that
$X^\delta - W^\delta = Q + r(X-Q) - Q - r(W-Q) = r (X-W)$.
The algebra is easier for translations.
Comment.
From the way these transformations affect displacements we see that translations always preserve distance. So these are definitely isometries. For dilatations $r = \pm 1$ will yield isometries. The "plus" case is uninteresting, in the sense that the answer is obvious. Recall that the more interesting "minus" case is for the central reflection, discussed earlier.
### 2.1. Length, Norm, and Magnitude
Geometrically, these are all the same non-negative real number associated with a vector. We like to use length when we represent the vector as one of a collection of mutually parallel "arrows" in the plane. The norm is a more abstract concept in higher algebra. We tend to use magnitude in the same breath as direction. Any physical object that has magnitude and direction can be represented as a vector. For example, velocity and acceleration have magnitude and direction, and therefore are vector quantities in physics.
The magnitude is intimately connected with the dot product, and in a sense they are equivalent concepts.
Comment on Notation
It is generally inconvenient to use any special symbol for the dot product, unless other products involving vectors are used in the same discussion. It detracts from the ability to scan algebraic manipulations you worked so hard in high school to master.
So we shall, when no ambiguity threatens, just write $XY$ instead of $X$ • $Y$. This allows us to use powers, as in $X X = X^2$.
The commutative and distributive properties of numbers carries over to vectors. Just remember that $XY$ is a number, not a vector. Here is a list of properties you learned in calculus.
For vectors $X,Y,Z, O$ and scalars $r, s, 1, 0$ we have the
### 2.2. Bilinearity Properties of the Dot Product
| Formula | Property name |
| --- | --- |
| $X Y = Y X$ | commutative for dot product |
| $X(Y+Z)= XY + XZ$ | distributive for dot product |
| $(rX)Y = r(XY)$ | associative for scalar product |
| $(r+s)X = rX + sX$ | distributive for scalar product |
| $1X = X$ | scalar product with 1 |
| $0X = O$ | scalar product with 0 |
Note the associative property does not hold: $(XY)Z \ne X(YZ)$. The LHS is a vector parallel to $Z$, and the RHS has the same direction as $X$. Also note that when it is important to distinguish between the scalar zero $0$ and the zero vector, $O$, we follow Tondeur's convention of using the same letter for the point in the plane, and the vector defined by the displacement from the origin to that point. When these distinctions are not obvious from the context, and it is important to emphasize them, you can underline vectors, or stick a little arrow on top of them. The less notational fuss the better, especially in emails.
For a displacement vector B-A, we have |B-A|^2= (B-A)(B-A). By the bilinearity of the dot product, we can write this as (B-A)^2=B^2-2AB+A^2. From this we see we could "define" the dot product itself in terms of norms.
The length of a displacement vector between two points has the following properties:
• L1: |B-A| >= 0.
• L2: |B-A|=0 if and only if B=A.
• L3: |C-A| <= |C-B|+|B-A|.
Note that these are precisely the properties of the distance between two points. Property L3 is called the triangle inequality because any side of a triangle is no longer than the sum of the other two sides.
Question 3.
When is the triangle inequality an equality?
When the three points are collinear, and $B$ lies between $A$ and $C$.
## 3. Definition of an Isometry
So, to preserve distances we want to preserve the length of displacement vectors. Any point transformation that preserves it will be called an isometry. More precisely, we have the following definition: a point transformation $\alpha$ is an isometry if $|X^\alpha - Y^\alpha| = |X - Y|$ for every pair of points $X$ and $Y$.
Question 4.
Why is this not the same as saying that $(X^\alpha - Y^\alpha) = (X - Y)$ or that |(X-Y)^alpha|=|X-Y|?
The first would say that $\alpha$ preserves the entire displacement vector, body and soul, namely the direction as well as the magnitude. The second confuses the displacement of the images of two points, with the image of the displacement of two points. The displacement $X-Y$ is a vector. A transformation is defined on points. So $(X-Y)^\alpha$ can only mean the image of the point $(X-Y)$.
A counterexample for the second goes as follows. Let $X^\alpha = X +D$ be a translation which is not the identity. Then $\| (X-Y)^\alpha\| = \| X-Y+D\|$. Now take the special case that $X = Y$ and conclude that $\| D\| = 0$, contrary to our assumption.
Translations preserve displacement vectors, and central reflections reverse displacement vectors. Thus both of these point transformations are isometries.
• Translations: $B^\tau - A^\tau = B-A \Rightarrow |B^\tau - A^\tau|=|B-A|$.
• Central Reflections: $\sigma_Q = \delta_{Q,-1}$ where $X^\sigma = 2Q-X = Q-(X-Q)$. So
$X^\sigma - Y^\sigma = -(X-Y)$ and hence $|X^\sigma - Y^\sigma| = |X-Y|$.
Question 5.
Let $\delta = \delta_{Q,r}$ be a dilatation. When is $\delta$ an isometry?
Since $B^\delta - A^\delta = r(B-A) \Rightarrow |B^\delta - A^\delta| = |r(B-A)| = |r|\,|B-A|$, $\delta$ is an isometry if and only if $r=\pm 1$.
Another example of an isometry is a rotation $\rho_{Q,\theta}$ about the point $Q$ by $\theta$ degrees. This is intuitively obvious, since rotations are rigid motions, like translations. We will study rotations more thoroughly in a subsequent lesson.
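A small numerical sanity check of this claim (illustrative only; the rotation formula used below is the standard one and is not taken from the text):

```python
import numpy as np

# A rotation rho_{Q,theta} about Q preserves the distance between two points.
def rotate_about(Q, theta, X):
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return Q + R @ (X - Q)

Q = np.array([1.0, 2.0])
theta = 0.7
X, W = np.array([3.0, -1.0]), np.array([-2.0, 4.0])

d_before = np.linalg.norm(X - W)
d_after = np.linalg.norm(rotate_about(Q, theta, X) - rotate_about(Q, theta, W))
print(d_before, d_after)   # equal up to rounding error
```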
We end this section with a proposition that has an easy proof.
Proposition. The composition of two isometries is again an isometry.

Proof. Suppose $\alpha$ and $\beta$ are two isometries and let $\gamma=\beta\alpha$ be their composition. Then for any pair of points,

$|X^\gamma - Y^\gamma| = |\beta\alpha(X) - \beta\alpha(Y)| = |\beta(X^\alpha) - \beta(Y^\alpha)| = |X^\alpha - Y^\alpha|$ (since $\beta$ is an isometry) $= |X-Y|$ (since $\alpha$ is an isometry).
Comment.
Note that we pay a price for using the convenient superscript notation for point transformations. Compositions in functional notations have to be read from right to left, while in the exponent they read from left to right: $\alpha \beta (X) = (X^\beta)^\alpha$.
Moreover, above we defined isometries to be point transformations, and hence we assume they are 1:1 and onto although we will prove that this is a consequence of the distance preserving property. We could, with Tondeur, just assume that isometries are transformations of the plane that takes points to points, and derive the bijectivity.
Recall that for subsets of the point transformations to be groups, in addition to closure, which we have just proved, we only need to check that inverses preserve distance.
Question 6.
Let $\omega = \alpha^{-1}$ be the inverse of an isometry. Why is $\|X^\omega - Y^\omega \| = \| X - Y \|$ ?
$\| X - Y \| = \| \alpha (X^\omega) - \alpha (Y^\omega) \| = \|X^\omega - Y^\omega \|$
where the first equality holds because $\alpha \omega = \iota$, and the second follows because $\alpha$ is given to be an isometry.
http://cms.math.ca/cjm/msc/11N25 | Sums of Two Squares in Short Intervals

Let $\mathcal{S}$ denote the set of integers representable as a sum of two squares. Since $\mathcal{S}$ can be described as the unsifted elements of a sieving process of positive dimension, it is to be expected that $\mathcal{S}$ has many properties in common with the set of prime numbers. In this paper we exhibit unexpected "irregularities" in the distribution of sums of two squares in short intervals, a phenomenon analogous to that discovered by Maier, over a decade ago, in the distribution of prime numbers. To be precise, we show that there are infinitely many short intervals containing considerably more elements of $\mathcal{S}$ than expected, and infinitely many intervals containing considerably fewer than expected.

Keywords: sums of two squares, sieves, short intervals, smooth numbers. Categories: 11N36, 11N37, 11N25
http://mathhelpforum.com/trigonometry/130262-solved-help-simplify-hyperbolic-trig-function-print.html | # [SOLVED] Help simplify this hyperbolic trig function
• February 22nd 2010, 10:46 PM
downthesun01
[SOLVED] Help simplify this hyperbolic trig function
This should be really easy, but I'm not getting it for whatever reason
$\sqrt{sinh^2(t) - cosh^2(t) + 1^2}$
The answer should be $\sqrt{2}cosh(t)$ But I don't understand how. There must be some simple identity that I'm missing.
• February 22nd 2010, 11:22 PM
Prove It
Quote:
Originally Posted by downthesun01
This should be really easy, but I'm not getting it for whatever reason
$\sqrt{sinh^2(t) - cosh^2(t) + 1^2}$
The answer should be $\sqrt{2}cosh(t)$ But I don't understand how. There must be some simple identity that I'm missing.
Remember that $\cosh^2{t} - \sinh^2{t} = 1$...
• February 22nd 2010, 11:26 PM
downthesun01
Quote:
Originally Posted by Prove It
Remember that $\cosh^2{t} - \sinh^2{t} = 1$...
Yeah, I realize that but you're going to have to dumb down your point a bit for me haha.. :(
• February 22nd 2010, 11:28 PM
Prove It
Quote:
Originally Posted by downthesun01
Yeah, I realize that but you're going to have to dumb down your point a bit for me haha.. :(
This means $\sinh^2{t} = \cosh^2{t} - 1$.
Substitute it in and simplify...
Upon further inspection, in order to obtain the answer of $\sqrt{2}\cosh{t}$, the original expression needs to be $\sqrt{\sinh^2{t} \color{red}+\color{black}\cosh^2{t} + 1^2}$
• February 22nd 2010, 11:36 PM
downthesun01
I figured out the problem. The original expression is
$\sqrt{sinh^2(t) + (-cosh(t))^2 + 1^2}$
Which is the same as $\sqrt{sinh^2(t) + cosh^2(t) + 1^2}$
So,

$\sqrt{2\cosh^2(t)} = \sqrt{2}\cosh(t)$
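A quick numerical check of the simplification (my addition, not from the thread):

```python
import numpy as np

# Verify sqrt(sinh(t)^2 + cosh(t)^2 + 1) == sqrt(2)*cosh(t),
# which follows from cosh^2(t) - sinh^2(t) = 1.
t = np.linspace(-3, 3, 7)
lhs = np.sqrt(np.sinh(t)**2 + np.cosh(t)**2 + 1)
rhs = np.sqrt(2) * np.cosh(t)
print(np.allclose(lhs, rhs))   # True
```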
https://www.esaral.com/q/an-electron-gun-with-its-collector-at-a-potential-of-100-v-fires-32215 | # An electron gun with its collector at a potential of 100 V fires
Question:
An electron gun with its collector at a potential of 100 V fires out electrons in a spherical bulb containing hydrogen gas at low pressure (10−2 mm of Hg). A magnetic field of 2.83 × 10−4 T curves the path of the electrons in a circular orbit of radius 12.0 cm. (The path can be viewed because the gas ions in the path focus the beam by attracting electrons, and emitting light by electron capture; this method is known as the ‘fine beam tube’ method. Determine e/m from the data.
Solution:
Potential of the anode, V = 100 V
Magnetic field experienced by the electrons, B = 2.83 × 10−4 T
Radius of the circular orbit r = 12.0 cm = 12.0 × 10−2 m
Mass of each electron = m
Charge on each electron = e
Velocity of each electron = v
The energy of each electron is equal to its kinetic energy, i.e.,
$\frac{1}{2} m v^{2}=e V$
$v^{2}=\frac{2 e V}{m}$ ....(1)
It is the magnetic field, due to its bending nature, that provides the centripetal force $\left(F=\frac{m v^{2}}{r}\right)$ for the beam. Hence, we can write:
Centripetal force = Magnetic force
$\frac{m v^{2}}{r}=e v B$
$e B=\frac{m v}{r}$
$v=\frac{e B r}{m}$ ...(2)
Putting the value of v in equation (1), we get:
$\frac{2 e V}{m}=\frac{e^{2} B^{2} r^{2}}{m^{2}}$
$\frac{e}{m}=\frac{2 V}{B^{2} r^{2}}$
$=\frac{2 \times 100}{\left(2.83 \times 10^{-4}\right)^{2} \times\left(12 \times 10^{-2}\right)^{2}}=1.73 \times 10^{11} \mathrm{C} \mathrm{kg}^{-1}$
Therefore, the specific charge ratio $(e / m)$ is $1.73 \times 10^{11} \mathrm{C} \mathrm{kg}^{-1}$.
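The arithmetic in the last step can be reproduced directly (an illustrative script, not part of the original solution):

```python
# Specific charge e/m = 2V / (B^2 r^2) from the given data.
V = 100.0        # anode potential, V
B = 2.83e-4      # magnetic field, T
r = 12.0e-2      # orbit radius, m

e_over_m = 2 * V / (B**2 * r**2)
print(f"e/m = {e_over_m:.3e} C/kg")   # ~1.73e11 C/kg
```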
https://www.physicsforums.com/threads/what-is-the-inverse-of-this-logarithm-equation.662086/ | What is the inverse of this logarithm equation?
1. Jan 2, 2013
supernova1203
1. The problem statement, all variables and given/known data
What is the inverse of this logarithm equation?
y=-log5(-x)
i tried it and i got y=-5^(-x)
hm.. you know how people say that if you want to find the inverse graph of something just switch the x and y coordinates from the table of values? Well i also tried that approach and apparently the inverse is y=log5x
atleast thats how it looks on table of values and on the graph....i dunno why i went with the y=-5^(-x)
I dont know if i got it right or not, someone kind enough to take a look and see if i got it right or not? or maybe just give me the solution outright so i know if i did it correctly or not? :P
Last edited: Jan 2, 2013
2. Jan 2, 2013
symbolipoint
Your result is correct. If you start from your original equation, multiply both sides by negative 1, you can easily switch x and y, and carry couple simple steps to find y as a function of x.
3. Jan 2, 2013
supernova1203
hm... so y=log5x is correct?
4. Jan 2, 2013
symbolipoint
No! Your first result was correct. y=-5^(-x)
5. Jan 2, 2013
ahh ok ty
6. Jan 2, 2013
symbolipoint
Original equation: $y=-log_{5}(-x)$
$-y=log_{5}(-x)$
Now express in exponential form:
$5^{-y}=-x$
Now, switch x and y to create inverse:
$5^{-x}=-y$,
and then simply, $y=-5^{-x}$
7. Jan 2, 2013
Mentallic
To check if you're right, when you get to the equation $x=-5^{-y}$, just plug that value of x into your original equation $y=-\log_5(-x)$ and see if you get an equality.
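A small numerical spot-check along these lines (my own illustration, not part of the thread):

```python
import numpy as np

# f(x) = -log_5(-x) and g(x) = -5**(-x) should be inverses: f(g(x)) == x.
def f(x):
    return -np.log(-x) / np.log(5)   # -log_5(-x), defined for x < 0

def g(x):
    return -(5.0 ** (-x))            # always negative, so f(g(x)) is defined

x = np.linspace(-2, 2, 9)
print(np.allclose(f(g(x)), x))       # True
```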
http://www.handsonmechanics.org/dynamics/106 | # A Day at the Races: Moment of Inertia
### Overview
This is a demonstration of the basic principles underlying the behavior of rotating bodies. A cylinder “race” is used to show that the closer the mass of an object is concentrated to an axis of rotation, the faster it will spin because it has a lower moment of inertia, which is a measure of a body’s resistance to rotation. The video below provides a brief synopsis of the demonstration.
### Principle
The mass moment of inertia (pg. 1296 of the McGraw-Hill Vector Mechanics for Engineers—Dynamics text) is a rigid body’s resistance to rotation and is a measure of the distribution of mass of a rigid body relative to a given axis of rotation. In its most general form, the mass moment of inertia is given by:
$I = \int_B r^2 dm$
where:
“I” is the mass moment of inertia
“dm” is a differential element of mass of the rigid body
“r” is the perpendicular distance from the axis of rotation to a differential element of mass
“B” represents the rigid body
For a solid cylinder and a hollow cylinder, the equations for the mass moment of inertia about the axis of interest in our demonstration reduce to those in figure 1 (http://hyperphysics.phy-astr.gsu.edu). “M” represents the mass of the rigid body and “R” represents the radius of the solid cylinder, and “a” and “b” represent the inner and outer radii of the hollow cylinder.
Figure 1: Mass Moment of Inertia
### What You Need
These materials can be easily manufactured. Additionally they (or similar materials) can be obtained from businesses that specialize in building teaching aids, such as Arbor Scientific.
| Item | Quantity | Description/Clarification |
| --- | --- | --- |
| "Steel Wheel" | 1 | A steel cylinder of approximately 3" length and 1" diameter with a wooden core which weighs approximately the same as "Rolling Timber" (Figure 3) |
| "Rolling Timber" | 1 | A wooden cylinder of approximately 3" length and 1" diameter with a steel core which weighs approximately the same as "Steel Wheel" (Figure 3) |
| Inclined Plane | 1 | An inclined plane able to support both rolling cylinders simultaneously (Figure 2) |
| 12" Ruler | 1 | Used to start the race |
Figure 2: Inclined Plane
Figure 3: Steel Wheel (left) and Rolling Timber (right)
### How It’s Done
Before Class: Obtain materials. Measure and calculate the basic parameters (mass, diameters). Practice demonstration.
In Class: First establish the scenario by introducing and describing "the players" in your best race announcer's voice. Then, without any analysis, ask the students to guess which "player" will win the competition. Ensure you record this on the board. Build up to the start of the race and stop just short of letting them go. Ask students to consider if weight might be a factor affecting the outcome, then provide the class with the weight of each player and see if their guesses change (record on the board). Again, build up to the start and stop short to discuss the concept of mass being a resistance to translation and moment of inertia as resistance to rotation. Have students help you calculate the moment of inertia for each of the players (figure 4) and take a final tally of the bets. Finally, let the race happen!
Figure 4: Calculation of Inertia
Observations: The student should observe that the outcome is determined by the moment of inertia and not by the weight (given that the overall dimensions and the weights of the players are approximately the same).
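As a rough numerical illustration of why the low-inertia cylinder wins (the mass, radii, and incline angle below are assumed for illustration only, not measurements of the actual demo hardware), the rolling acceleration down the incline follows from a = g sin(theta) / (1 + I/(M R^2)):

```python
import numpy as np

# Two cylinders with the same mass M and outer radius R, one solid and one
# hollow (tube), rolling without slipping down an incline of angle theta.
g, theta = 9.81, np.radians(10)
M, R = 0.5, 0.0127          # kg, m  (assumed values)
a_in = 0.009                # inner radius of the hollow cylinder, m (assumed)

I_solid = 0.5 * M * R**2                   # solid cylinder
I_hollow = 0.5 * M * (a_in**2 + R**2)      # hollow cylinder (tube)

for name, I in (("solid", I_solid), ("hollow", I_hollow)):
    a = g * np.sin(theta) / (1 + I / (M * R**2))
    print(f"{name:6s}: I = {I:.2e} kg m^2, a = {a:.3f} m/s^2")
# The body with mass concentrated closer to the axis has the smaller I
# and the larger acceleration, independent of the common mass M.
```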
### That Little Extra
Hype up the event by dramatizing the race! Consider using racetrack videos and noises which can easily be found on the internet. Ask students to guess which will win once the scenario is set up and before the principles are discussed. Build suspense by working your way to the “gun shot” starting the race, then backing off the start to analyze another aspect. If you’re using Power Point, incrementally build your slides to build suspense!
Furthermore, cylinders of different materials and diameters might be considered (figure 5).
Figure 5
### Cite this work as:
Al Estes, Charlie Packard, and Tom Messervey (2014), "A Day at the Races: Moment of Inertia," http://www.handsonmechanics.org/dynamics/106.
https://www.physicsforums.com/threads/show-that-dim-u-n.355653/ | # Show that dim U <= n
• Thread starter brru25
• #1
## Homework Statement
Let U be a vector subspace of $\mathbb{C}^{2n}$ such that $\sum_{i=1}^{2n} x_i y_i = 0$ for any x, y ∈ U. Show that dim U ≤ n. Give an example of such a subspace U with dim U = n.
2. The attempt at a solution
I tried just writing out the summation and was thinking along the lines of linear independence but I don't think that applies here (maybe it does, I'm not sure). Could I think of a linear map contained in U that maps two vectors x and y to be the sum = 0? I think I'm confusing myself here.
## Answers and Replies
• #2
lanedance
Homework Helper
i'm not sure i understand the question correctly... so is that effectively the complex inner product of 2 vectors in the subspace is always zero?
$$\langle\mathbf{x},\mathbf{y}\rangle = \sum_i x_i^* y_i$$
but if that were the case, as U is a vector space, if x is in U, then so is c.x, but
$$\langle\mathbf{x},c\mathbf{x}\rangle = c\|x\|^2$$
• #3
Hurkyl
Staff Emeritus
Gold Member
Maybe by * he meant multiplication rather than complex conjugation? We'll have to wait for him to clarify, I guess.
• #4
Mark44
Mentor
Glad you guys (lanedance and Hurkyl) jumped in on this one. I was thinking along the lines that lanedance described, except I was thinking of this product of a vector with itself.
$$\sum_{i = 1}^{2n} x_i*x_i~=~0$$
which suggests that all the x_i's are 0.
• #5
it's multiplication not conjugate (sorry about the mix-up everybody!)
• #6
Mark44
Mentor
brru25, You're sure you have given us the exact problem description, right?
• #7
positive, word-for-word.....see why I'm confused? :-)
• #8
lanedance
Homework Helper
ok, think I'm getting it now, sounds like what Hurkyl was thinking...
I haven't worked it, but would start with an example in the 2D case in $\mathbb{C}^2$, so n = 1
so say you have a vector (a,b) which is in U, it satisfies the rule with itself
$$\sum_i x_i^2 = a^2 + b^2 = 0$$
so, first can you find a vector that satisfies above... and 2nd can you show given a vector in U, there can be no other linearly independent vectors in U?
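One concrete example along these lines (my own illustration, not a post from the thread): in $\mathbb{C}^2$ (so n = 1) the one-dimensional subspace spanned by (1, i) works, since $1^2 + i^2 = 0$.

```python
import numpy as np

# U = span{(1, i)} in C^2: sum_i x_i * y_i = 0 (plain product, no conjugation)
# for every pair of vectors x, y in U, because 1*1 + i*i = 0.
u = np.array([1.0 + 0j, 1j])
rng = np.random.default_rng(1)
for _ in range(3):
    a, b = rng.normal(size=2) + 1j * rng.normal(size=2)
    x, y = a * u, b * u
    print(np.sum(x * y))   # 0 (up to rounding)
```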
http://www.oaj.pku.edu.cn/sxjz/CN/10.11845/sxjz.2013.42.02.0153
Advances in Mathematics (China)
Research Article
A Note on $*$-$n$-paranormal Operators
ZUO Fei 1, SHEN JunLi 2
1. College of Mathematics and Information Science, Henan Normal University, Xinxiang, Henan, 453007, P. R. China; 2. Department of Mathematics, Xinxiang University, Xinxiang, Henan, 453000, P. R. China
Published: 2013-04-25   DOI: 10.11845/sxjz.2013.42.02.0153
Abstract: Let $n$ be a positive integer. An operator $T$ belongs to class $*$-$n$-paranormal if $||T^{1+n}x||^{\frac{1}{1+n}}\geq||T^{*}x||$ for every unit vector $x$. An operator $T\in B(H)$ is said to be $*$-$\hat{n}$-paranormal if $||T^{1+i}x||^{\frac{1}{1+i}}\geq||T^{*}x||$ for every unit vector $x$ and every $i\geq n$. An operator $T\in B(H)$ is said to be totally $*$-$\hat{n}$-paranormal if $T-\lambda$ is $*$-$\hat{n}$-paranormal for every $\lambda\in \mathbb{C}$. It is shown that if $T$ belongs to the class of $*$-$n$-paranormal operators, then its approximate point spectrum and joint approximate point spectrum are identical. We also prove that if either $T$ or $T^{*}$ is totally $*$-$\hat{n}$-paranormal, then Weyl's theorem holds for $f(T)$ for every $f\in H(\sigma(T))$, and that $\alpha$-Weyl's theorem holds for $f(T)$ if $T^{*}$ is totally $*$-$\hat{n}$-paranormal.
Key words: $*$-$n$-paranormal operator, Weyl's theorem, $\alpha$-Weyl's theorem, $\alpha$-Browder's theorem
http://mathoverflow.net/revisions/963/list | 2 deleted 2 characters in body
In general, the (parametric) h-principle for Legendrian immersions implies that Legendrian immersions f:L->(M,\xi) are classified up to homotopy (through Legendrian immersions) by the following bundle-theoretic invariant: Choosing a compatible almost complex structure on \xi allows one to complexify the differential of f to an isomorphism d_C f: TL\otimes C -> f*\xi, and the relevant invariant is the homotopy class of this isomorphism of complex vector bundles (of course this is independent of the almost complex structure since the space of compatible almost complex structures is contractible).
The above holds in any contact manifold (M,\xi) of arbitrary dimension. Of course when M is 3-dimensional and L is S^1, f^*\xi is the unique complex vector line bundle over S^1, automorphisms of which are parametrized up to homotopy by pi_1(U(1))=Z. So (given that the h-principle also implies that any loop in a 3-manifold is homotopic to a Legendrian loop) it appears to always be the case that the "Legendrian fundamental group" surjects onto the standard fundamental group, with kernel Z.
When M=R^3 this invariant is equivalent to the rotation number that Steven mentioned. There's a proof of the relevant h-principle in the book by Eliashberg and Mishachev. The above discussion is partly based on Section 3.3 of arXiv:0210124 by Ekholm-Etnyre-Sullivan.
http://mathhelpforum.com/geometry/132899-proof-related-inversions.html | ## Proof related to inversions
Let $P$ be a point outside a given circle $q$, let $PT$ be a straight line, and let $PAB$ be a secant with chord $AB$. Then, if $PA \cdot PB = PT^{2}$, then $PT$ is tangent to circle $q$.
The converse of this is not hard to prove, but I'm stuck on proving that statement.
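One standard route (a sketch I am adding here; it is not from the original thread): triangles $PTA$ and $PBT$ share the angle at $P$, and $PA \cdot PB = PT^{2}$ gives
$\frac{PA}{PT} = \frac{PT}{PB},$
so the two triangles are similar. Hence $\angle PTA = \angle PBT = \angle ABT$, i.e. the angle between the line $PT$ and the chord $TA$ equals the inscribed angle subtending $TA$ from the far arc. By the converse of the tangent-chord (alternate segment) theorem, $PT$ is tangent to $q$ at $T$.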
https://2022.help.altair.com/2022.1/activate/business/en_us/block_reference_guide/Activate/MathOperations/Gain.html | # Gain
This block implements a gain operation where the output is obtained by multiplying the gain parameter by the input. If the gain parameter is scalar, it is multiplied with every entry of the input to produce the output. If it is a matrix, an element-wise product is performed.
MathOperations
## Description
The Gain block implements a gain operation where the output is obtained by multiplying the gain parameter by the input. If the gain parameter is scalar, then it is multiplied with every entry of the input to produce the output.
If it is a matrix, an element-wise product is performed. When input and parameter gain are both matrices, the dimensions should be identical or one must be scalar.
Inputs can be either real or complex.
Inputs are either scalar, vector or matrix.
To perform a matrix multiplication, use the MatrixGain block instead.
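As an illustration of the element-wise behaviour described above, here is a rough sketch in Python/NumPy (my own sketch, not the Activate implementation or an official API):

```python
import numpy as np

# Element-wise gain: a scalar gain multiplies every entry of the input;
# a matrix gain is applied entry by entry (it is not a matrix product),
# so its dimensions must match the input unless one of the two is scalar.
def gain_block(u, k):
    u, k = np.asarray(u), np.asarray(k)
    return k * u                      # element-wise product with broadcasting

u = np.array([[1.0, 2.0], [3.0, 4.0]])
print(gain_block(u, 2.0))             # scalar gain
print(gain_block(u, np.array([[10.0, 20.0],
                              [30.0, 40.0]])))   # element-wise matrix gain
```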
## Parameters
| Name | Label | Description | Data Type | Valid Values |
| --- | --- | --- | --- | --- |
| gain | Gain | The gain value | Matrix | |
| overflow | Do on overflow | Defines how overflow should be handled for integer data types. | String | 'Nothing', 'Saturate', 'Error' |
| externalActivation | External activation | Specifies whether the block receives an external activation or inherits its activation through its regular input ports. When External Activation is selected, an additional activation port is added to the block. By default, external activation is not selected. | Number | 0, 1 |
## Ports
| Name | Type | Description | IO Type | Number |
| --- | --- | --- | --- | --- |
| Port 1 | explicit | | input | 1 |
| Port 2 | explicit | | output | 1 |
| Port 3 | activation | | input | externalActivation |
| Name | Value | Description |
| --- | --- | --- |
| always active | no | |
| direct-feedthrough | yes | Yes, unless the norm value of the gain is zero |
| zero-crossing | no | |
| mode | no | |
| continuous-time state | no | |
| discrete-time state | no | |
http://msemac.redwoods.edu/~darnold/math50c/matlab/pderiv/index.xhtml | ## Partial Derivatives in Matlab
Suppose that we have a function f:R^2\to R defined by
f(x,y)=9-x^2-y^2.
Let's use Matlab to draw the surface represented by the function f over the domain {(x,y): -2 le x,y le 2}. We begin by creating a grid of (x,y) pairs.
[x,y]=meshgrid(-2:.25:2);
When used in this form, Matlab uses the entries in the vector -2:.25:2 for both x and y.
Next, calculate the value of z at each point (x,y) in the grid.
z=9-x.^2-y.^2;
Next, we plot the surface, storing a handle to the surface in h1.
h1=surf(x,y,z);
We use the handle to set some property-value pairs.
set(h1,'FaceColor','magenta',...
'FaceAlpha',0.5,...
'EdgeColor','k')
Some comments of explanation are in order:
1. h1 contains a "handle." It is a numerical value associated with the surface created with the surf command. If you type get(h1) at the Matlab prompt, you will get a list of the current properties and their values for the surface in Figure 1.
2. You use Matlab's set command to change or "set" the value of a property. In this case, we set values for three properties:
• We set the 'FaceColor' to magenta ('magenta').
• We set the 'FaceAlpha' to 0.5. This is the amount of transparency we want. The value must be a number between 0 and 1, with 0 being fully transparent and 1 being completely opaque (unable to see through the surface).
• We also set the 'EdgeColor' to black ('k'). These are the gridlines that flow through the surface.
We turn on the grid and rotate the view into the standard orientation we use in class.
grid on
view(150,20)
Finally, we add some appropriate annotations.
xlabel('x-axis')
ylabel('y-axis')
zlabel('z-axis')
title('The surface defined by f(x,y) = 9 - x^2 - y^2.')
The result of this sequence of commands is the surface shown in Figure 1.
### The Partial Derivative with Respect to x.
The partial derivative of f with respect to x is defined as follows.
$f_x(x,y)=\lim_{h\to 0}\frac{f(x+h,y)-f(x,y)}{h}$
Note how y is "fixed" while x varies from x to x+h. This is an important observation.
Suppose, for example, that we wish to calculate the partial derivative of f with respect to x at the point (1,1). The definition then becomes:
$f_x(1,1)=\lim_{h\to 0}\frac{f(1+h,1)-f(1,1)}{h}.$
Because f(x,y)=9-x^2-y^2,
$\begin{eqnarray} f_x(1,1) &=&\lim_{h\to 0}\frac{[9-(1+h)^2-(1)^2]-[9-(1)^2-(1)^2]}{h}\\ &=&\lim_{h\to 0}\frac{[8-(1+h)^2]-7}{h}\\ &=&\lim_{h\to 0}\frac{8-1-2h-h^2-7}{h}\\ &=&\lim_{h\to 0}\frac{-2h-h^2}{h}\\ &=&\lim_{h\to 0}(-2-h)\\ &=&-2\end{eqnarray}$
Geometrical Interpretation: One question remains: how do we interpret the result f_x(1,1)=-2? From single variable calculus, we know that the first derivative f'(1) gives the slope of the tangent line at x=1. It is therefore completely natural to think that f_x(1,1)=-2 gives us the slope of the tangent line to the surface at the point (1,1) in the x-direction.
We can visualize this interpretation of f_x(1,1)=-2 by first noting that y=1 is "fixed." This leads us to add the plane y=1 to the plot in Figure 1. Drawing a vertical plane in Matlab is a bit tricky. The first thing to realize is the fact that y is a function of x and z. True, y=1 is constant on its domain, but it is still a function of x and z. Thus, the first step is to choose a good domain over which to draw the plane represented by y=1.
In Figure 1, note that the x-values run from -2 to 2; i.e., -2 le x le 2. Also, note that the z-values run from 1 to 9; i.e., 1le z le 9. Hence, lets create a grid of (x,z) points on this domain.
[x,z]=meshgrid(-2:.25:2,1:0.5:9);
Now for the tricky part. We need y=1 at each of these (x,z) pairs. We do this with Matlab's ones command.
y=ones(size(x));
Hold the previous plot.
hold on
Now we draw the plane and save a handle to the plane in the variable h2.
h2=surf(x,y,z);
Finally, we set the face color to a shade of gray and turn off edge plotting.
h2=surf(x,y,z);
set(h2,'FaceColor',[0.7,0.7,0.7],...
'EdgeColor','none')
The result is the image shown in Figure 2.
Adding a Tangent Line: The fact that f_x(1,1)=-2 tells us that the slope of the tangent line to the surface at the point (1,1,f(1,1)) is -2. Because f(x,y)=9-x^2-y^2, note that
(1,1,f(1,1))=(1,1,7),
so the point of tangency is the point P(1,1,7). To find the equation of the tangent line through this point in the direction of x, we need to find a vector pointing in the direction of the tangent line.
As we move in the direction of x, the y-coordinate must remain fixed. However, a slope of -2 indicates that for each positive unit moved in the x-direction, z must drop 2 units. If we start at the point P(1,1,7), then move one unit in the positive x-direction, followed by a negative 2 units in the z-direction, we would arrive at the point Q(2,1,5). Thus, the vector vec(PQ) will point in the direction of the line tangent to the surface at the point P(1,1,7) in the direction of x.
vec(PQ)=langle 2-1,1-1,5-7 rangle=langle 1,0,-2 rangle
If we let X(x,y,z) be an arbitrary point on the tangent line, then the equation of the tangent line is:
vec(PX)=t vec(PQ).
With X(x,y,z), P(1,1,7), and vec(PQ)=langle 1,0,-2 rangle, this becomes
langle x-1,y-1,z-7 rangle=t langle 1,0,-2 rangle.
Therefore, the parametric equations of the line tangent to the surface at P(1,1,7) in the direction of x is:
$\begin{eqnarray} x&=&1+t\\ y&=&1\\ z&=&7-2t. \end{eqnarray}$
We will now add this tangent line to the image in Figure 2. The first task is to determine reasonable values of the parameter t. We could just guess values for the parameter t, but the image in Figure 2 provides clues. Note that -2le x le 2. Thus,
-2 le 1+t le 2.
Solving for t,
-3 le t le 1.
But Figure 2 also requires that 1le z le 9; that is,
1 le 7-2t le 9.
Solving for t,
-1 le t le 3.
We need both requirements on t, namely -3 le t le 1 and -1 le t le 3. This means that we should choose -1 le t le 1. Thus, we should sketch the parametric equations of the line on the domain [-1,1].
t=linspace(-1,1);
x=1+t;
y=ones(size(t));
z=7-2*t;
line(x,y,z,...
'color','blue',...
'linewidth',2)
Note that you can add property-value pairs directly in the line command. We chose the color 'blue' and made the linewidth a bit thicker. If you wished, you could use instead h3=line(x,y,z), then follow with set(h3,'color','blue','linewidth',2). If you use this approach, you can capture a complete list of possible properties for the line command with the command get(h3).
The final result is shown in Figure 3.
The key thing to note is the fact that the blue line in Figure 3 is tangent to the graph of f(x,y)=9-x^2-y^2 at the point P(1,1,7) in the direction of x.
### Matlab Files
Although the following file features advanced use of Matlab, we include it here for those interested in discovering how we generated the images for this activity. You can download the Matlab file at the following link. Download the file to a directory or folder on your system.
pderiv.m
The file pderiv.m is designed to be run in "cell mode." Open the file pderiv.m in the Matlab editor, then enable cell mode from the Cell Menu. After that, use the entries on the Cell Menu or the icons on the toolbar to execute the code in the cells provided in the file. There are options for executing both single and multiple cells. After executing a cell, examine the contents of your folder and note that a PNG file was generated by executing the cell.
### Exercises
1. The partial derivative of f in the direction of y is defined by
$f_y(x,y)=\lim_{h\to 0}\frac{f(x,y+h)-f(x,y)}{h}.$
Perform each of the following tasks.
• Again define f(x,y)=9-x^2-y^2. Use the definition above to find the partial derivative of f with respect to y at the point (1,1).
• As in the activity, sketch the surface f.
• Add the plane x=1 to your image.
• Add the tangent line to the surface at the point P(1,1,7) in the direction of y. Include pencil and paper work needed to develop the parametric equations of the tangent line.
• Include a printout of the M-code that produced your image.
https://www.physicsforums.com/threads/calculating-the-visible-emission-spectrum-for-hydrogen.585319/ | # Calculating the visible emission spectrum for hydrogen
1. ### ciaranciaran1
so the title speaks for itself i hope. my problem is i dont know how to work the formula. what are the ns about??
lambda^-1 = R(n1^-2 - n2^-2)
2. ### ciaranciaran1
I should have said, R, the Rydberg constant, is given.
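To make the role of the n's concrete, here is a small illustrative calculation (my addition, not from the thread): n1 and n2 are the principal quantum numbers of the lower and upper energy levels, and the visible (Balmer) series of hydrogen has n1 = 2 with n2 = 3, 4, 5, 6.

```python
# Balmer-series wavelengths from the Rydberg formula 1/lambda = R(1/n1^2 - 1/n2^2).
R = 1.097e7                      # Rydberg constant, m^-1

n1 = 2
for n2 in (3, 4, 5, 6):
    inv_lam = R * (1 / n1**2 - 1 / n2**2)
    lam_nm = 1e9 / inv_lam       # wavelength in nm
    print(f"n2 = {n2}: lambda = {lam_nm:.1f} nm")
# Expected: ~656, 486, 434, 410 nm (H-alpha through H-delta)
```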
https://www.physicsforums.com/threads/adjustment-of-significant-figures.956188/ | # Adjustment of Significant Figures
• #1
## Homework Statement
The diameter and length of solid cylinder measured with a vernier calipers of least count 0.01 cm are 1.22 cm and 5.35 cm respectively. Calculate the volume of the cylinder and the uncertainty involved within it.
V = (1/4) π d^2 l
## The Attempt at a Solution
After substituting values in the above formula, V = 6.2509 cm^3. The uncertainty involved is 0.11 cm^3. Please see the pdf file attached for the full solution as given in my book. My observation is that the answer should be 6.25 ± 0.11 cm^3. I suggest there should be 3 significant figures in the final answer (volume) because, as per the rule, the least significant digits in the final result have to be adjusted according to the measurement with the least significant figures (3 in both given measurements). Moreover, in the absolute uncertainty the decimal places have to be accommodated according to the decimal places in the final result. Please guide me on this.
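For reference, a short script reproducing these numbers (my own addition, not part of the original post; it takes the least count as the absolute uncertainty of each reading, as the book does):

```python
import numpy as np

d, l, lc = 1.22, 5.35, 0.01          # diameter, length, least count (cm)

V = np.pi / 4 * d**2 * l             # volume, cm^3
rel = 2 * lc / d + lc / l            # fractional uncertainty of d^2 * l
dV = V * rel

print(f"V  = {V:.4f} cm^3")          # ~6.254 cm^3
print(f"dV = {dV:.4f} cm^3")         # ~0.114 cm^3  ->  V = 6.25 +/- 0.11 cm^3
```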
## Answers and Replies
• #2
verty
Homework Helper
I agree with you that the answer given in the book is not correct. By my calculations, the volume could be as much as 6.37, so 6.2 +- 0.1 is not good enough. The answer I get is 6.255 +- 0.115. This covers the range that the volume could be.
• #3
Thanks for the reply. But you have shown four significant figures in the final result 6.255, which should be three only as demanded by the given measurements, and the least count/absolute uncertainty has to be 0.11 not 0.115 as you wrote. Please reply.
• #4
verty
Homework Helper
Thanks for the reply. But you have shown four significant figures in the final result 6.255, which should be three only as demanded by the given measurements, and the least count/absolute uncertainty has to be 0.11 not 0.115 as you wrote. Please reply.
Then I think it should be 6.25 +- 0.12. What results do you get with pi = 3.14159?
• #5
verty
Homework Helper
See, if I use pi = 3.14159 I get 6.254 +- 0.114, but if you write that as 6.25 +- ?, the ? should be 0.12 because it can be as high as 6.368. Does that make sense? This is my belief, that it should cover the gap.
• #6
Thank you very much indeed for the time.
My point is still not taken. I am referring to the rule of adjustment of significant figures in the final answer. The rule says, the number of significant figures in the final answer, when measurements are multiplied or divided, could not be more than the least number of significant figures in either of the measurements. In the current case 1.22 cm and 5.35 cm both have, coincidentally, 3 significant figures, so 6.254---- cm^3 has to be rounded off till there are 3 significant figures left. In this case it will be 6.25 cm^3, not 6.254 cm^3 (for it has 4 significant figures). Once we have decided on 6.25 cm^3 as the final answer with the correct number of significant figures, what remains is its synchronization with the absolute uncertainty. 6.25 cm^3 allows only two decimal places to be retained in the absolute uncertainty. Hence 0.1125 cm^3 has to be rounded off as 0.11 cm^3 and the final answer may be written as 6.25 ± 0.11 cm^3. Please reflect.
• #7
olivermsun
The method used in the book answer is to carry the errors as percent (proportional) errors because you are multiplying two numbers with uncertainty. This is a more accurate way to deal with the error than just counting significant digits. However, I think that the absolute error due to least count of 0.01 cm might be better estimated as ±0.005 cm, which changes the answer.
• #8
Please recommend some book that would be helpful in this case.
• #9
verty
Homework Helper
This is the best resource I can find. It says that the uncertainty is the scientist's "best estimate" of the range of values. It also says that uncertainties should have one significant figure, but some scientists use 2 significant figures if the uncertainty starts with 1.
Now in the case above, we have 3 significant figures and the result of the calculation is 6.254 +- 0.114. (Do you agree? Use pi = 3.14159; because pi is exact, we can do that.) Now we can use 2 figures for the uncertainty, but it should be a best estimate. I think a best estimate is 6.25 +- 0.12 because we know the range is from 6.14 to 6.37.
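For anyone who wants to reproduce the numbers being debated here, a small Python sketch of the percent-error propagation is below. It assumes (inferring from the values quoted in this thread, not stated explicitly anywhere) a cylinder of diameter 1.22 cm and height 5.35 cm, each read to a least count of 0.01 cm.

```python
# Quick numerical check of the numbers quoted in this thread (not from the book itself).
import math

d, h = 1.22, 5.35          # assumed diameter and height, in cm
dd, dh = 0.01, 0.01        # least-count uncertainties, in cm

V = math.pi * (d / 2) ** 2 * h            # nominal volume
rel = 2 * dd / d + dh / h                 # fractional error: 2*(dd/d) + dh/h
dV = V * rel                              # propagated absolute uncertainty

V_min = math.pi * ((d - dd) / 2) ** 2 * (h - dh)   # worst-case low
V_max = math.pi * ((d + dd) / 2) ** 2 * (h + dh)   # worst-case high

print(f"V  = {V:.3f} cm^3, dV = {dV:.3f} cm^3")    # about 6.254 and 0.114
print(f"range: {V_min:.3f} .. {V_max:.3f} cm^3")   # about 6.140 .. 6.369
```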
• #10 zahid
Many thanks for the source you provided and the further explanation you added. Things seem settled now.
High regards.
Zahid
• #11 olivermsun
Yeah, just be careful that the one-significant-figure convention in that resource is not the convention set by, e.g., NIST and ISO. There the uncertainty is reported to whatever significant digits are applicable, based on the method (such as combined standard uncertainty, confidence intervals, etc.). Also, the methods used should be clearly described.
• #12 zahid
Thanks Olivermsun for the help.
Regards
Zahid
https://email.esm.psu.edu/pipermail/macosx-tex/2007-April/030113.html | # [OS X TeX] Formatting the titles of lists
Jeffrey J Weimer weimerj at email.uah.edu
Thu Apr 12 22:52:29 EDT 2007
On Apr 12, 2007, at 9:09 PM, Michael Millett wrote:
> Yes, the above is what I mean. I should have said "vertically
> aligned."
>> This might be handled by a package (paralist as Charilaos suggested).
> I looked at the documentation for "Paralist" in "LaTeX Companion"
> Second Edition, but did not find quite what I was looking for. All
> the formatting here went from top to bottom, rather than providing
> for a highlight to the left.
Look at CTAN for the documentation for the paralist package. It may
have more than you think. Also look for packages to create resumes. I
think there is one with just that name???
>> Alternatively, define a new type of command/list environment using
>>
>> \begin{minipage}{}\parbox[c]{text}\hfill\parbox[c]{text}\end
>> {minipage}
>
> This will be a new LaTeX skill for me, and I am looking forward to
> learning it.
The simplest setup is something like ...
\begin{minipage}{\linewidth}
\parbox[c]{0.3\linewidth}{Year01}%
\hspace*{\fill}\parbox[c]{0.65\linewidth}{%
Put your discussion here .... End
the last line with a period.
}
\end{minipage}
After this, you have to play with \mbox{} commands to get the center
alignments of the two inner paragraphs to work. This is where Kopka
and Daly come in handy (and some experimentation too!).
Again, if you can find (or someone suggests) a package to do
automatically what you want, go for it! I am learning that
reinventing the wheel with LaTeX is only worthwhile if you want to
make a better wheel, not if you want to go somewhere on that wheel.
>> See page 88 of the 2nd edition of Kopka and Daly (A Guide to
>> LaTeX) for a start on aligning adjoining paragraphs, especially on
>> the tricks of using "empty" \mbox{} commands to get things to work.
>
> I do not have this book. But I have noticed several times already
> in my few days on the list that this is a recommended book. I
> bought Leslie Lamport's book as my basic guide. I like it a lot. It
> just doesn't describe some things I would like to do. And The LaTeX
> Companion is often too complicated for my current LaTeX skill level.
IMO, Kopka and Daly is a step above Lamport in that it is far better
organized when you really want to DO something with LaTeX rather than …
> Charilaos made reference to the difference between LaTeX and
> TeXShop. I am not far enough in my experience with LaTeX to make
> sense of this issue, yet. All I knew is ...
LaTeX is a programming method that uses computer code to TYPESET
(create a wonderful document from) a text file. The LaTeX computer
code runs on WinXX, Unix, Linux, Mac, Solaris, ....
As a first approximation, TeXShop is a text EDITOR that allows you,
sitting on your Mac, to create the text files for LaTeX code to
typeset. On WinXX, they use (gosh, I don't even know!!!) for an
EDITOR. On Unix, they use vi or emacs. On Linux, they use ...
You can take your LaTeX text file, hand it to someone on a WinXX
machine who has the LaTeX code installed, and he or she can generate
the same document that you did using TeXShop. He/She will NOT use
TeXShop to do this, just as you will (likely) not be using vi on your
Mac when someone from the Unix world hands you a .tex file.
HTH
--
J. J. Weimer, Chemistry / Chemical & Materials Engineering
University of Alabama in Huntsville, MSB 125, 301 Sparkman Dr
Huntsville, AL 35899 phone: 256-824-6954
https://beta.wikiversity.org/wiki/Relation_(mathematics) | # Relation (mathematics)
In mathematics, a finitary relation is defined by one of the formal definitions given below.
• The basic idea is to generalize the concept of a two-place relation, such as the relation of equality denoted by the sign “${\displaystyle =\!}$” in a statement like ${\displaystyle 5+7=12\!}$ or the relation of order denoted by the sign “${\displaystyle {<}\!}$” in a statement like ${\displaystyle 5<12.\!}$ Relations that involve two places or roles are called binary relations by some and dyadic relations by others, the latter being historically prior but also useful when necessary to avoid confusion with binary (base 2) numerals.
• The concept of a two-place relation is generalized by considering relations with increasing but still finite numbers of places or roles. These are called finite-place or finitary relations. A finitary relation involving ${\displaystyle k\!}$ places is variously called a ${\displaystyle k\!}$-ary, ${\displaystyle k\!}$-adic, or ${\displaystyle k\!}$-dimensional relation. The number ${\displaystyle k\!}$ is then called the arity, the adicity, or the dimension of the relation, respectively.
## Informal introduction
The definition of relation given in the next section formally captures a concept that is actually quite familiar from everyday life. For example, consider the relationship, involving three roles that people might play, expressed in a statement of the form ${\displaystyle X~{\text{suspects that}}~Y~{\text{likes}}~Z.\!}$ The facts of a concrete situation could be organized in the form of a Table like the one below:
| Person X | Person Y | Person Z |
| --- | --- | --- |
| Alice | Bob | Denise |
| Charles | Alice | Bob |
| Charles | Charles | Alice |
| Denise | Denise | Denise |
Each row of the Table records a fact or makes an assertion of the form ${\displaystyle X~{\text{suspects that}}~Y~{\text{likes}}~Z.\!}$ For instance, the first row says, in effect, ${\displaystyle {\text{Alice suspects that Bob likes Denise.}}\!}$ The Table represents a relation ${\displaystyle S\!}$ over the set ${\displaystyle P\!}$ of people under discussion:
${\displaystyle P~=~\{{\text{Alice}},{\text{Bob}},{\text{Charles}},{\text{Denise}}\}\!}$
The data of the Table are equivalent to the following set of ordered triples:
${\displaystyle {\begin{smallmatrix}S&=&\{&{\text{(Alice, Bob, Denise)}},&{\text{(Charles, Alice, Bob)}},&{\text{(Charles, Charles, Alice)}},&{\text{(Denise, Denise, Denise)}}&\}\end{smallmatrix}}\!}$
By a slight overuse of notation, it is usual to write ${\displaystyle S({\text{Alice}},{\text{Bob}},{\text{Denise}})\!}$ to say the same thing as the first row of the Table. The relation ${\displaystyle S\!}$ is a triadic or ternary relation, since there are three items involved in each row. The relation itself is a mathematical object, defined in terms of concepts from set theory, that carries all the information from the Table in one neat package.
The Table for relation ${\displaystyle S\!}$ is an extremely simple example of a relational database. The theoretical aspects of databases are the specialty of one branch of computer science, while their practical impacts have become all too familiar in our everyday lives. Computer scientists, logicians, and mathematicians, however, tend to see different things when they look at these concrete examples and samples of the more general concept of a relation.
For one thing, databases are designed to deal with empirical data, and experience is always finite, whereas mathematics is nothing if not concerned with infinity, at the very least, potential infinity. This difference in perspective brings up a number of ideas that are usefully introduced at this point, if by no means covered in depth.
## Example 1. Divisibility
A more typical example of a two-place relation in mathematics is the relation of divisibility between two positive integers ${\displaystyle n\!}$ and ${\displaystyle m\!}$ that is expressed in statements like ${\displaystyle {}^{\backprime \backprime }n~{\text{divides}}~m{}^{\prime \prime }\!}$ or ${\displaystyle {}^{\backprime \backprime }n~{\text{goes into}}~m{}^{\prime \prime }.\!}$ This is a relation that comes up so often that a special symbol ${\displaystyle {}^{\backprime \backprime }|{}^{\prime \prime }\!}$ is reserved to express it, allowing one to write ${\displaystyle {}^{\backprime \backprime }n|m{}^{\prime \prime }\!}$ for ${\displaystyle {}^{\backprime \backprime }n~{\text{divides}}~m{}^{\prime \prime }.\!}$
To express the binary relation of divisibility in terms of sets, we have the set ${\displaystyle P\!}$ of positive integers, ${\displaystyle P=\{1,2,3,\ldots \},\!}$ and we have the binary relation ${\displaystyle D\!}$ on ${\displaystyle P\!}$ such that the ordered pair ${\displaystyle (n,m)\!}$ is in the relation ${\displaystyle D\!}$ just in case ${\displaystyle n|m.\!}$ In other turns of phrase that are frequently used, one says that the number ${\displaystyle n\!}$ is related by ${\displaystyle D\!}$ to the number ${\displaystyle m\!}$ just in case ${\displaystyle n\!}$ is a factor of ${\displaystyle m,\!}$ that is, just in case ${\displaystyle n\!}$ divides ${\displaystyle m\!}$ with no remainder. The relation ${\displaystyle D,\!}$ regarded as a set of ordered pairs, consists of all pairs of numbers ${\displaystyle (n,m)\!}$ such that ${\displaystyle n\!}$ divides ${\displaystyle m.\!}$
For example, ${\displaystyle 2\!}$ is a factor of ${\displaystyle 4,\!}$ and ${\displaystyle 6\!}$ is a factor of ${\displaystyle 72,\!}$ which two facts can be written either as ${\displaystyle 2|4\!}$ and ${\displaystyle 6|72\!}$ or as ${\displaystyle D(2,4)\!}$ and ${\displaystyle D(6,72).\!}$
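As a quick illustration (again our own code, not part of the article), the divisibility relation D can be written as a two-place predicate:

```python
def D(n: int, m: int) -> bool:
    """True exactly when n divides m, i.e. when the ordered pair (n, m) is in D."""
    return m % n == 0

print(D(2, 4), D(6, 72), D(5, 12))   # True True False
```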
## Formal definitions
There are two definitions of ${\displaystyle k\!}$-place relations that are commonly encountered in mathematics. In order of simplicity, the first of these definitions is as follows:
Definition 1. A relation ${\displaystyle L\!}$ over the sets ${\displaystyle X_{1},\ldots ,X_{k}\!}$ is a subset of their cartesian product, written ${\displaystyle L\subseteq X_{1}\times \ldots \times X_{k}.\!}$ Under this definition, then, a ${\displaystyle k\!}$-ary relation is simply a set of ${\displaystyle k\!}$-tuples.
The second definition makes use of an idiom that is common in mathematics, saying that “such and such is an ${\displaystyle n\!}$-tuple” to mean that the mathematical object being defined is determined by the specification of ${\displaystyle n\!}$ component mathematical objects. In the case of a relation ${\displaystyle L\!}$ over ${\displaystyle k\!}$ sets, there are ${\displaystyle k+1\!}$ things to specify, namely, the ${\displaystyle k\!}$ sets plus a subset of their cartesian product. In the idiom, this is expressed by saying that ${\displaystyle L\!}$ is a ${\displaystyle (k+1)\!}$-tuple.
Definition 2. A relation ${\displaystyle L\!}$ over the sets ${\displaystyle X_{1},\ldots ,X_{k}\!}$ is a ${\displaystyle (k+1)\!}$-tuple ${\displaystyle L=(X_{1},\ldots ,X_{k},\mathrm {graph} (L)),\!}$ where ${\displaystyle \mathrm {graph} (L)\!}$ is a subset of the cartesian product ${\displaystyle X_{1}\times \ldots \times X_{k}~\!}$ called the graph of ${\displaystyle L.\!}$
Elements of a relation are sometimes denoted by using boldface characters, for example, the constant element ${\displaystyle \mathbf {a} =(a_{1},\ldots ,a_{k})\!}$ or the variable element ${\displaystyle \mathbf {x} =(x_{1},\ldots ,x_{k}).\!}$
A statement of the form “${\displaystyle \mathbf {a} \!}$ is in the relation ${\displaystyle L\!}$” is taken to mean that ${\displaystyle \mathbf {a} \!}$ is in ${\displaystyle L\!}$ under the first definition and that ${\displaystyle \mathbf {a} \!}$ is in ${\displaystyle \mathrm {graph} (L)\!}$ under the second definition.
The following considerations apply under either definition:
• The sets ${\displaystyle X_{j}~\!}$ for ${\displaystyle j=1~{\text{to}}~k\!}$ are called the domains of the relation. In the case of the first definition, the relation itself does not uniquely determine a given sequence of domains.
• If all the domains ${\displaystyle X_{j}~\!}$ are the same set ${\displaystyle X,\!}$ then ${\displaystyle L\!}$ is more simply referred to as a ${\displaystyle k\!}$-ary relation over ${\displaystyle X.\!}$
• If any domain ${\displaystyle X_{j}~\!}$ is empty then the cartesian product is empty and the only relation over such a sequence of domains is the empty relation ${\displaystyle L=\varnothing .\!}$ Most applications of the relation concept will set aside this trivial case and assume that all domains are nonempty.
If ${\displaystyle L\!}$ is a relation over the domains ${\displaystyle X_{1},\ldots ,X_{k},\!}$ it is conventional to consider a sequence of terms called variables, ${\displaystyle x_{1},\ldots ,x_{k},\!}$ that are said to range over the respective domains.
A boolean domain ${\displaystyle \mathbb {B} \!}$ is a generic 2-element set, say, ${\displaystyle \mathbb {B} =\{0,1\},\!}$ whose elements are interpreted as logical values, typically ${\displaystyle 0=\mathrm {false} \!}$ and ${\displaystyle 1=\mathrm {true} .\!}$
The characteristic function of the relation ${\displaystyle L,\!}$ written ${\displaystyle f_{L}\!}$ or ${\displaystyle \chi (L),\!}$ is the boolean-valued function ${\displaystyle f_{L}:X_{1}\times \ldots \times X_{k}\to \mathbb {B} ,\!}$ defined in such a way that ${\displaystyle f_{L}(\mathbf {x} )=1\!}$ just in case the ${\displaystyle k\!}$-tuple ${\displaystyle \mathbf {x} =(x_{1},\ldots ,x_{k})\!}$ is in the relation ${\displaystyle L.\!}$ The characteristic function of a relation may also be called its indicator function, especially in probabilistic and statistical contexts.
It is conventional in applied mathematics, computer science, and statistics to refer to a boolean-valued function like ${\displaystyle f_{L}\!}$ as a ${\displaystyle k\!}$-place predicate. From the more abstract viewpoints of formal logic and model theory, the relation ${\displaystyle L\!}$ is seen as constituting a logical model or a relational structure that serves as one of many possible interpretations of a corresponding ${\displaystyle k\!}$-place predicate symbol, as that term is used in predicate calculus.
Due to the convergence of many traditions of study, there are wide variations in the language used to describe relations. The extensional approach presented in this article treats a relation as the set-theoretic extension of a relational concept or term. An alternative, intensional approach reserves the term relation to the corresponding logical entity, either the logical comprehension, which is the totality of intensions or abstract properties that all the elements of the extensional relation have in common, or else the symbols that are taken to denote those elements and intensions.
## Example 2. Coplanarity
For lines ${\displaystyle \ell \!}$ in three-dimensional space, there is a triadic relation picking out the triples of lines that are coplanar. This does not reduce to the dyadic relation of coplanarity between pairs of lines.
In other words, writing ${\displaystyle P(\ell ,m,n)\!}$ when the lines ${\displaystyle \ell ,m,n\!}$ lie in a plane, and ${\displaystyle Q(\ell ,m)\!}$ for the binary relation, it is not true that ${\displaystyle Q(\ell ,m),\!}$ ${\displaystyle Q(m,n),\!}$ and ${\displaystyle Q(n,\ell )\!}$ together imply ${\displaystyle P(\ell ,m,n),\!}$ although the converse is certainly true: any two of three coplanar lines are necessarily coplanar. There are two geometrical reasons for this.
In one case, for example taking the ${\displaystyle x\!}$-axis, ${\displaystyle y\!}$-axis, and ${\displaystyle z\!}$-axis, the three lines are concurrent, that is, they intersect at a single point. In another case, ${\displaystyle \ell ,m,n\!}$ can be three edges of an infinite triangular prism.
What is true is that if each pair of lines intersects, and the points of intersection are distinct, then pairwise coplanarity implies coplanarity of the triple.
## Remarks
Relations are classified by the number of sets in the cartesian product, in other words, the number of places or terms in the relational expression:
| Form | Name |
| --- | --- |
| ${\displaystyle L(a)\!}$ | Monadic or unary relation, in other words, a property or set |
| ${\displaystyle L(a,b)~{\text{or}}~aLb\!}$ | Dyadic or binary relation |
| ${\displaystyle L(a,b,c)\!}$ | Triadic or ternary relation |
| ${\displaystyle L(a,b,c,d)\!}$ | Tetradic or quaternary relation |
| ${\displaystyle L(a,b,c,d,e)\!}$ | Pentadic or quinary relation |
Relations with more than five terms are usually referred to as ${\displaystyle k\!}$-adic or ${\displaystyle k\!}$-ary, for example, a 6-adic, 6-ary, or hexadic relation.
https://www.projectrhea.org/rhea/index.php/Student_summary_sampling_part1_ECE438F09 | ## Basic Definition of Sampling
Sampling is the extraction of values of a continuous signal at fixed intervals. We learn more about the frequency spectrum of a signal the faster we sample it. Naturally, if the signal changes much faster than the sampling rate, these changes will not be captured accurately and aliasing occurs.
## Nyquist Sampling Theorem
The Nyquist Sampling theorem says that in order to capture all the frequency information of a bandlimited signal, the sampling frequency must be twice the maximum frequency of the signal. In other words, each frequency component must be sampled at least twice per period.
Nyquist Sampling Criteria
$\displaystyle f_m=\text{The max frequency of the signal being sampled}$
$\displaystyle f_s=\text{The sampling frequency}$
$\displaystyle f_s > 2f_m$
There are several ways to think about this idea, if it is not already intuitive. First, consider a sinusoid of arbitrary frequency. The Nyquist Sampling theorem says we must sample at least two points within one period of this sinusoid in order to determine its frequency, given that we won't be doing any guesswork. Now consider what the Fourier Transform is. The Fourier Transform is a weighted sum of complex exponentials. For a real signal, the Fourier Transform allows us to break this signal up into a sum of sines and cosines of varying magnitude and phase. All this information is conveniently packaged within the complex coefficients of the Fourier Transform. So if it is understood that a sinusoid must be sampled twice within a single period to determine its frequency, then one can also understand that once we break up the signal into a sum of sinusoids, the sampling frequency must be fast enough to properly sample the fastest sinusoid of which the signal is composed.
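As a rough numerical illustration of this (not part of the original write-up; it assumes NumPy is available), the snippet below samples a 5 Hz sinusoid at two rates. Above the Nyquist rate the dominant DFT bin sits at 5 Hz; below it, the tone shows up at an aliased frequency.

```python
# Illustrative aliasing demo: a 5 Hz tone sampled above and below its Nyquist rate (10 Hz).
import numpy as np

f0 = 5.0                        # signal frequency (Hz)

def dominant_frequency(fs, duration=4.0):
    n = int(fs * duration)
    t = np.arange(n) / fs
    x = np.sin(2 * np.pi * f0 * t)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    return freqs[np.argmax(spectrum)]

print(dominant_frequency(fs=50.0))   # ~5 Hz  (fs > 2*f0: no aliasing)
print(dominant_frequency(fs=8.0))    # ~3 Hz  (fs < 2*f0: the tone aliases to fs - f0)
```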
## The Sampling Process
In theory, here is how we would like to sample our signals.
Step 1: Begin with a continuous function x(t).
Step 2: Sample x(t) using an impulse generator or comb function.
Dirac Comb or Impulse Train:
$p_T(t)=\sum_{k=-\infty}^\infty\delta(t-kT_s)$
Sampling of x(t):
$x_s(t)=p_T(t)x(t)=\sum_{k=-\infty}^\infty x(kT_s)\delta(t-kT_s)$
Step 3: Discretize the signal.
$\displaystyle x[n]=x(nT_s)$
After Step 3, the signal is ready to be put through a discrete filter.
It is important to note that this is an idealization of the sampling process. To adhere to the Nyquist sampling theorem, the sampling frequency must be at least twice the maximum frequency. Often, we do not know what the maximum frequency of the signal is. To minimize the effects of aliasing, the signal is first put through a lowpass filter. This effectively sets the maximum frequency of the signal equal to the cutoff frequency of the filter and allows us to determine a sampling frequency that will satisfy the Nyquist Sampling theorem. This will reduce the effects of aliasing, but may also distort the signal, since higher frequencies are inevitably lost. We also cannot generate an impulse in real life. The actual methods used to sample a continuous time signal will be introduced in sampling part 2. Finally, a sampled signal must be quantized before discretization. This is because digital filters are limited in what numbers they can represent, which depends on the number of bits used to represent each sample.
To get a better understanding of what is actually happening between Steps 1-3, it is good to observe the frequency domain representation of the signal as it passes through each stage of the sampling process. The following explanation adheres to the idealization of the sampling process.
## From a Frequency Standpoint
Step 1: The signal x(t) may be periodic or aperiodic. If the signal is periodic, the frequency domain representation is discrete. If the signal is aperiodic, the frequency domain representation is continuous. A good way to remember this is to remember that sampling in time is equivalent to convolving the frequency domain representation of your signal with an impulse train in the frequency domain. Conversely, sampling in the frequency domain is equivalent to convolving the time domain representation of your signal with an impulse train.
$P_T(f)=1/T\sum_{k=-\infty}^\infty\delta(f-kf_s)$
Step 2: When the signal x(t) is multiplied by the dirac comb p(t), this is equivalent to convolving the frequency domain representation of x(t) with the frequency domain representation of p(t). Since the Fourier Transform of the comb is also an impulse train in the frequency domain, the convolution of X(f) with P(f) simply makes copies of X(f) at each impulse with the magnitude of X(f) scaled by the sampling frequency. The sampled signal now has a frequency domain representation which is periodic with respect to the sampling frequency.
$X_s(f)=X(f)*P_T(f)=1/T\sum_{k=-\infty}^\infty X(f-kf_s)$
Step 3: To discretize the sampled signal, the frequency must be scaled such that the frequency is periodic with respect to $2\pi$. This is because discrete time filters are periodic with respect to $2\pi$. The reason can be seen below.
$x_s(t)=\sum_{k=-\infty}^\infty x(kT_s)\delta(t-kT_s)$
$\mathcal{F}\lbrace x_s(t) \rbrace=\int_{-\infty}^{\infty}\sum_{k=-\infty}^\infty x(kT_s)\delta(t-kT_s)e^{-jwkT_s}dt=\sum_{k=-\infty}^\infty x(kT_s)e^{-jwkT_s}$
The trick to step two is to realize that taking the integral of a weighted sum of impulses is simply the sum of the weights. The result is the Discrete Time Fourier Transform. To solve this summation, we generally use the formula for the sum of a geometric series. This leaves a $e^{-jw}$ term in the DTFT, which causes $\omega$ to become 'trapped' in the complex exponential term, causing the periodicity of the DTFT to be $2\pi$ when plotted versus frequency.
$\sum_{n=-\infty}^\infty x[n]e^{-jwn}$
$X(e^{jw_d})=X_s(f_c f_s/2\pi)$
To transform the continuous time sampled signal to its discrete time representation, let $f_c=\frac{f_d f_s - 2\pi f_s k}{2\pi}$ (for any integer k), where f_s is the sampling frequency and f_c is a frequency corresponding to the continuous time frequency domain representation. f_d is the corresponding frequency for the discrete time representation of the sampled signal. Since the sampled signal is periodic in the frequency domain, the $2\pi k$ term is there to account for this. How this is actually done will be discussed in sampling part 2.
$X_s(e^{j 2\pi f_c})=X_s(f_c)\arrowvert_{f_c=\frac{f_d f_s - 2\pi f_s k}{2\pi}}$
Now that the signal has been discretized, a discrete time filter may be applied to it.
## Reconstructing the Signal
After the signal has been sampled, discretized, and processed in discrete time, the signal must be reconstructed. The discrete time signal is periodic with respect to 2*pi. To place it back into context with the continuous time domain, the frequency must be scaled again such that the frequency domain is periodic with respect to the sampling frequency. To do this, let $f_d=\frac{2\pi f_c - 2\pi f_s k}{f_s}$.
An impulse generator is used to create impulses separated by a time interval equal to the sampling period, with the weight of each impulse equal to the values of the discrete time signal. In the frequency domain, we now have the frequency domain representation of the filtered signal periodic with respect to the sampling frequency. To extract just one period of the frequency domain representation of the signal, the signal is convolved with sinc(t/T_s). This is equivalent to an ideal lowpass filter with a cutoff frequency of $\frac{f_s}{2}$ and magnitude $T_s$. Remember that when the original signal was sampled, the frequency domain was scaled by $f_s$. Multiplying by $T_s$ will undo this scaling. The end result is a perfect reconstruction of the processed signal.
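A small numerical sketch of this ideal reconstruction (our own illustration, assuming NumPy; the test signal and rates are made up) samples a bandlimited signal above its Nyquist rate and rebuilds it on a fine grid with the interpolation formula $x_r(t)=\sum_n x[n]\,\mathrm{sinc}((t-nT_s)/T_s)$:

```python
# Ideal (sinc) reconstruction of a bandlimited signal from its samples.
import numpy as np

fs = 20.0                 # sampling rate (Hz), well above 2 * f_max = 6 Hz
Ts = 1.0 / fs
n = np.arange(0, 40)      # 2 seconds of samples

def x(t):
    # Bandlimited test signal with components at 1 Hz and 3 Hz
    return np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.cos(2 * np.pi * 3.0 * t)

samples = x(n * Ts)

t_fine = np.linspace(0.5, 1.5, 501)   # stay away from the edges of the finite record
# x_r(t) = sum_n x[n] * sinc((t - n*Ts) / Ts)   (np.sinc is the normalized sinc)
x_rec = np.array([np.sum(samples * np.sinc((t - n * Ts) / Ts)) for t in t_fine])

print(np.max(np.abs(x_rec - x(t_fine))))   # small; limited only by truncating the infinite sum
```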
https://www.ideals.illinois.edu/handle/2142/70901 | ## Files in this item
## Description
Title: An Application of the Lie Group Theory of Continuous Point Transformations to the Vlasov-Maxwell Equations (Plasma Physics)
Author(s): Haill, Thomas Arthur
Department / Program: Nuclear Engineering
Discipline: Nuclear Engineering
Degree Granting Institution: University of Illinois at Urbana-Champaign
Degree: Ph.D.
Genre: Dissertation
Subject(s): Engineering, Nuclear
Abstract: The concept of invariance of partial differential equations under Lie groups of continuous point transformations is employed to study the Vlasov-Maxwell equations of plasma physics. These equations are first expressed in arbitrary orthogonal, curvilinear coordinates. Their invariance properties are studied in Cartesian, cylindrical and spherical geometries. One-to-one mappings between the admitted groups of point transformations in the different geometries are demonstrated. The invariance properties of the electrostatic Vlasov-Maxwell equations in one-dimensional Cartesian, cylindrical and spherical geometries are also studied. Group invariants are used to reduce these equations to similarity form with one less independent variable. An attempt is made to solve the reduced Vlasov-Maxwell equations for a particular self-similar solution. Finally, relationships are demonstrated between the groups of point transformations admitted by the Vlasov-Maxwell equations and the groups of point transformations admitted by the moment equations derivable from the Vlasov-Maxwell equations.
Issue Date: 1985
Type: Text
Description: 187 p. Thesis (Ph.D.)--University of Illinois at Urbana-Champaign, 1985.
URI: http://hdl.handle.net/2142/70901
Other Identifier(s): (UMI)AAI8511613
Date Available in IDEALS: 2014-12-16
Date Deposited: 1985
https://slideplayer.com/slide/716787/ | # Electricity & Magnetism Lecture 5: Electric Fields and Field Lines.
Electricity & Magnetism Lecture 5: Electric Fields and Field Lines
Summary: Lecture 4
• Coulomb's Law – electrostatic force between charges
• Coulomb's Law in vector form
• Coulomb force vs gravity – the electrostatic force is in general much stronger
• Superposition
Today's Topics
• The Electric Field
• Vector Fields
• Superposition & the Electric Field
• Electric Field Lines
The Electric Field — demonstrations: Van de Graaff generator and a thread; Van de Graaff generator and many threads.
Electric Field: physicists did not like the concept of action at a distance, i.e. a force caused by an object a long distance away. They preferred to think of an object producing a field, and of other objects interacting with that field. Thus, rather than a + charge acting directly on a − charge, they liked to think of the + charge producing a field with which the − charge interacts.
Electric Field: the electric field E is defined as the force acting on a test particle +Q₀ divided by the charge of that test particle, E = F/Q₀. Thus the electric field from a single point charge Q is E = kQ/r², directed radially away from a positive charge.
Electric field of a single charge. Note: the electric field is defined everywhere, even if the test charge is not there. (Slide links: electric field from test particles; electric field from isolated charges — interactive.)
Charged particles (+Q and −Q) in an electric field: using the field to determine the force.
Vector & Scalar Fields The Electric Field
Electric Field as a vector field: the electric field is one example of a vector field. A field (vector or scalar) is defined everywhere. A vector field has direction as well as size. The electric field has units of N/C.
Other examples of fields: elevation above sea level is a scalar field. Elevation is defined everywhere (on the earth). Elevation has a size (and unit), i.e. a length, measured in m. Elevation does not have a direction. (Slide figure: a contour diagram with 50 m and 100 m elevation contours.)
Other examples of fields: slope is a vector field. Slope is defined everywhere (on the earth). Slope has a size (though no dimension), e.g. 10%, 1 in 10, 2°. Slope does have a direction. (Slide figure: a contour diagram.)
Representation of the Electric Field Electric Field Lines
Representation of the Electric Field It would be difficult to represent the electric field by drawing vectors whose direction was the direction of the field and whose length was the size of the field everywhere
Representation of the Electric Field: instead we choose to represent the electric field with lines whose direction indicates the direction of the field. Notice that as we move away from the charge, the density of lines decreases. These are called electric field lines.
Drawing Electric Field Lines
• The lines must begin on positive charges (or at infinity).
• The lines must end on negative charges (or at infinity).
• The number of lines leaving a +ve charge (or approaching a -ve charge) is proportional to the magnitude of the charge.
• Electric field lines cannot cross.
(Slide figure: electric field lines for pairs of charges; the annotations note where the field is zero at the midpoint and where it is not.)
Field lines for a conductor
Drawing Electric Field Lines: Examples From Electric field vectors to field lines Field lines from all angles Field lines representation
Quiz: the field direction. A charge +q is placed at (0,1) and a charge −q is placed at (0,−1). What is the direction of the field at (1,0)?
• A) i + j
• B) i − j
• C) −j
• D) −i
(A numerical check follows below.)
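A quick numerical check of the quiz (our own sketch, not part of the slides; constants are set to 1 since only the direction matters):

```python
# Superposition check: +q at (0, 1), -q at (0, -1), field point at (1, 0).
import numpy as np

def E_point(q, r_charge, r_field):
    """Field of a point charge with k = 1: E = q * (r - r') / |r - r'|**3."""
    d = np.asarray(r_field, float) - np.asarray(r_charge, float)
    return q * d / np.linalg.norm(d) ** 3

E = E_point(+1.0, (0.0, 1.0), (1.0, 0.0)) + E_point(-1.0, (0.0, -1.0), (1.0, 0.0))
print(E / np.linalg.norm(E))   # ~[0, -1], i.e. the -j direction (answer C)
```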
Electric Field Lines: the slide defines the number density of field lines (the defining equations appeared on the slide but are not reproduced in this transcript).
Interpreting Electric Field Lines
• The electric field vector, E, is tangent to the electric field lines at each point along the lines.
• The number of lines per unit area through a surface perpendicular to the field is proportional to the strength of the electric field in that region.
Superposition & Electric Field
Summary: Lecture 5
• The electric field is related to the Coulomb force by E = F/Q₀; thus, knowing the field, we can calculate the force on a charge (F = QE).
• The electric field is a vector field.
• Field lines illustrate the strength and direction of the electric field.
• Using superposition, the field of several charges is the vector sum of the fields of the individual charges.
http://math.stackexchange.com/questions/100549/limit-of-supremum-of-a-sequence | # limit of supremum of a sequence
Can anyone help me with this?
Let $c$ be a real number. I would like to show that $$\limsup_{n \to \infty}\sqrt[n]{\left|\frac{i}{2}\left(\frac{(c-i)^{n+1}-(c+i)^{n+1}}{c^{2}+1}\right)\right|}=\sqrt{c^{2}+1}.$$
I came up with this when I was trying to prove that the radius of convergence for the power series of then function $\frac{1}{x^2+1}$ at a real point $c$ is $\sqrt{c^2+1}$.
I edited your post. I hope that I maintained your original question, but please edit the post if I accidentally changed the meaning of anything. Also, note that $|i| = 1$ and $s^{1/n} \to 1$ for any positive number $s$. Thus, you need only consider $$\limsup_{n \to \infty} \sqrt{|(c-i)^{n+1} - (c+i)^{n+1}|}.$$ – JavaMan Jan 19 '12 at 19:46
Write $c+\mathrm i=r\mathrm e^{\mathrm it}$ with $r=\sqrt{c^2+1}$ and $t$ in $(0,\pi)$ such that $\cos(t)=c/r$. Then, the LHS is $$R_n=\sqrt[n]{\left|r^{n-1}\,\sin((n+1)t)\right|}=r^{1-1/n}\cdot u_n,$$ with $$u_n=\sqrt[n]{\left|\sin((n+1)t)\right|}.$$ Then $r^{1-1/n}\to r$. Note that there is no reason to believe the sequence $(u_n)$ should converge. For example, if $c=1$, $t=\pi/4$, hence it has two limit points: $0$ for the subsequence $(u_{4n+3})$ (which is identically $0$), and $1$ otherwise.
Fortunately, the question asks for the limsup of $(R_n)$, not for its (nonexistent) limit. The key remark here is that, for every fixed $t$ in $(0,\pi)$, $u_n^n=\left|\sin((n+1)t)\right|$ is periodically at least some $\varepsilon(t)\gt0$. On the other hand, $u_n\leqslant1$ for every $n$, hence $$1=\lim\limits_{n\to\infty}\sqrt[n]{\varepsilon(t)}\leqslant\limsup\limits_{n\to\infty}\ u_n\leqslant1.$$ Here is a proof that $\varepsilon(t)$ exists. Since $t\gt0$, the sequence of angles $(n+1)t$ grows to infinity, and since $t\lt\pi$, it jumps strictly less than $\pi$ at a time, hence it must land periodically in an interval of length $t$ centered at a point of $\pi/2+2\pi\mathbb Z$. On such an interval, the sine is uniformly at least $\varepsilon(t)=\cos(t/2)\gt0$, QED.
Finally, $$\limsup\limits_{n\to\infty}\ R_n=r\cdot\limsup\limits_{n\to\infty}\ u_n=r=\sqrt{c^2+1}.$$
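A quick numerical sanity check of this result (not part of the original answer; plain Python) for $c=1$, where the sequence itself has no limit but its limsup is $\sqrt{2}$:

```python
# Numerical check of the limsup for c = 1 (expected value: sqrt(2) ~ 1.41421).
c = 1.0

def R(n):
    z = 0.5j * ((c - 1j) ** (n + 1) - (c + 1j) ** (n + 1)) / (c * c + 1)
    return abs(z) ** (1.0 / n)

values = [R(n) for n in range(1, 401)]
print(min(values[-50:]))    # 0.0 -- the sequence keeps dipping to zero, so it has no limit
print(max(values[200:]))    # ~1.41, approaching sqrt(2): the limsup claimed above
```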
https://math.stackexchange.com/questions/1680800/how-does-one-project-the-gradient-at-a-point-on-a-surface-into-a-plane | # How does one project the gradient at a point on a surface into a plane?
I am studying Multivariable Calculus and have come to the following excerpt in my book:
I can see clearly how they get from the given function to
$$y'(x)\ =\ \frac{3y}{x}$$
And I understand that the corresponding path passes through the given point. The following line, however, leaves me totally lost:
You can verify that the solution to this differential equation is $$y\ =\ \frac{4x^3}{27}$$ and the projection of the path of steepest descent in the xy-plane is the curve $$y\ =\ \frac{4x^3}{27}$$
How did they get from any of the given information--the initial equation, its gradient, the slope of the gradient, etc.--to the above equation for y? Furthermore, how am I to find the projection of the gradient in the xy-plane?
• You have a Separable DEQ that leads to $\displaystyle \int \dfrac{1}{y}~dy = \int \dfrac{3}{x}~dx$ with initial condition $y(3) = 4$. – Moo Mar 3 '16 at 14:35
• @Moo - Fist bump for getting it right--and an awesome username – StudentsTea Mar 3 '16 at 17:48
• For the rest of your questions, I think this write-up and Examples would do it: aleph0.clarku.edu/~djoyce/ma131/directional.pdf – Moo Mar 3 '16 at 18:02
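For completeness, here is a short SymPy check of the separable ODE and its solution (our own sketch; it assumes SymPy is installed and that `dsolve` accepts the `ics` initial-condition argument, as in recent SymPy versions):

```python
# Solve dy/dx = 3y/x with y(3) = 4 and confirm the projected path y = 4x^3/27.
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.Function('y')

ode = sp.Eq(y(x).diff(x), 3 * y(x) / x)        # the separable DEQ from the comment above
sol = sp.dsolve(ode, y(x), ics={y(3): 4})

print(sol)   # Eq(y(x), 4*x**3/27)
```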
http://onelab.info/wiki/Magnetostriction | # Magnetostriction
2D model of inductor with magnetostriction.
## Introduction
To run the model, open choke.pro with Gmsh.
This 2D model computes the mechanical deflections of a two-column inductor (choke) with distributed air gaps along the limbs. It takes into account:
• the magnetostriction effect in magnetic sheets
• the Maxwell stress tensor in the whole domain (air gaps and magnetic core)
Magnetostatics and structural mechanics are weakly coupled, assuming that mechanical deflections do not modify the magnetic field distribution. The magnetic field distribution is first computed, and the resulting Maxwell stress tensor and magnetostriction strain tensor are then calculated. The mechanical model also makes it possible to estimate the different resonant frequencies of the inductor.
File details:
• choke.geo: parametrized geometry of the two-limb inductor
• choke.pro: .pro file associated to choke.geo
• Magsta2D.pro: the magnetostatics physics
• Elasticity_2D.pro: the structural mechanics
• jacobian.pro: the integration and Jacobian methods
• magnetostriction.txt: the magnetostriction curve $$\Lambda(B)$$
## Geometry
A two-column inductor is modelled, and the geometry can be modified using the following parameters:
• column height
• column width
• number of air gaps
• air-gap thickness
• yoke length
• winding thickness
## Boundaries
### Magnetic
The magnetic vector potential a is fixed to 0 on the external boundaries of the system. The current density is imposed in the winding and deduced from the nominal current, the number of turns and the cross section of the winding.
### Mechanical
The lower yoke is assumed fixed (displacement u = 0).
## Magnetics
A 2D magnetostatic solver is used for this study. The model computes the magnetic vector potential a. Materials are considered linear and current sources are directly imposed. The solver is coded in the file MagSta_2D.pro
$$\begin{eqnarray} \nabla^2 \mathbf{A} + \mu_0 \mu_r \mathbf{j_z} = \mathbf{0} \label{eq:vector_potential} \end{eqnarray}$$
Several post-processing quantities are predefined:
• The flux density B
• The magnetic field H
• The energy per length unit (J/m)
## Maxwell Stress Tensor
The Maxwell Stress Tensor is deduced directly from the magnetic field and is injected into the mechanical solver.
$$\begin{eqnarray} \sigma_{mst}= \frac{1}{\mu_0\mu_r}\begin{bmatrix} B_x^2 -\frac{1}{2}B^2 & B_xB_y\\ B_xB_y & B_y^2 -\frac{1}{2}B^2 \end{bmatrix} \label{eq:stress} \end{eqnarray}$$
The following code is used:
sig_maxwell[] = Vector[ CompX[$1]*CompX[$1] - Norm[$1]*Norm[$1]/2,
                        CompY[$1]*CompY[$1] - Norm[$1]*Norm[$1]/2,
                        CompX[$1]*CompY[$1] ];
## Magnetostriction strain tensor
The magnetostriction tensor is obtained using the flux density map together with the magnetostriction curve of the material, which links the strain with the flux density. This kind of curve is presented in the following figure:
The curve makes it possible to determine the strain tensor in the (B_t, B_n) reference frame presented below:
The strain tensor is decomposed into two contributions: the normal and tangential magnetostriction, called respectively $$\lambda_N$$ and $$\lambda_T$$. $$\lambda_N$$ can either be obtained from experimental data or, by assuming that magnetostriction does not modify the volume, from the simple relation $$\lambda_N=-\lambda_T/2$$.
$$\begin{eqnarray} \epsilon_{(B_t,B_n)}=\begin{bmatrix} \lambda_T & 0 \\ 0 & \lambda_N \end{bmatrix} \label{eq:strain_tensor1} \end{eqnarray}$$
In order to be injected into the mechanical model, the strain tensor needs to be converted into the (x,y) basis. This is done by computing the change-of-basis matrix P; the strain tensor in the basis (x,y) is then determined as follows.
$$\begin{eqnarray} P=\begin{bmatrix} \frac{B_x}{B} & -\frac{B_y}{B} \\ \frac{B_y}{B} & \frac{B_x}{B} \\ \end{bmatrix} \label{eq:P} \end{eqnarray}$$
$$\begin{eqnarray} \epsilon_{(x,y)}=P \epsilon_{(B_t,B_n)}\ P^t \label{eq:PPT} \end{eqnarray}$$
$$\begin{eqnarray} \epsilon_{(x,y)}=\begin{bmatrix} \frac{1}{B^2}(\lambda_T B_x^2+\lambda_N B_y^2) & \frac{B_xB_y}{B^2}(\lambda_T-\lambda_N) \\ \frac{B_xB_y}{B^2}(\lambda_T-\lambda_N) & \frac{1}{B^2}(\lambda_T B_y^2+\lambda_N B_x^2) \\ \end{bmatrix} \label{eq:strain_xyz} \end{eqnarray}$$
The following code is used:
lamb[] = Vector[ lambdap[Norm[$1]], lambdaper[Norm[$1]], 0 ];
sig_vect[] = C_m[] * lamb[$1];
sig_mat[] = Tensor[ CompX[sig_vect[$1]], CompZ[sig_vect[$1]], 0,
                    CompZ[sig_vect[$1]], CompY[sig_vect[$1]], 0,
                    0, 0, 0 ];

// Change of basis
P[]  = Tensor[ CompX[$1]/Norm[$1], -CompY[$1]/Norm[$1], 0,
               CompY[$1]/Norm[$1],  CompX[$1]/Norm[$1], 0,
               0, 0, 1 ];
PP[] = Transpose[P[$1]];

sig_PPP[] = P[$1] * sig_mat[$1] * PP[$1];
sig_magnetostriction[] = Vector[ CompXX[sig_PPP[$1]], CompYY[sig_PPP[$1]], CompXY[sig_PPP[$1]] ];
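As a standalone cross-check of the change-of-basis formula above (a NumPy sketch independent of the GetDP model; the flux-density components and strain values are made-up test numbers):

```python
# Verify that P * diag(lambda_T, lambda_N) * P^T reproduces the closed-form eps_(x,y).
import numpy as np

Bx, By = 1.2, 0.7                    # arbitrary test flux density components (T)
lam_T, lam_N = 8e-6, -4e-6           # arbitrary test magnetostriction strains

B = np.hypot(Bx, By)
P = np.array([[Bx / B, -By / B],
              [By / B,  Bx / B]])
eps_Bt_Bn = np.diag([lam_T, lam_N])

eps_xy = P @ eps_Bt_Bn @ P.T

# Closed-form expression quoted in the text
eps_ref = np.array([
    [(lam_T * Bx**2 + lam_N * By**2) / B**2, Bx * By * (lam_T - lam_N) / B**2],
    [Bx * By * (lam_T - lam_N) / B**2, (lam_T * By**2 + lam_N * Bx**2) / B**2],
])

print(np.allclose(eps_xy, eps_ref))   # True
```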
## Mechanical
A 2D mechanical solver is added to the model. This harmonic solver computes the deflection of the inductor due to the magnetostriction and the Maxwell stress tensor. It can also compute the eigenmodes of the magnetic core. No damping is currently added, so the resonance magnitude is only limited by numerical damping effects.
## Resolution
3 resolution methods are provided:
• A pure magnetostatic study
• A magneto-mechanical study, which allows one to estimate the complete deflection of the magnetic core due to magnetic effects (the combined effect of magnetostrictive and reluctance forces)
• A mechanical study to obtain the eigenmodes of the inductor.
## References
Models developed by Mathieu Rossi, Jean Le Besnerais and Christophe Geuzaine. Copyright (c) 2015 EOMYS
https://asiminah.github.io/research/ | My research interests are in arithmetic geometry and algebraic number theory. I am interested in studying elliptic curves and Galois representations. My advisor is Álvaro Lozano-Robledo.
I am currently working on computing the proportion of sneaky primes for pairs of elliptic curves (both non-CM and CM) with John Cullinan and Gabrielle Scullard.
From fall 2019 to spring 2020, I worked on my undergraduate honors thesis with Andrew Obus. In my honors thesis, we calculated the probability that the gcd of a pair of quadratic integers $$n,m$$, chosen randomly, uniformly, and independently from the set of quadratic integers of norm $$x$$ or less, is $$k$$. We also calculated the expected norm of gcd($$n,m$$) as $$x$$ tends to infinity, with explicit error terms. We determined the probability and expected norm of the gcd for quadratic integer rings that are UFDs. We also outlined a method to determine the probability and expected norm of the gcd of elements in quadratic integer rings that are not UFDs.
In the summer of 2019, I participated in the NSF REU at Texas A&M University. In our research project, we proved that the crank partition function is asymptotically equidistributed modulo $$Q$$, for any odd number $$Q$$. To prove this, we obtained effective bounds on the error term from Zapata Rolon’s asymptotic estimate for the crank function. We then used those bounds to prove the surjectivity and strict log-subadditivity of the crank function. This was joint work with Wei-Lun Tsai and Aaron Kreigman.
In the summer of 2018, I participated in the NSF REU at Oregon State University. In our research project, we showed that eta-quotients which are modular for any congruence subgroup of level $$N$$ coprime to 6 can be viewed as modular for $$\Gamma_0(N)$$. We then categorized when even weight eta-quotients can exist in $$M_k(\Gamma_1(p))$$ and $$M_k(\Gamma_1(pq))$$, for distinct primes $$p,q$$. We also provided some new examples of elliptic curves whose corresponding modular forms can be written as a linear combination of eta-quotients, and we described an algorithmic method for finding additional examples. This was joint work with Holly Swisher, Michael Allen, Nicholas Anderson, and Benjamin Oltsik.
https://chemistry.stackexchange.com/questions/50994/what-is-the-oxidation-state-of-copper-and-iron-in-chalcopyrite | # What is the oxidation state of copper and iron in chalcopyrite?
What is the oxidation state of Cu and Fe in chalcopyrite ($\ce{CuFeS2}$)? Some websites tell that it is $\ce{Cu(I)Fe(III)S2}$, and others tell that it is $\ce{Cu(II)Fe(II)S2}$. Which is right?
A number of papers report that the formal valency states of chalcopyrite are best considered as $\ce{Cu+Fe^3+S2}$, based on various computational and spectroscopic evidence. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8441904783248901, "perplexity": 1109.1948821483659}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540508599.52/warc/CC-MAIN-20191208095535-20191208123535-00341.warc.gz"} |
https://www.meritnation.com/ask-answer/question/what-is-the-difference-between-density-and-relative-density/gravitation/1517235 | # what is the difference between density and relative density of a substance
DENSITY: the density of an object is defined as the mass per unit volume.
RELATIVE DENSITY:It is the ratio of the density of the object to the density of the water.
RELATIVE DENSITY=DENSITY OF THE OBJECT/DENSITY OF WATER
When you are talking about the mass by unit volume, it is called as density.
For eg. Density of water = 1000 kg m-3
Density of any other substance = x
Depending upon what the value of 'x' is, the other substance may sink / float.
But when you are 'comparing' the density value of the object x to the density of water, it is referred to as relative density.
For eg. Density of a substance = 3000 kg m-3
and Density of water = 1000 kg m-3.
Then acc. to the formula,
Relative Density = DENSITY OF THE OBJECT/DENSITY OF WATER
= 3000/1000 = 3
Since it is more than 1, obviously it will sink.
This is the concept.
Hope it helps.
Cheers!
Density is defined as mass per volume. Density of a material is the amount of substance contained in unit volume of the material. SI unit of density is kg/m3.
Relative density is the ratio of density of a substance to the density of a reference material. Usually, the reference material is water. Relative density is also known as specific gravity.
Density:
The density of a substance is defined as mass of the substance per unit volume i.e
Density = Mass / Volume .
The SI unit of density is kg/m3
Relative Density:
The Relative Density of a substance is the ratio of its density to that of water. i.e
Relative Density = Density of substance / Density of water.
Relative Density is unitless.
The density of a substance is defined as the mass per unit volume of that substance. The SI unit of density is kg/m3.
The relative density of a substance is the ratio of its density to that of a reference material. Usually, the reference material is water. Relative density is also known as specific gravity. It is a pure number, and has no unit
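As a tiny illustration of the formula above, a short Python check (the sample densities are made up):

```python
def relative_density(density_substance, density_water=1000.0):
    """Relative density (specific gravity) = density of substance / density of water, both in kg/m3."""
    return density_substance / density_water

print(relative_density(3000.0))   # 3.0 -> more than 1, so the substance sinks in water
print(relative_density(500.0))    # 0.5 -> less than 1, so it floats
```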
https://helpingwithmath.com/unit-circle/ | Home » Math Theory » Geometry » Unit Circle
Unit Circle
Introduction
To study the unit circle and learn about its properties and importance in mathematics, let us first recall what a circle is. We have learnt that a circle is a type of shape. It is a two-dimensional figure formed by a set of points that are at a constant or at a fixed distance from a fixed point in the plane. This fixed distance is called the radius of the circle and the fixed point is called the centre of the circle.
Parts of a Circle
Circumference – The circumference of a circle is the distance around the boundary of the circle. In other words, it is the perimeter of the circle.
Radius – Radius is the distance from the centre of a circle to any point on the boundary of the circle.
Diameter – Diameter is the line from one point on the boundary of the circle to another point and passing through the centre of the circle. It is twice the length of the radius.
Chord of a Circle – The chord of a circle is the line from one point on the boundary of the circle to another point. The diameter is the longest chord of a circle.
In the circle above, O is the centre of the circle.
The line segment AO is the radius of the circle.
The line segment PQ is the diameter of the circle. Note that the line segment PQ is passing through the centre O
The line segment AQ is a chord of the circle that joins two points A and Q that lie on the boundary of the circle.
The curve formed by AQ is the arc of the circle.
A type of circle is the unit circle. Let us find out what do we mean by a unit circle?
Definition:
The meaning of unit, as we all know is 1. Therefore it can easily be made out that a unit circle is a circle with a radius of 1. The following figure represents the circle with the radius as 1 unit. We have learnt in the circle that that radius and the centre are two important components for studying circles. So, how do we define the values of the centre and the radius of the unit circle? Let us find out.
By the very definition of the unit circle, it is clear that the radius of the unit circle is always one unit.
Centre of the Unit Circle
We now know that by definition, the radius of the unit circle is always 1. Where does the centre of the unit circle lie? Can it be any point on the Cartesian Coordinate Plane or it is also fixed like the radius of the unit circle?
The centre of the unit circle just like its radius is a fixed point. The centre of the unit circle is the point of origin, i.e. ( 0, 0 ).
Equation of a Unit Circle
The equation of the unit circle is derived from the general equation of a circle and it has all the properties of the circle. In other words, the equation of a unit circle is represented using the second-degree equation with two variables x and y.
We know that the general equation of a circle is given by
( x – a )2 + ( y – b )2 = r2, where,
The centre of the circle is given by the point ( a, b ) and the radius of the circle is r.
Now, we will use the above equation to derive the general equation of the unit circle.
We have just learnt that the radius of the unit circle is one unit while the centre of the unit circle lies at the point of origin, i.e. ( 0, 0 )
Therefore, for the above equation, in case of unit circle,
r = 1 and ( a, b ) = ( 0, 0 )
Substituting these values of ( a, b ) and r in the above equation, we will get
( x – 0 )2 + ( y – 0 )2 = 12
⇒ x2 + y2 =1
Now we have understood what a unit circle is. But, the question arises that what is so peculiar about a unit circle and what is its importance in mathematics that we need to learn about it separately inline any other circle having a different centre and a radius. Let us find out.
Importance of Unit Circle in Mathematics
The unit circle plays a significant role in a number of different areas of mathematics. Some of the areas where we find extensive use and application of unit circles include:
1. The functions of trigonometry are most simply defined using the unit circle.
2. The unit circle has the ability to explicitly write the coordinates of a number of points lying on the unit circle with very little computation.
3. The unit circle can also be considered to be the contour in the complex plane defined by |x| = 1, where |x| represents the complex modulus
4. The unit circle is considered as the so-called ideal boundary of the two-dimensional hyperbolic plane H2 in both the Poincaré hyperbolic disk and Klein-Beltrami models of hyperbolic geometry.
Let us now study the use of unit circles in trigonometry.
Trigonometric Functions using Unit Circle
The trigonometric functions of sine, cosine, and tangent can be calculated using a unit circle. In order to calculate the trigonometric functions, we will first need to apply the Pythagoras theorem in a unit circle. Therefore, let us consider a right triangle placed in a unit circle in the Cartesian coordinate plane. In this plane,
• The hypotenuse of the right triangle is represented by the radius of the circle.
• An angle θ with the positive x-axis is made by the radius vector.
• The coordinates of the endpoint of the radius vector are (x, y)
• The length of the base is the value of x while the length of the altitude is the value of y
Therefore, for any angle t, we can label the intersection of its terminal side and the unit circle by its coordinates, ( x, y ). The coordinates x and y will be the outputs of the trigonometric functions f(t) = cos t and f(t) = sin t respectively. This means:
x = cos t
y = sin t
The following diagram shows these coordinates:
This can also be verified by using the Pythagoras theorem, as stated above. Using Pythagoras theorem, we will get:
sin θ = $\frac{Altitude}{Hypotenuse}=\frac{y}{1}=y$

cos θ = $\frac{Base}{Hypotenuse}=\frac{x}{1}=x$
Now that we have obtained the value of sin θ and cos θ, we can find out the value of tan θ
tan θ = $\frac{\sin \theta }{\cos\theta}$
Similarly, we find the values of other trigonometric functions.
Let us understand this through an example.
Suppose we have the diagram below that shows the point $(\frac{-\sqrt{2}}{2}, \frac{\sqrt{2}}{2})$ with coordinates on a unit circle.
We have already learnt the x-coordinate of any unit circle is equal to sin t while the y-coordinate of the unit circle is equal to cost t. This means that
x = $\frac{-\sqrt{2}}{2}$,
y = $\frac{\sqrt{2}}{2}$
Therefore,
tan t = $\frac{\sin t}{\cos t} = \frac{\sqrt{2}/2}{-\sqrt{2}/2} = -1$
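To make this concrete, here is a short Python sketch (using the point from the example above) that recovers cos t, sin t and tan t from the coordinates:

```python
import math

# Point on the unit circle taken from the example above
x = -math.sqrt(2) / 2   # x-coordinate = cos t
y = math.sqrt(2) / 2    # y-coordinate = sin t

cos_t, sin_t = x, y
tan_t = sin_t / cos_t   # tan t = sin t / cos t

print(cos_t, sin_t, tan_t)   # -0.7071..., 0.7071..., -1.0
```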
To understand the unit circle in radians, we first need to understand what we mean by radians. Radians are another way of measuring angles, and the measure of an angle can be converted between degrees and radians. Some key points regarding radians are:
• One radian is the measure of the central angle of a circle such that the length of the arc is equal to the radius of the circle.
• One complete circle which measures 360o is equal to 2π radians. This implies that 1 radian =$\frac{180^{\circ }}{\pi }$
• The formula used to convert between radians and degrees is defined by:
Angle in degrees = Angle in radians x $\frac{180^{\circ }}{\pi }$
• If s is the length of an arc of a circle, and r is the radius of the circle, then the measure of the radian is given by the central angle that contains that arc.
The difference between the degrees and the radians can be described as –
In trigonometry, most calculations use radians. Therefore, it is important to know how to convert between degrees and radians using the following conversion factors. Therefore, now that we have understood that the complete range of a circle can be represented by either 360 degrees or 2π radians, we can relate radians and degrees in the following manner:
1 radian = $\frac{360^{\circ }}{2\pi }$
1 radian = $\frac{180^{\circ }}{\pi }$
The following figure shows the relation between the radians and the degrees at various points on the circle.
In the figure, above, the circle is marked and labelled in both radians and degrees at all quadrant angles and angles that have reference angles of 30°, 45°, and 60°. At each angle, the coordinates are given. These coordinates can be used to find the six trigonometric values/ratios. The x-coordinate is the value of cosine at the given angle and the y-coordinate is the value of sine. It is important to remember that the unit circle demonstrates the periodicity of trigonometric functions by showing that they result in a repeated set of values at regular intervals. By periodicity here, we mean the quality of a function with a repeated set of values at regular intervals.
Let us understand this through an example.
Example
Convert an angle measuring $\frac{\pi}{9}$ radians to degrees.
Solution
In order to convert the given radians into degrees, we will use the relation between the radians and the degrees that we have defined above. We know that:
Angle in degrees = Angle in radians x $\frac{180^{\circ }}{\pi }$
So, Angle in degrees = $\frac{\pi}{9}\times \frac{180^{\circ }}{\pi }$ = 20o
Hence, $\frac{\pi}{9}$ radians = 20o
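The same conversion can be checked numerically; a small sketch (the function names are just illustrative):

```python
import math

def radians_to_degrees(angle_rad):
    # Angle in degrees = Angle in radians x 180 / pi
    return angle_rad * 180 / math.pi

def degrees_to_radians(angle_deg):
    return angle_deg * math.pi / 180

print(radians_to_degrees(math.pi / 9))   # ~20.0
print(degrees_to_radians(120))           # ~2.094, i.e. 2*pi/3
```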
The relation between the radians and the degrees can also be presented through the following chart.
Unit Circle Table
The following chart describes different values of the trigonometric functions both in radians as well as in degrees.
It is important to note that the sign of a trigonometric function is dependent on the signs of the coordinates of the points on the terminal side of the angle. So, we know the quadrant in which the terminal side of an angle lies, we can also get to know the signs of all the trigonometric functions. So, how do we determine the sign of the trigonometric function? Let us find out.
We know that the distance from a point to the origin is always positive. However, the signs of the x and y coordinates may be positive or negative, depending upon their position on the unit circle. The entire unit circle is divided into four quadrants as described in the below diagram:
Now, let us take the case of each quadrant one by one.
First Quadrant – In the first quadrant, we can see that both the x and the y coordinates are all positive. This means that all the six trigonometric functions will have positive values in the first quadrant.
Second Quadrant – In the second quadrant, only sine and cosecant (the reciprocal of sine) are positive while all the remaining trigonometric functions will have negative values.
Third Quadrant – In the third quadrant, only tangent and cotangent are positive while all the remaining trigonometric functions will have negative values.
Fourth Quadrant – In the fourth quadrant, only cosine and secant are positive while all the remaining trigonometric functions will have negative values.
The following is the representation of the trigonometric functions in the four quadrants.
How to remember these signs?
The sign on a trigonometric function depends on the quadrant that the angle falls in, and the mnemonic phrase “A Smart Trigo Class” is used to identify which functions are positive in which quadrant, where
A – All positive in First Quadrant
Smart – Sine positive in second quadrant
Trigo – Tan positive in the third quadrant
Class – Cos positive in the fourth quadrant.
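As a small illustration of the mnemonic, a sketch (the helper name is made up) that reports which of the three basic functions are positive for a given angle:

```python
import math

def positive_functions(angle_deg):
    """Return which of sin, cos, tan are positive for an angle in standard position."""
    t = math.radians(angle_deg % 360)
    signs = {"sin": math.sin(t) > 0, "cos": math.cos(t) > 0, "tan": math.tan(t) > 0}
    return [name for name, positive in signs.items() if positive]

print(positive_functions(30))    # ['sin', 'cos', 'tan'] -> first quadrant: All
print(positive_functions(120))   # ['sin']               -> second quadrant: Sine
print(positive_functions(210))   # ['tan']               -> third quadrant: Tangent
print(positive_functions(300))   # ['cos']               -> fourth quadrant: Cosine
```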
Solved Examples
What point corresponds to the angle −π on the unit circle?
Solution
We are required to find the point that corresponds to the angle −π on the unit circle.
Now, we know that the unit circle is the circle of radius one that has its centre at the origin (0,0) in the Cartesian coordinate system.
We also know that −π is equivalent to −180o which corresponds to the point ( −1, 0 ) on the unit circle.
Hence, the point that corresponds to the angle −π on the unit circle is ( -1, 0 )
If we consider a unit circle on a Cartesian coordinate plane, then what will be the coordinates for an angle of $\frac{\pi}{4}$?
Solution
We have been given the angle $\frac{\pi}{4}$. We need to find the coordinates of this angle on the Cartesian coordinate plane.
Now, we know that The coordinates of the point on the circle for each angle are ( cosθ, sinθ ).
We also know that,
$\cos\frac{\pi}{4}=\frac{\sqrt{2}}{2}$ and $\sin\frac{\pi}{4}=\frac{\sqrt{2}}{2}$
Therefore, the point will be $(\frac{\sqrt{2}}{2}, \frac{\sqrt{2}}{2})$.
Hence, the coordinates for an angle of $\frac{\pi}{4}$ will be $(\frac{\sqrt{2}}{2}, \frac{\sqrt{2}}{2})$.
Convert an angle measuring 120° to radians.

Solution
We know that the relationship between degrees and the radians is given by:
Angle in degrees = Angle in radians x $\frac{180^{\circ }}{\pi }$
Therefore,
120 = Angle in radians x $\frac{180^{\circ }}{\pi }$
Hence,
Angle in radian = $\frac{120\times \pi}{180}=\frac{2\pi}{3}$
Therefore, 120° will be equal to $\frac{2\pi}{3}$ radians.
Remember
• The equation of a unit circle is x2 + y2 =1
• The radius of the unit circle is always one unit.
• The centre of the unit circle is the point of origin, i.e. ( 0, 0 ).
• One radian is the measure of the central angle of a circle such that the length of the arc is equal to the radius of the circle.
• One complete circle which measures 360o is equal to 2π radians. This implies that 1 radian = $\frac{180^{\circ }}{\pi }$
• The formula used to convert between radians and degrees is defined by:
Angle in degrees = Angle in radians x $\frac{180^{\circ }}{\pi }$
• The radian is the standard unit used to measure angles in mathematics: it is the measure of a central angle of a circle that intercepts an arc equal in length to the radius of that circle.
• The x and y coordinates at a point on the unit circle given by an angle t are defined by the functions x=cos t and y=sin t.
• The unit circle demonstrates the periodicity of trigonometric functions by showing that they result in a repeated set of values at regular intervals.
• Periodicity means the quality of a function with a repeated set of values at regular intervals.
• The sign on a trigonometric function depends on the quadrant that the angle falls in, and the mnemonic phrase “A Smart Trigo Class” is used to identify which functions are positive in which quadrant. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.968739926815033, "perplexity": 248.7700777472097}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945183.40/warc/CC-MAIN-20230323194025-20230323224025-00467.warc.gz"} |
https://www.physicsforums.com/threads/rotational-inertia-in-two-cylinders.745279/ | # Rotational Inertia in two cylinders
1. Mar 25, 2014
### TheAce3317
Calculate the kinetic energies of two uniform solid cylinders, each rotating about its central axis. They have the same mass, 1.25 kg, and rotate with the same angular velocity, 235 rad/s, but the first has a radius of 0.18 m and the second a radius of 0.73 m.
I got 12112188.868 for ω, and 0.050625 for I
I have been using I=mr^2, and .5*Iω^2
I plugged everything in without changing any units, and got 1906125390.62499825 for the small one, which I know to be wrong. Then I thought I remembered something my teacher said about multiplying the radians by 2pi, and then I plugged all the numbers back in and got another large number.
2. Mar 25, 2014
### TheAce3317
nvm, ran out of time. getting help from my teacher tomorrow | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8694682717323303, "perplexity": 1007.4648205618695}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823482.25/warc/CC-MAIN-20171019231858-20171020011858-00816.warc.gz"} |
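For reference, a quick numerical check of the intended calculation (a sketch, not the thread's resolution): the moment of inertia of a uniform solid cylinder about its central axis is I = ½mr², not mr², and the kinetic energy is KE = ½Iω². Plugging in the numbers from the problem:

```python
def rotational_ke(mass, radius, omega):
    """Kinetic energy of a uniform solid cylinder spinning about its central axis."""
    inertia = 0.5 * mass * radius**2   # I = (1/2) m r^2 for a solid cylinder
    return 0.5 * inertia * omega**2    # KE = (1/2) I w^2

m, w = 1.25, 235.0                     # kg, rad/s
print(rotational_ke(m, 0.18, w))       # ~559 J for the small cylinder
print(rotational_ke(m, 0.73, w))       # ~9197 J for the large cylinder
```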
https://www.techwhiff.com/learn/what-have-been-the-three-major-changes-in-human/333202 | What have been the three major changes in human evolution? Anthropology question
Question:
What have been the three major changes in human evolution?
Anthropology question
http://math.stackexchange.com/questions/71435/optimizing-with-absolute-value-objective-function | # Optimizing with Absolute Value Objective Function
max : $w = |q^T y|$
subject to
$A y \leq b$
$y \geq 0$
Please describe how one could solve the non-linear programming prob. above by using linear programming methods.
I tried changing $y$ to $y' - y''$ in the constraints and $y' + y''$ for the objective function. However, my Excel solver says that "the cells do not converge". How should I solve this?
Thanks a bunch!
the maximum value of $w$ may be infinite. do you have any information about $A,b,q$? – Ilya Oct 10 '11 at 13:42
no, but there's a hint saying: "Try breaking it into 2 linear programming problems. Then, could you think of combining them into just 1 problem?" – John Oct 10 '11 at 13:46
To follow the advice given to you, consider two problems: $$\begin{cases} w^+ &= q^Ty^+\to\max, \\ Qy^+&\leq0, \\ Ay^+&\leq b, \\ y^+&\geq 0. \end{cases}$$ and
$$\begin{cases} w^- &= q^Ty^-\to\max, \\ Qy^-&\geq0, \\ Ay^-&\leq b, \\ y^-&\geq 0. \end{cases}$$
Then $w = \max\{w^+,w^-\}$. Here matrix $Q = (q\quad0\quad\dots\quad0)^T$. With such decomposition you just consider two possible cases for the absolute value.
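A numerical sketch of this two-LP decomposition (the data $A$, $b$, $q$ below are made up purely for illustration; scipy.optimize.linprog minimizes, hence the sign flips): restrict to $q^Ty\ge0$ and maximize $q^Ty$, then restrict to $q^Ty\le0$ and maximize $-q^Ty$, and take the larger value.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data only: maximize |q^T y| subject to A y <= b, y >= 0
A = np.array([[1.0, 2.0], [3.0, 1.0]])
b = np.array([4.0, 6.0])
q = np.array([1.0, -3.0])

# Case 1: q^T y >= 0 (extra row -q^T y <= 0), maximize q^T y (minimize -q^T y)
res_plus = linprog(c=-q, A_ub=np.vstack([A, -q]), b_ub=np.append(b, 0.0),
                   bounds=[(0, None)] * len(q))
# Case 2: q^T y <= 0 (extra row q^T y <= 0), maximize -q^T y (minimize q^T y)
res_minus = linprog(c=q, A_ub=np.vstack([A, q]), b_ub=np.append(b, 0.0),
                    bounds=[(0, None)] * len(q))

w = max(-res_plus.fun, -res_minus.fun)   # w = max{w^+, w^-}
print(w)
```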
Thanks for your answer! However, is there any way to solve this using just 1 linear programming problem? – John Oct 10 '11 at 17:05
@John: I'm afraid, I don't know such a way. – Ilya Oct 10 '11 at 17:29
Hint: let $y = y_1 + y_2$ where $q^Ty_1 \le 0$ and $q^Ty_2 \ge 0$.
https://www.physicsforums.com/threads/substances-whose-chemical-reaction-is-easily-perceptiple.706209/ | # Substances whose chemical reaction is easily perceptiple
1. Aug 18, 2013
### jaumzaum
I was trying to solve the following exercise:
Choose the option that has the pair of substances whose chemical reaction is easily perceptible
a)Br2(aq) + NaCl(aq)
b) Cl2(aq) + NaI(aq)
c) H2(g) + MgSO4(aq)
d) Ag(c) + ZnSO4(aq)
e) HCl(aq) + Cu(c)
I would say if the salt were an alkali salt, (a) would discolor the solution (Br2 is brown/red), as Br2 + H2O -> HBrO + HBr, and the equilibrium tends to form the products when the solution is basic. But the salt is NaCl, so nothing would happen.
For the second one I would say no change would be noticed.
For the third I would say some of the SO4-- could reduce to SO2 or some of the Mg++ to Mg, oxidizing the H2 to H2O, but on a very small scale, so nothing would be noticed either.
For the fourth, as ZnSO4 forms an acidic solution, I would say some of the silver would oxidize to Ag+, even silver being a noble metal. This will occur at a low rate but I would say it could be noticed.
For the last one, as Cu isn't a noble metal, I would say it will quickly oxidize to Cu2+, and the solution will turn blue, easily noticed.
I chose e as the answer, but my book says it is b. Why is that?
2. Aug 18, 2013
### ZacharyM
The reaction in b) will make sodium chloride and iodine. The iodine would give the solution a very dark brown colour that would be hard to miss. The answer isn't e) because that reaction doesn't happen. Copper is one of the few metals which are actually less reactive than hydrogen, so it can't replace it in the HCl.
3. Aug 19, 2013
### jaumzaum
Thanks, I see where my mistake was now; I thought Ag and Cu had negative reduction potentials, not positive (even Ag being a noble metal).
Last question
In another exercise i have the following reaction
H2SO4(l) + NaCl(c)
The answer is that it forms a gas
But I searched the reducing potential of the reaction
SO4 2- + 4H+ + 2e- -> H2SO3 + H2O
This reaction is in atkins' book , and i presume H2SO4 should be H2O and SO2
Anyway, the standard potential is +0.17 V, assuming H+ = 1 mol/l and SO2 = 1 mol/l, which is not absurd at all
And this potential is not even close to the potential required to oxidize Cl- (1.36 V), or the one to oxidize OH- to H2O and O2 (0.4 V), so in an aqueous solution I would say nothing would occur. But it says H2SO4 is liquid (= pure); what happens in this case?
4. Aug 19, 2013 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8367546796798706, "perplexity": 3876.3458936286706}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464051342447.93/warc/CC-MAIN-20160524005542-00175-ip-10-185-217-139.ec2.internal.warc.gz"} |
https://mathoverflow.net/questions/227352/constructing-an-independent-uniform-random-variable-from-two-independent-ones | # Constructing an independent uniform random variable from two independent ones
Does there exist a continuous (differentiable) function $h:[0,1]\times [0,1] \to [0,1]$ such that if $\alpha,\beta\in [0,1]$ are independent and uniformly distributed on $[0,1]$, the random variable $h(\alpha,\beta)$ is uniformly distributed on $[0,1]$ independent of $\alpha,\beta$?
Clarification: By independent I mean pairwise independent, i.e.
$\mathbb{P}[h(\alpha,\beta)\leq x\mid \alpha]=x$ for all $x,\alpha\in[0,1]$
and
$\mathbb{P}[h(\alpha,\beta)\leq x\mid \beta]=x$ for all $x,\beta\in[0,1]$..
Thanks a lot!
• It depends if you mean jointly independent or pairwise independent of $\alpha,\beta$. – Anthony Quas Dec 31 '15 at 3:10
• I am sorry for the imprecise question I mean pairwise independent, i.e. $\mathbb{P}[h(\alpha,\beta)\leq x\mid \alpha]=\mathbb{P}[h(\alpha,\beta)\leq x\mid \beta]=x$ for all $x\in[0,1]$. – Peter Dec 31 '15 at 3:27
• You maybe know this, but if you drop the "continuous" requirement then the answer is yes: let $h(x,y)$ be the number whose $n$th bit is the xor of the $n$th bits of $x,y$. – Nate Eldredge Dec 31 '15 at 3:42
• Thanks again! Yes, I know this construction. A bit off-topic: do you know if it is used in any applied context (computerscience, gametheory, ..)? – Peter Dec 31 '15 at 3:46
• Besides the example given by Nate, you can also take $h(\alpha, \beta) = \alpha + \beta$ (mod 1). This one is arguably a bit more continuous than xor. It also shows that for torus the answer is true. – John Jiang Dec 31 '15 at 4:03
I think this works for a continuous $h$.
Let $f : \mathbb{R} \to [0,1]$ be the "triangle wave" function given on $[0,1]$ by $$f(u) = \begin{cases}1-2u, & 0 \le u \le \frac{1}{2} \\ 2u-1, & \frac{1}{2} \le u \le 1 \end{cases}$$ and extended periodically. Note that for any $t \in [0,1]$ we have $$m(\{x \in [0,1] : f(x) \le t\}) = m\left(\left[\frac{1-t}{2}, \frac{1+t}{2}\right]\right) = t.$$ By the translation invariance of $m$ and the 1-periodicity of $f$ it also follows that for any $y \in \mathbb{R}$ we have $m(\{x \in [0,1] : f(x-y) \le t\}) = t$. So take $h(x,y) = f(x-y)$. Then if $\alpha, \beta$ are iid $U(0,1)$ the above computation says precisely that $P(h(\alpha, \beta) \le t \mid \beta) = t$, i.e. $h(\alpha, \beta)$ is $U(0,1)$ and independent of $\beta$. Since $f$ is an even function we have $h(x,y) = h(y,x)$ and thus by symmetry $h$ is also independent of $\alpha$.
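A quick Monte Carlo sanity check of this construction (not part of the proof); it uses the equivalent closed form $f(u) = |2\{u\}-1|$ for the periodic triangle wave, where $\{u\}$ is the fractional part:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = rng.random(200_000)
beta = rng.random(200_000)

# h(alpha, beta) = f(alpha - beta) with f the 1-periodic triangle wave above
h = np.abs(2 * ((alpha - beta) % 1.0) - 1)

t = 0.3
print(np.mean(h <= t))                 # ~0.3 : h is U(0,1)
print(np.mean(h[alpha <= 0.5] <= t))   # ~0.3 : unaffected by conditioning on alpha
print(np.mean(h[beta > 0.8] <= t))     # ~0.3 : unaffected by conditioning on beta
```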
Inspired by Nate's answer, here is why it cannot be differentiable. First for each $y \in [0,1]$, there must be some $x \in [0,1]$ such that $h(x,y) = 0$, otherwise by compactness of the unit interval and continuity of $h$, we would have $P(h(\alpha, y) < \epsilon) = 0$ for some small $\epsilon$. Next I claim that such $x$ cannot be in the interior of $[0,1]$. If it is, then by linear approximation near $x$ and the limit definition of derivative, $P(h(\alpha, y) < \epsilon) > \epsilon$ for sufficiently small $\epsilon$, because the derivative $\partial_1 h(x,y)$ must be zero. This is clearly enough to get a contradiction: it means that $h(0,y) = 0$ or $h(1,y)=0$ for all $y \in [0,1]$. Reversing the role of $x$ and $y$, we see that $P(h(0, \beta) < \epsilon) \geq 1/2$ or $P(h(1, \beta) < \epsilon) \geq 1/2$. Since $h$ is continuous, this argument also works in the almost sure category. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9858489632606506, "perplexity": 116.31297350244328}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986673538.21/warc/CC-MAIN-20191017095726-20191017123226-00021.warc.gz"} |
http://www.khronos.org/registry/vulkan/specs/1.0/man/html/VkEventCreateFlags.html | ## C Specification
typedef VkFlags VkEventCreateFlags;
## Description
VkEventCreateFlags is a mask of zero or more VkEventCreateFlagBits. It is used as a member and/or parameter of the structures and commands in the See Also section below. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9777238965034485, "perplexity": 2471.581031585579}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948589512.63/warc/CC-MAIN-20171216220904-20171217002904-00402.warc.gz"} |
https://fr.maplesoft.com/support/help/maplesim/view.aspx?path=DocumentTools/Components/Dial&L=F | Dial Component - Maple Help
DocumentTools[Components]
Dial
generate XML for a Dial Component
Calling Sequence
Dial(rng, opts)
Parameters
rng - (optional) range(realcons); range of position values, defaults to 0..100
opts - (optional) ; one or more keyword options as described below
Options
• action : string; A string which parses to one or more valid statements in 1-D Maple notation. These statements form the Value Changed Action Component Code that executes when the Dial position is manually adjusted.
• anglerange : nonnegative; The range of angles for the position of the dial's indicator. The angle range can be between 0 and 360 degrees.
• continuous : truefalse; Indicates whether dragging of the sliding component will result in continuous updates using the action code.
• enabled : truefalse; Indicates whether the component is enabled. The default is true. If enabled is false then the inserted sliding component is grayed out and interaction with it cannot be initiated.
• identity : {name,string}; The reference name of the component.
• image : {string,Matrix,Array,identical(none)}; The Image displayed on the dial, specified as either the name of an external image file or a Matrix or Array as recognized by commands in the ImageTools package.
• height : posint; The height in pixels of the component.
• majorticks : numeric; The interval between the major tickmarks on the sliding component. The default is .2 times the upper value of rng minus the lower value of rng.
• minorticks : numeric; The interval between the minor tickmarks on the sliding component. The default is 1/2 majorticks.
• position : numeric; The initial position of the sliding component. If not specified the value of the left endpoint of the rng argument is used.
• showlabels : truefalse; Indicates whether values are shown beside tickmarks. The default is true.
• showticks : truefalse; Indicates whether tickmarks are shown. The default is true.
• startangle : nonnegative; The angle, measured clockwise from straight down, representing the lower value. The angle is between 0 and 360 degrees.
• tooltip : string; The text that appears when the mouse pointer hovers over the component.
• visible : truefalse; Indicates whether the component is visible. The default is true.
• width : posint; The width in pixels of the component.
Description
• The Dial command in the Component Constructors package returns an XML function call which represents a Dial Component.
• The generated XML can be used with the results of commands in the Layout Constructors package to create an entire Worksheet or Document in XML form. Such a representation of a Worksheet or Document can be inserted into the current document using the InsertContent command.
Examples
> $\mathrm{with}\left(\mathrm{DocumentTools}\right):$
> $\mathrm{with}\left(\mathrm{DocumentTools}:-\mathrm{Components}\right):$
> $\mathrm{with}\left(\mathrm{DocumentTools}:-\mathrm{Layout}\right):$
Executing the Dial command produces a function call.
> $\mathrm{Dial}\left(\mathrm{identity}="Dial0",\mathrm{image}=\mathrm{none}\right)$
${\mathrm{_XML_EC-Dial}}{}\left({"id"}{=}{"Dial0"}{,}{"lower-bound"}{=}{"0"}{,}{"upper-bound"}{=}{"100"}{,}{"control-position"}{=}{"0"}{,}{"major-tick-spacing"}{=}{"20"}{,}{"minor-ticks"}{=}{"10"}{,}{"show-labels"}{=}{"true"}{,}{"show-ticks"}{=}{"true"}{,}{"continuous-update"}{=}{"true"}{,}{"inputenabled"}{=}{"true"}{,}{"visible"}{=}{"true"}\right)$ (1)
By using commands from the DocumentTools:-Layout package a nested function call can be produced which represents a worksheet.
> $S≔\mathrm{Dial}\left(\mathrm{identity}="Dial0"\right):$
> $\mathrm{xml}≔\mathrm{Worksheet}\left(\mathrm{Group}\left(\mathrm{Input}\left(\mathrm{Textfield}\left(S\right)\right)\right)\right):$
That XML representation of a worksheet can be inserted directly.
> $\mathrm{InsertContent}\left(\mathrm{xml}\right):$
> $\mathrm{codestring}≔"\ns := true;\nt := false;"$
${\mathrm{codestring}}{≔}{"s := true; t := false;"}$ (2)
> $S≔\mathrm{Dial}\left(0...9.0,\mathrm{identity}="Dial0",\mathrm{tooltip}="My example Dial",\mathrm{showlabels},\mathrm{position}=4.1,\mathrm{width}=90,\mathrm{height}=90,\mathrm{action}=\mathrm{codestring}\right):$
> $\mathrm{xml}≔\mathrm{Worksheet}\left(\mathrm{Group}\left(\mathrm{Input}\left(\mathrm{Textfield}\left(S\right)\right)\right)\right):$
The previous example's call to the InsertContent command inserted a component with identity "Dial0", which still exists in this worksheet. Inserting additional content whose input contains another component with that same identity "Dial0" incurs a substitution of the input identity in order to avoid a conflict with the identity of the existing component.
The return value of the following call to InsertContent is a table which can be used to reference the substituted identity of the inserted component.
> $\mathrm{lookup}≔\mathrm{InsertContent}\left(\mathrm{xml},\mathrm{output}=\mathrm{table}\right)$
${\mathrm{lookup}}{≔}{table}{}\left(\left[{"Dial0"}{=}{"Dial1"}\right]\right)$ (3)
> $\mathrm{lookup}\left["Dial0"\right]$
${"Dial1"}$ (4)
> $\mathrm{GetProperty}\left(\mathrm{lookup}\left["Dial0"\right],\mathrm{value}\right)$
${4.1}$ (5)
>
Compatibility
• The DocumentTools:-Components:-Dial command was introduced in Maple 2015. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 19, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8027496337890625, "perplexity": 2406.4133257736503}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103556871.29/warc/CC-MAIN-20220628142305-20220628172305-00187.warc.gz"} |
http://2013.igem.org/Team:Grenoble-EMSE-LSU/Project/Modelling/Building | # Team:Grenoble-EMSE-LSU/Project/Modelling/Building
# Building the Model
The construction of our model was not a linear process: quite a few models were built, tried then abandoned. The aim was to find an explanation as simple as possible of the results of our experiments. This meant designing equations describing the behaviour of our bacterial cells with as few parameters as possible. Thus our equations consider the maturation of fluorescent proteins and the ability of the bacteria to repair themselves.
## Initial Model
### The equation
Our system is made of bacterial cells and ‘KillerRed’ proteins. Bacteria divide and produce KillerRed proteins, and KillerRed proteins respond to light: they fluoresce, degrade (photobleaching) and produce Radical Oxygen Species or ROS (phototoxicity). These reactions are exhibited by all fluorescent proteins, but the 3D structure of KillerRed is responsible for a ROS production 1000-fold greater than that of other fluorescent proteins.
$\bullet$ $C$ the amount of living bacteria per milliliter of cell suspension.
$\bullet$ $K$ the amount of KillerRed inside the living bacteria per milliliter of cell suspension.
$\bullet$ $I$ the amount of incident (white) light.
The evolution of C and K is linked to I by the set of equations :
$\left\{ \begin{array}{l l} \frac{dC}{dt}=rC-kIK \\ \frac{dK}{dt}=aC-bIK-kI\frac{K^2}{C} \\ \end{array} \right.$
$\diamond$ $rC$ describes bacterial growth.
$\diamond$ $kIK=kI\frac{K}{C}C$ the amount of bacteria killed by KillerRed and light.
$\diamond$ $aC$ the production of KillerRed.
$\diamond$ $bIK$ the amount of KillerRed photobleached.
$\diamond$ $kIK\frac{K}{C}$ the amount of KillerRed in the bacteria killed in the final time step.
$r$, $a$, $k$ and $b$ are constants that are described a bit lower.
Unfortunately, $C$ and $K$ are not measurable variables. The only quantities we can quickly and easily measure are the optical density (OD) associated with the amount of dead AND living bacteria, and the global fluorescence associated with the amount of KillerRed in the dead AND living bacteria. In order to compare our model with experimental results, we need two additional variables :
$\bullet$ $D$ the amount of dead bacteria per milliliter of cell suspension. We consider that dead bacteria have the same Optical Density as living ones, because ROS damage does not lyse the cell.
$\bullet$ $K_D$ the amount of KillerRed inside the dead bacteria per milliliter of cell suspension.
$\left\{ \begin{array}{l l} \frac{dD}{dt}=kIK \\ \frac{dK_D}{dt}=kI\frac{K^2}{C}-bIK_D\\ \end{array} \right.$
$D$ and $K_D$ are necessary for the model, because they appear in the measurement of $OD_{600}$ and fluorescence, but dead bacteria don't grow anymore, and a KillerRed protein that is in a dead bacteria has no more effect on the $OD_{600}$ evolution.
The simplest possible units were used, corresponding to the measurable quantities :
$C$ and $D$ are in '$OD_{600}nm$' units.
$K$ and $K_D$ are in Relative Fluorescent Unit (RFU). Bacterial auto-fluorescence is considered negligible compared to KillerRed fluorescence.
4 parameters appear in those equations:
$r$: the rate of growth of bacteria in $min^{-1}$
$a$: the production of KillerRed per bacteria in $RFU.OD^{-1}.min^{-1}$
$b$: the efficiency of photobleaching in $RFU.UL^{-1}.min^{-1}$
$k$: the toxicity of KillerRed in $OD.RFU^{-1}.UL^{-1}.min^{-1}$
The evolution of the absorbance ($C+D$) and the global fluorescence ($K+K_D$) can also be written:
$\left\{ \begin{array}{l l} \frac{d(C+D)}{dt}=rC \\ \frac{d(K+K_D)}{dt}=aC-bI(K+K_D)\\ \end{array} \right.$
The derivative of absorbance is proportional to the amount of living bacteria, therefore, a linear growth of the absorbance is characteristic of a constant population of bacteria, and this will stay true even with the more complete models. The evolution of fluorescence is simply the combination of the production and the photobleaching terms.
### Analytical Solution
This simple model can be partially solved, for $C(t)$ or $I(t)$ for example constant :
If $C$ is constant, i.e. $\forall t, C(t)=C_0$, we have :
$\left\{ \begin{array}{l l} rC_0=kIK \\ \frac{dK}{dt}=aC_0-bIK-kI\frac{K^2}{C_0}\\ \end{array} \right.$
and therefore : $\frac{dK}{dt}=\left(a-\frac{br}{k}\right)C_0-rK$
which gives : $K(t)=\left(\frac{a}{r}-\frac{b}{k}\right)C_0+Be^{-rt}$
Then $I(t)=\frac{rC_0}{kK(t)}$ should give a constant concentration of living cells.
Asymptotically, the light intensity that stabilizes the concentration of living cells is $I_0=\frac{r^2}{ak-rb}$.
But if we assume that $I$ is constant, i.e. $\forall t, I(t)=I_0$, we need another variable to be able to easily solve our equation :
We define $Y=\frac{K}{C}$ as the amount of KillerRed per bacteria.
$\frac{dY}{dt}=\frac{d}{dt}\left(\frac{K}{C}\right)=a-(bI_0+r)Y$
which gives $Y=\frac{a}{bI_0+r}+Be^{-(bI_0+r)t}$
$Y$ tends toward a steady-state value, $\frac{a}{bI_0+r}$. Let's see how C evolves :
$\frac{dC}{dt}=C(r-kI_0Y)$
$\frac{dC}{dt}=C\left(r-\frac{kI_0a}{bI_0+r}-kI_0Be^{-(bI_0+r)t}\right)$
Thus, in the specific case that $I_0=\frac{r^2}{ak-rb}$, we have $\lim_{t\to\infty}\frac{dC}{dt}(t)=0$.
The resolution of the latter equation shows that it is possible to stabilize the system by means of a suitable (constant) light intensity.
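As a numerical sanity check of this result, here is a short Euler-integration sketch (all parameter values below are arbitrary illustrations, not fitted values) that applies the constant intensity $I_0=\frac{r^2}{ak-rb}$ and watches $C$ level off while $K/C$ approaches $\frac{a}{bI_0+r}$:

```python
# Arbitrary illustrative parameters (not fitted values)
r, a, b, k = 0.02, 5.0, 1e-4, 1e-5      # units as defined above
I0 = r**2 / (a * k - r * b)             # constant intensity expected to stabilize C

C, K = 0.05, 0.0                        # initial living-cell OD and fluorescence
dt = 1.0                                # one-minute Euler step
for _ in range(3000):
    dC = r * C - k * I0 * K
    dK = a * C - b * I0 * K - k * I0 * K**2 / C
    C, K = C + dt * dC, K + dt * dK

print(I0)                               # ~8.3 light units for these parameters
print(C)                                # C has levelled off
print(K / C, a / (b * I0 + r))          # K/C is close to its steady-state value
```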
### Comparison with experiments
This first model is very interesting for understanding which parameters govern the evolution of the living cell population and to show that conditions exist to stabilize it. But unfortunately this set of equations is insufficient to account for the results of the experiments.
Figure 1.
Evolution of absorbance in OD.
Figure 2.
Evolution of fluorescence in UF.
Whereas we observe a lag between the onset of light and the decrease of fluorescence, the first model predicts an immediate decrease. This discrepancy requires the introduction of other phenomena to explain the lag between the stimulus (the light) and the reaction (the decrease of fluorescence and the OD stabilization). Of course this explanation should be supported by biological facts.
## Maturation Time
### The maturation of fluorescent proteins
After translation and spontaneous polypeptide folding, a fluorescent protein still has to maturate before becoming fluorescent. Fluorescent proteins mature after an oxidation reaction where three amino acids rearrange to form the fluorophore. For GFP, this time is typically 30 minutes [1]. The maturation time of KillerRed is significant for our experiments.
### Second model
We consider maturation to be a simple chemical reaction, and the conversion of immature Killer Red to the mature form to be adequately described by first-order reaction kinetics. A new variable is needed:
$\bullet$ $K_m$ the amount of mature KillerRed inside the living bacteria per milliliter of cell suspension.
$\bullet$ $K_i$ the amount of immature KillerRed inside the living bacteria per milliliter of cell suspension. As an immature fluorescent protein does not have a chromophore, it does not degrade with light, and so is not affected by photobleaching.
$\left\{ \begin{array}{l l} \frac{dC}{dt}=rC-kIK_m \\ \frac{dK_i}{dt}=aC-kI\frac{K_i^2}{C}-mK_i\\ \frac{dK_m}{dt}=-kI\frac{K_m^2}{C}-bIK_m+mK_i\\ \end{array} \right.$
$\diamond$ $mK_i$ is the term expressing the maturation of KillerRed. We have a new parameter:
$m$: the maturation rate of KillerRed in $min^{-1}$
Similarly, immature KillerRed is also found in dead cells and its evolution is described by the following set of equations :
$\left\{ \begin{array}{l l} \frac{dD}{dt}=kIK_m \\ \frac{dK_{Di}}{dt}=kI\frac{K_i^2}{C}-mK_{Di}\\ \frac{dK_{Dm}}{dt}=kI\frac{K_m^2}{C}-bIK_{Dm}+mK_{Di}\\ \end{array} \right.$
Where :
$\diamond$ $K_{Di}$ the amount of immature KillerRed inside dead bacteria per milliliter of cell suspension.
$\diamond$ $K_{Dm}$ the amount of mature KillerRed inside dead bacteria per milliliter of cell suspension.
### Comparison with experiments
The curves drawn from the model give the right trend, observed in the experiments: the lag of the reaction, the peak of fluorescence short after light is switched on and then the swift decrease of fluorescence in the long term are qualitatively described.
Figure 3.
Evolution of absorbance in OD.
Figure 4.
Evolution of fluorescence in UF.
Nonetheless, it is impossible to get a good fit between the prediction of the model and the experiment. The maturation step alone does not explain why the production of KillerRed is so high two hours after the beginning of the illumination and the decrease of fluorescence is so rapid four hours after the illumination.
## Damage Accumulation
### Third Model
Until now, our equations describe the phototoxic effect of KillerRed as instantaneous: in the presence of light Killer Red produces ROS, which immediately kills a certain proportion of the bacteria but has no lasting effect. It is well-known, however, that cells can repair damages due to ROS, up to a certain level. We can thus consider that bacteria are unable to instantly repair all the damages caused by ROS, and with damages accumulating, they are more and more fragile and close to death.In other words, the effect of a certain amount of KillerRed at a certain time $u$, $K(u)$, illuminated by a light intensity $I(u)$, will affect cell growth at time $t$ later than $u$, weighted by a factor $e^{-p(t-u)}$ that vanishes as $t$ increases. The effect of this ROS production at time $u$ will thus exponentially decrease with time. The term $– kI.K$ (photokilling) was thus replaced by the integral:
$-\int_{u=0}^t k'I(u)K(u)e^{-p(t-u)}du$
And the equation of bacterial growth:
$\frac{dC}{dt}(t)=rC(t)-\int_{u=0}^t k'I(u)K(u)e^{-p(t-u)}du$
$p$: the ability of the cell to repair in one minute, in $min^{-1}$.
There is no analytical solution for this equation. But it is simple to treat it in its discrete-time form.
Written in its discrete-time form, with $\tau$ the value of a step of time, the evolution of $C$ was described by: $C(t+\tau)-C(t)=rC(t)-kI(t)K(t)$
We now write
$\left\{ \begin{array}{l l} C(t+\tau)-C(t)=rC(t)-\mbox{tox}(t) \\ \mbox{tox}(t+\tau)=l.\mbox{tox}(t)+k'I(t)K(t)\\ \end{array} \right.$
with $l\in[0,1]$
The variable 'tox' is representative of the amount of damage inflicted on the bacteria, and so of their probability of dying. During a time step (for us, a minute), bacteria heal part of their injuries ($l<1$) and suffer new damages ($k'I(t)K(t)$). At first, few bacteria die : $\mbox{tox}(t)\cong k'I(t)K(t)$, then 'tox' increases until it reaches $\mbox{tox}(t)\cong\frac{k'}{1-l}I(t)K(t)$. $k'$ is the same in the continuous-time and the discrete-time forms, so it will from now on be named simply $k$. $p$ and $l$ are directly related : $p=-\frac{\ln(l)}{\tau}$.
We have our last variable:
$l$: the rate of healing of the bacteria per step of time, unitless
Considering this accumulation allows us to shift the effect of illumination, and so to have a more accurate model.
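A minimal discrete-time sketch of this third model (arbitrary illustrative parameters, tracking living cells only), combining the maturation step of the second model with the damage-accumulation update written above:

```python
# Arbitrary illustrative parameters (not the values fitted to the figures)
r, a, b, k, m, l = 0.02, 5.0, 1e-4, 1e-5, 0.02, 0.95
I = 1.0                                      # constant light intensity
C, Ki, Km, tox = 0.05, 0.0, 0.0, 0.0         # living cells, immature/mature KillerRed, damage

for minute in range(1, 1501):
    dead = min(tox, C)                       # cells killed this minute by accumulated ROS damage
    C, Ki, Km, tox = (
        C + r * C - dead,                          # growth minus delayed photokilling
        Ki + a * C - m * Ki - dead * Ki / C,       # production, maturation, KR carried away by dead cells
        Km + m * Ki - b * I * Km - dead * Km / C,  # maturation in, photobleaching, loss in dead cells
        l * tox + k * I * Km,                      # part of the damage is repaired, new damage adds up
    )
    if minute % 300 == 0:
        print(minute, round(C, 4), round(Ki + Km, 1))
```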
### Comparison with experiments
With this model, we can now properly describe our data:
Figure 5.
Evolution of absorbance in OD.
Figure 6.
Evolution of fluorescence in UF.
As this fit seems to describe well the kinetics observed, we will use this model to predict all our systems. But we still have parameters to adjust to find the best fit possible.
## Problem K. Kripke Model
Author: ACM ICPC 2009-2010, NEERC, Northern Subregional Contest. Time limit: 3 sec. Memory limit: 256 Mb. Input file: kripke.in. Output file: kripke.out.
### Statement
Testing and quality assurance are very time-consuming stages of software development process. Different techniques are used to reduce cost and time consumed by these stages. One of such techniques is software verification. Model checking is an approach to the software verification based on Kripke models.
A Kripke model is a 5-tuple (AP, S, S0, R, L), where AP is a finite set of atomic propositions, S is a finite set of model's states, S0 ⊂ S is a set of initial states, R ⊂ S × S is a transition relation, and L ⊂ S × AP is a truth relation. In this problem we will not take initial states into account and relation R will be a reflexive relation, so R(s, s) will be true for all states s ∈ S.
A path π beginning in state s in the Kripke model is an infinite sequence of states s0 s1 … such that s0 = s and, for each i ≥ 0, (si, si+1) ∈ R.
Temporal logic and its subset Computational tree logic (CTL) are used to describe propositions qualified in terms of time. Kripke models are often used to check properties, described in CTL.
There are two types of formulae in CTL: state formulae and path formulae. The values of state and path formulae are evaluated for states and paths correspondingly.
If p ∈ AP then p is a state formula that holds in state s iff (s, p) ∈ L.
If f is a path formula, then A f and E f are state formulae, where A and E are path quantifiers:
• A f holds in a state s, iff f holds for each path beginning in the state s;
• E f holds in state s, iff there exists a path π, beginning in the state s, such that f holds for π.
If f and g are state formulae, then G f and fU g are path formulae, where G and U are temporal operators:
• G f (Globally) holds for a path π = s0 s1 … iff for each i ≥ 0 the formula f holds in the state si;
• f U g (Until) holds for a path π = s0 s1 … iff there exists i ≥ 0 such that f holds in each of the states s0, s1, …, si−1, and g holds in state si;
To verify a property described by a state formula f means to find all states for which f holds. Verification of an arbitrary property is a pretty complex problem. Your problem is much easier — you are to write a program that verifies a property described by the temporal logic formula E (x U(A G y)), where x and y are some atomic propositions.
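One standard way to verify E(x U (AG y)), sketched below in Python, is to first compute the set of states satisfying AG y (the states from which no reachable state violates y) and then propagate backwards through x-labelled states, which is the usual least-fixed-point computation for EU. The function is an illustrative sketch (input parsing is not shown and the reflexive closure of R is an assumption), not a tuned contest solution.

```python
from collections import deque

def verify(n, edges, labels, x, y):
    """Return the sorted list of states (1..n) satisfying E( x U (AG y) ).
    edges  : list of (u, v) transitions; R is assumed reflexive, so self-loops are added.
    labels : dict mapping each state to the set of atomic propositions true in it."""
    pred = {s: {s} for s in range(1, n + 1)}          # predecessors, with reflexive closure
    for u, v in edges:
        pred[v].add(u)

    # AG y holds exactly in the states that cannot reach a state violating y.
    bad = {s for s in range(1, n + 1) if y not in labels[s]}
    can_reach_bad, queue = set(bad), deque(bad)
    while queue:
        t = queue.popleft()
        for s in pred[t]:
            if s not in can_reach_bad:
                can_reach_bad.add(s)
                queue.append(s)
    ag_y = set(range(1, n + 1)) - can_reach_bad

    # E(x U g): least fixed point of  g  union  (x and EX result).
    result, queue = set(ag_y), deque(ag_y)
    while queue:
        t = queue.popleft()
        for s in pred[t]:
            if s not in result and x in labels[s]:
                result.add(s)
                queue.append(s)
    return sorted(result)

# Sample test from below: the expected answer is the five states 1 2 3 4 5.
labels = {1: {'a'}, 2: {'a'}, 3: {'a', 'b'}, 4: {'b'}, 5: {'b'}, 6: {'a'}, 7: {'a'}}
edges = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 3), (2, 6), (6, 7), (7, 6)]
print(verify(7, edges, labels, 'a', 'b'))   # [1, 2, 3, 4, 5]
```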
### Input file format
The first line of the input file contains three positive integer numbers n, m and k — number of states, transitions and atomic propositions (1 ≤ n ≤ 10 000; 0 ≤ m ≤ 100 000; 1 ≤ k ≤ 26).
The following n lines describe one state each. The state i (1 ≤ i ≤ n) is described by ci, the number of atomic propositions which are true for this state, followed by a space-separated list of these atomic propositions (0 ≤ ci ≤ k). Atomic propositions are denoted by the first k small English letters.
The last line of the input file contains the formula of the property to be verified. This formula always has the form E(xU(AGy)), where "x" and "y" are some atomic propositions.
### Output file format
The first line of the output file must contain the number of states for which the verified property holds. The following lines must contain the numbers of these states listed in increasing order.
### Sample tests
Test 1
Input file (kripke.in):
7 8 2
1 a
1 a
2 a b
1 b
1 b
1 a
1 a
1 2
2 3
3 4
4 5
5 3
2 6
6 7
7 6
E(aU(AGb))
Output file (kripke.out):
5
1
2
3
4
5
# Unit step and Unit impulse
1. Feb 6, 2013
### socrates_1
1. The problem statement, all variables and given/known data
Hi, can someone explain to me through real-world examples what the unit step and unit impulse mean in systems control? How are they related to the system's input? Thank you.
2. Relevant equations
δ(t) = unit impulse
u(t) = unit step
3. The attempt at a solution
2. Feb 6, 2013
### rude man
Well, for one thing they ARE system inputs.
If you have a network with a battery as input, and you throw a switch which changes the input to your network from 0 volts to E volts, then your system input is E*u(t).
δ(t) is more complex. Picture a voltage E again, but this time throw the switch on and off for a very short time T. By 'very short' I mean much shorter than the shortest time constant in your network. Then the input to your network is E*T*δ(t) volts.
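To make the two inputs concrete, here is a small Python sketch (not part of the original thread) that integrates a first-order network with time constant tau driven by a step E*u(t) and by a short rectangular pulse of area E*T, which approximates the impulse input E*T*δ(t) when T is much smaller than tau.

```python
import math

def first_order_response(tau=1.0, E=1.0, T=0.01, dt=1e-4, t_end=5.0):
    """Forward-Euler integration of dv/dt = (v_in - v)/tau for two inputs:
    a step v_in = E*u(t), and a short pulse v_in = E for 0 <= t < T (area E*T)."""
    v_step = v_pulse = 0.0
    n = int(t_end / dt)
    for i in range(n):
        t = i * dt
        v_step += dt * (E - v_step) / tau
        v_pulse += dt * ((E if t < T else 0.0) - v_pulse) / tau
    return v_step, v_pulse

v_step, v_pulse = first_order_response()
print(f"step response at t = 5 tau : {v_step:.4f}  (analytic 1 - e^-5 = {1 - math.exp(-5):.4f})")
print(f"pulse response at t = 5 tau: {v_pulse:.2e}  (ideal impulse (E*T/tau) e^-5 = {0.01 * math.exp(-5):.2e})")
```

The step drives the network to a new steady level, while the impulse-like pulse only injects a finite "kick" whose effect decays away, which is why the two are treated as different canonical test inputs.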
# Finding the Determinant
1. Apr 3, 2012
### deana
1. The problem statement, all variables and given/known data
Let A be the matrix with eigenvalues x1 = 2, x2 = 1, x3 = 1/2 , x4 = 10
and corresponding eigenvectors v1: <1,-1,1,0>, v2: <1,-1,0,0>, v3: <1,0,0,1>, v4: <0,0,1,1>
Calculate |A|
2. Relevant equations
See above
3. The attempt at a solution
I'm not really sure how to start this problem but I know that:
For nxn matrices X, Y , Z
|XYZ| = |X| |Y| |Z| and |X^(-1)| = 1/|X|
Maybe I could use this to solve the problem?
Any input or suggestions about how to start this problem would be helpful!
Thanks!:)
Last edited: Apr 3, 2012
2. Apr 3, 2012
### Dick
What does the matrix of your linear transformation look like if you express it in the basis {v1,v2,v3,v4}?
3. Apr 3, 2012
### Ray Vickson
Do you know the relationship between the eigenvalues of a matrix and the determinant of that matrix? It is a standard result. If it is not in your textbook or course notes, it can certainly be found through Google.
RGV
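For reference, the standard result hinted at above is that the determinant equals the product of the eigenvalues, so here |A| = 2 · 1 · (1/2) · 10 = 10. A quick numerical check (Python/NumPy, reconstructing A = V D V^(-1) from the data given in the problem) is sketched below.

```python
import numpy as np

eigvals = np.array([2.0, 1.0, 0.5, 10.0])
# Columns of V are the eigenvectors v1..v4 given in the problem.
V = np.column_stack([[1, -1, 1, 0],
                     [1, -1, 0, 0],
                     [1,  0, 0, 1],
                     [0,  0, 1, 1]]).astype(float)

A = V @ np.diag(eigvals) @ np.linalg.inv(V)   # A = V D V^{-1}
print(np.linalg.det(A))     # ~10.0
print(np.prod(eigvals))     # 10.0 = product of the eigenvalues
```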
Problem: In a study of the decomposition of hydrogen peroxide in dilute sodium hydroxide at 20°C, H2O2 (aq) → H2O (l) + 1/2 O2 (g), the concentration of H2O2 was followed as a function of time. It was found that a graph of ln[H2O2] versus time in minutes gave a straight line with a slope of -1.61 x 10^-3 min^-1 and a y-intercept of -3.44. Based on this plot, the reaction is ______________ order in H2O2 and the half life for the reaction is ____________ minutes.
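For a worked check (not part of the original problem page): a linear plot of ln[H2O2] versus t is the signature of a first-order reaction, with rate constant k equal to the negative of the slope, so k = 1.61 x 10^-3 min^-1 and the half-life is t_1/2 = ln 2 / k.

```python
import math

k = 1.61e-3                      # min^-1, from the (negative) slope of ln[H2O2] vs t
t_half = math.log(2) / k         # first-order half-life, ~430 minutes
print(f"t_1/2 = {t_half:.0f} minutes")
```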
Reducing massive representation of the Poincare group to the massless one
I want to ask about the connection for massive and massless representation of the Poincare group. Sorry for the awkwardness.
First I must to represent the formalism for both of cases.
Massive representation.
There is the formalism for an arbitrary spin-$s$ and massive representation of the Poincare group: $$(\partial^{2} + m^{2})\psi_{a_{1}...a_{n}\dot {b}_{1}...\dot {b}_{k}}(x) = 0, \qquad (1)$$ $$\partial^{a \dot b}\psi_{a...a_{n - 1}\dot {b}...\dot {b}_{k - 1}}(x) = 0, \qquad (2)$$ where $n + k = 2s$; field $\psi$ is symmetric by the all of indices and has the following transformation law under the Lorentz group $\left(\frac{n}{2}, \frac{k}{2}\right)$: $$\quad \psi_{a_{1}...a_{n}\dot {b}_{1}...\dot {b}_{k}}' = N_{a_{1}}^{\quad c_{1}}...N_{a_{n}}^{\quad c_{n}}{N^{*}}_{\dot {b}_{1}}^{\quad \dot {d}_{1}}...{N^{*}}_{\dot {b}_{k}}^{\quad \dot {d}_{k}}\psi_{c_{1}...c_{n}\dot {d}_{1}...\dot {d}_{k}}.$$
$(1)$ gives the irreducibility by mass (or the statement $P^{2}\psi = m^2\psi$ for first Casimir operator of the Poincare group), while $(2)$ gives the irreducibility by spin (or the statement $W^{2}\psi = -m^{2}s(s + 1)\psi$ for the second Casimir operator). This representation has $2s + 1$ spin degrees of freedom.
For the case of an integer spin-$s$ representation refers to the traceless transverse symmetric tensor of rank s $A_{\mu_{1}...\mu_{s}}$: $$A^{\mu}_{\quad \mu ...\mu_{s - 2}} = 0, \quad \partial_{\mu}A^{\mu ...\mu_{s - 1}} = 0,$$
$$(\partial^{2} + m^{2})A_{\mu_{1}...\mu_{s}} = 0,\quad A_{\mu_{1}...\mu_{i}...\mu_{j}...\mu_{s}} = A_{\mu_{1}...\mu_{j}...\mu_{i}...\mu_{s}}.$$
Massless representation.
There is the formalism for an arbitrary helicity-$\lambda$ and massless representations (for given helicity there is infinite number of the representations!) of the Poincare group: $$\partial^{a \dot {d}}\psi_{a...a_{n - 1}\dot {b}_{1}...\dot {b}_{k}} = 0. \qquad (3)$$ $$\partial^{\dot {b} c}\psi_{a_{1}...a_{n}\dot {b}...\dot {b}_{k - 1}} = 0. \qquad (4)$$ Both of them realize the irreducibility by mass $\partial^{2} \psi = 0$ and irreducibility by helicity, $$W_{c \dot {c}}\psi_{a_{1}...a_{n}\dot {b}_{1}...\dot {b}_{k}} = \frac{n - k}{2}p_{c \dot {c}}\psi_{a_{1}...a_{n}\dot {b}_{1}...\dot {b}_{k}} = \lambda p_{c\dot {c}}\psi_{a_{1}...a_{n}\dot {b}_{1}...\dot {b}_{k}}.$$ The representation has only one independent component, as it must be for massless one. If the representation is real we must take the direct sum of representations $\left( \frac{n}{2}, \frac{k}{2} \right) + \left( \frac{k}{2} , \frac{n}{2} \right)$, so the number of degrees of freedom will increase from 1 to 2.
My question.
I'm interested only in integer-spin representations.
For the spin-1 massive case and the field (I chose one of three possible variants) $$\psi_{a \dot {b}} \to A_{\mu} = \frac{1}{2}\tilde {\sigma}_{\mu}^{\dot {b} a}\psi_{a \dot {b}}$$ there are three degrees of freedom, corresponding to the transverse condition $\partial_{\mu}A^{\mu} = 0$.
But in massless case we also can introduce gauge transformations $$A_{\mu} \to A_{\mu}' = A_{\mu} + \partial_{\mu} \varphi$$ which don't change the transverse condition and Klein-Gordon equation: $$\partial_{\mu}A^{\mu}{'} = \partial_{\mu}A^{\mu} + \partial^{2}\varphi = 0 \Rightarrow \partial^{2}\varphi = 0,$$ $$\partial^{2}A_{\mu}' = \partial^{2}A_{\mu} = 0.$$ So we can also satisfy the condition $u_{\mu}A^{\mu} = 0$, where $u_{\mu}$ is some timelike vector. So the number of independent components reduce to two (we have the case when representation is real, so this representation gives one particle with two possible states of helicity).
By the full analogy with spin-1 massive case I got the result with spin-2 massive case.
In the beginning it has $$2s + 1 = 10\ (\text{symmetry}) - 4\ (\text{transverse}) - 1\ (\text{traceless}) = 5$$ independent components. When $m = 0$ we have the gauge transformations $$A_{\mu \nu} \to A_{\mu \nu}{'} = A_{\mu \nu} + \partial_{\mu}\varepsilon_{\nu} + \partial_{\nu}\varepsilon_{\mu},$$ by which we can impose the condition $u_{\mu}A^{\mu \nu} = 0$; this decreases the number of degrees of freedom by three (the fourth condition, $k_{\mu}A^{\mu \nu}u_{\nu} = 0$, already follows from $\partial_{\mu} A^{\mu \nu} = 0$). So we again have 2 independent components.
For the arbitrary integer spin representation I also can get two components, because there is gauge transformations $$A_{\mu_{1}...\mu_{s}}{'} = A_{\mu_{1}...\mu_{s}} + \partial_{\mu_{1}}\varepsilon_{\mu_{2}} + ...,$$ by which I also can reduce the number of degrees of freedom of my field.
Finally, my question (edited): can this reduction from massive to massless case be derived on the language of $(1)-(4)$?
To make sure I understand the question. Your (2) makes field divergenceless, in vector notation you impose something like $\partial^\nu \phi_{\nu\mu....}=0$. From (3) or (4) one gets (2), but they are stronger than (2), in particular they impose $\square \phi_{\nu\mu...}=0$ while (2) does not. So I do not think one can get (3) and (4) from (2) – John Jan 3 '14 at 20:12
I've got a feeling that the $\psi$'s in your sections are the same. In (1)-(2) your $\psi$ is a gauge potential, an analog of $A_\mu$, while your (3)-(4) is an analog of the field strength $F_{\mu\nu}$. The Bianchi identities $\partial_\mu F_{\nu\sigma}+2\mbox{ more}=0$ imply that $F=dA$, i.e. $F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu$. This is how the two approaches are related. The best way to show this is to solve the equations by Fourier transform. Say for (3,4) you get $\int \xi_a...\xi_a \bar{\xi}_{\dot{b}}...\bar{\xi}_{\dot{b}} f(\xi) \exp{\xi_c\bar{\xi}_{\dot{d}} x^{c\dot{d}}}$. – John Jan 5 '14 at 8:51
One more example is the free graviton $g_{\mu\nu}$ that is described by $\psi_{aa,\dot{b}\dot{b}}$ and equivalently by the Weyl tensor $\psi_{aaaa}, \psi_{\dot{a}\dot{a}\dot{a}\dot{a}}$ – John Jan 5 '14 at 8:56
$\square A_\mu=0$ or $\square\psi_{a\dot{a}}=0$+constraint (3) can be solved analogously. There is nothing special about $4d$ spinor language you are using. The same things can be asked in any dimension. While it is nonlocal to go from (3,4) to (1,2), it is easy to see that $\partial ...\partial \psi$ with indices contracted appropriately obeys (3,4) as a consequence of (1,2). It is obvious for $A_\mu$ and $F_{\mu\nu}$, which both describes massless spin-one. – John Jan 5 '14 at 9:02
So, to summarize, there are two maps. One maps gauge potential (1,2) to gauge invariant (3,4), it is given by applying derivatives like in $A_\mu$ - $F_{\mu\nu}$ example. The map in the opposite direction is nonlocal, i.e. one can reconstruct $A_\mu$ from $F_{\mu\nu}$, but it is an integral. So the simplest way to relate the two approaches is to show that the solution spaces are isomorphic with the help of Fourier transform – John Jan 5 '14 at 19:09
# Using the substitution method for solving recurrences
I have a question.
In my book they have the following recurrence:
$T(n) = 3T(\lfloor n/4\rfloor )+\theta(n^2)$
They try to guess that $T(n) = O(n^2)$ and they then use the substitution method to verify the guess. But they don't show the base case? Isn't that necessary?
I think maybe it is because they don't know what happens with $T(n)$ when $n=1$?
In my book they also have the reccurence $T(n)=2T(\lfloor n/2 \rfloor)+n$ and $T(1)=1$
They then guess that $T(n)=O(n \ln n)$ And they use the substitution method to verify it.
They assume that the bound $T(m)\leq c\,m \ln m$ holds for all positive $m<n$, in particular for $m=\lfloor n/2 \rfloor$, which yields
$T(n) \leq 2\left( c \lfloor n/2 \rfloor \ln ( \lfloor n/2 \rfloor )\right)+n$
$\phantom{T(n)} \leq cn \ln(n/2)+n$
$\phantom{T(n)} = cn \ln(n)-cn \ln(2)+n$
$\phantom{T(n)} = cn\ln(n)-cn+n$
$\phantom{T(n)} \leq cn \ln(n)$
where the last step holds as long as $c\geq 1$.
Ok. They then say: "Mathematical induction requires us to show that our solution holds for the boundary conditions"
$T(1)\leq c \cdot 1 \cdot \ln(1)=0$
which is at odds with $T(1)=1$
but then they take advantage of asymptotic notation requiring them only to prove $T(n)\leq c n \ln(n)$ for $n\geq n_0$ where they get to choose $n_0$
Then they replace $T(1)$ by $T(2)=4$ and $T(3)=5$ as base cases in the inductive proof letting $n_0=2$
And my question is:
Why do I have to replace the base case $T(1)$ with $T(2)$ AND $T(3)$? Why not just replace it with $T(2)=4$?
I can derive $T(2)=4$ from the recurrence and then say
$T(2)\leq c2 \ln(2) = c2$
Where $c \geq 1$ and I choose $c\geq 2$
Why do I have to consider $T(3)$ ?
There are two competing ideas here, that of induction, and that of asymptotics. Normally, induction requires a base case, but because of the way asymptotics work, the base case will be trivially satisfied, because we can always take $c$ to be large enough to outweigh what happens in the early cases. The fact that they did any base cases at all is purely pedagogical, so as to make things look like induction, but as we shall see the base case is not strictly necessary.
The recurrence $T(n)$ is $O(f(n))$ if there exist constants $c$ and $n_0$ such that $T(n+n_0)<cf(n+n_0)$ for every $n>0$. Here we have phrased things in terms of $n+n_0$ just so that the induction can start at $1$, but there is no harm in replacing $n$ with $n_0$ and starting the induction with $1+n_0$. Assume $f(n)>0$ for all $n>k$. Since we don't care what $c$ is, if all we are concerned about is making the asymptotic statement true, we can take $n_0$ to be $k$, because we can push all the rest of the slack into our choice of the unspecified constant $c$. It might take many terms before $f$ starts growing as fast as $T$, but $c$ can take care of any finite number of terms before this happens. For example, if $T(n)=10^{6n}$ for $n<10^6$ and $0$ afterwards, then $T(n)$ is $O(1)$. We start with $n_0=1$ and we take $c=10^{10^{10^{10}}}$, or bigger if we want. We take $c$ to be whatever it needs to be so that the base case is immediately satisfied without any checking. Note that we could take $c=10^6$ if we only cared about the base case, but a larger constant was required to make the statement true despite our poor choice of $n_0$. Of course, we might still need $n_0$ to be large, because the growth of the sequence might depend on $n$ in such a way that it behaves differently for small $n$, and our choice of $c$ can only take care of the base case. We need $n_0$ to make sure that the induction step still holds. In the above example, if we want induction to hold, we want to take $n_0>10^6$.
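As a purely illustrative numerical sanity check of this discussion (Python, using the natural logarithm as in the question), one can compute $T(n)$ directly from the recurrence with $T(1)=1$ and confirm that, once the constant is chosen large enough to cover the base cases $T(2)=4$ and $T(3)=5$, the bound $T(n)\leq c\,n\ln n$ holds for every $n\geq 2$; here $c=3$ suffices.

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # Recurrence from the question: T(n) = 2*T(floor(n/2)) + n, with T(1) = 1.
    return 1 if n == 1 else 2 * T(n // 2) + n

c = 3
violations = [n for n in range(2, 5000) if T(n) > c * n * math.log(n)]
print(violations)   # [] -- the bound holds for all tested n >= 2 with c = 3
```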
# Is there a systematic relation between the generating functions for the rows vs that for columns of infinite sized Carleman-matrices?
(Roughly related to, but generalizing, this earlier question)
Background: The first part of the following(the column-wise-focus) is also described in Eri Jabotinski's 1953-treatize Representation of functions by matrices (at jstor)
Consider the (Carleman-)matrix of Stirling numbers of the 2nd kind, factorially rescaled; let's call it $S$. I show here only the top left edge, but it is actually meant to be of infinite size:
[matrix display not reproduced here]
It is well known that the generating function for the $c$'th column is $f_c(x)=(\exp(x)-1)^c$, and for instance the leftmost column (index $c=0$) is related to $f_0(x)=(\exp(x)-1)^0 = 1$ and the second column (index $c=1$) is related to the well known function $f_1(x)=\exp(x)-1$.
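(As a quick numerical cross-check of that column description, here is a small SymPy sketch, added for illustration only; it compares the series coefficients of $(\exp(x)-1)^c$ with the factorially rescaled Stirling numbers of the second kind for the first few positive columns.)

```python
from sympy import symbols, exp, factorial, Rational
from sympy.functions.combinatorial.numbers import stirling

x = symbols('x')
N = 8
for c in range(1, 4):
    f = (exp(x) - 1)**c
    coeffs = [f.series(x, 0, N).removeO().coeff(x, n) for n in range(N)]
    rescaled = [Rational(factorial(c) * stirling(n, c), factorial(n)) for n in range(N)]
    assert coeffs == rescaled, (c, coeffs, rescaled)
print("column entries match c! * S2(n,c) / n!")
```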
If I now extend that matrix by columns for which the generating functions are accordingly $f_{-1}(x)=(\exp(x)-1)^{-1}$, $f_{-2}(x)=(\exp(x)-1)^{-2}$ and so on, then I not only have to prepend new columns on the left but must also extend the matrix with prepended rows. The central segment of this now two-way infinitely indexed matrix, let's call it $S^*$, looks like this
[matrix display not reproduced here]
Well, I find that the column-wise gfs are $f_c(x)=(\exp(x)-1)^c$, with the column index now running from $-\infty$ to $\infty$.
Now to the view along the rows, which is where my question originates.
I have found, by analyzing the pattern, that the row-wise generating functions are $$g_r(t) = t/(1+t)/\log(1+t)^{r+1}$$ where I now have to substitute $t =1/x$ to match the column index for the exponents of $x$, so actually it is $$h_r(x)= 1/(1+x)/\log(1+1/x)^{r+1}$$ The index $r=c=0$ is at the single $1$ in the center of the image, and $r=1$ indicates the row below, which reads, from right to left, $g_1(t)=1 -1/2t+5/12t^2-3/8t^3 ...$ and is also $h_1(x)=1 -1/2/x+5/12/x^2-3/8/x^3 ...$
I've also checked the similarly starred version of the matrix of Stirling numbers 1st kind, whose entries column-wise are generated by the functions $f_c(x)=\log(1+x)^c$, and for the row-wise generating functions I've guessed $$g_r(t)= t \exp(t) / (\exp(t)-1)^{r+1}$$ and $$h_r(x) = g_r(1/x)$$ (Correct me if I'm wrong here)
My question:
Q: Is there a simple/memorizable rule for the relations of generating functions of the transposed Carleman matrices in comparable / general cases?
(Possibly this applies only to triangular Carlemanmatrices, but I don't know that)
[update] A reference to a discussion of this might be sufficient; I think I saw something like this several years ago but cannot remember where...
# Microrheology
Microrheology is a rheology method that uses colloidal tracer particles, dispersed within a sample, as probes. Tracer particles (with diameters ranging from 0.3 to 2.0 $$\mu$$m) may be naturally present in the system, as in suspensions and emulsions, or added to the medium under interest. The motion of the tracer particles reflects the rheological properties of their local environment. In purely viscous samples, tracer particles freely diffuse through the whole sample (Fig. 1a), which results in a particle mean square displacement $$\langle \Delta r^2(\tau) \rangle$$ that is linear with time (red line in Fig. 2a).
\begin{gather*} \langle \Delta r^2(\tau) \rangle= 6 D \tau \end{gather*}
where D is the particle diffusion coefficient as expressed by the (Standard) Stokes-Einstein equation: \begin{gather*} D= \frac{k_B T}{6 \pi \eta R}. \end{gather*}
Fitting the measured $$\langle \Delta r^2(\tau)\rangle$$ with these equations yields the viscosity $$\eta$$ of a Newtonian solvent containing tracer particles with known radius $$R$$. However, in materials that also contain elastic components, the $$\langle\Delta r^2(\tau)\rangle$$ shows a more complex time-dependency, which makes the above equations not generally applicable.
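As an illustration of this workflow, the following Python sketch (with made-up numbers, not measured data) fits the slope of a linear MSD to obtain D and then inverts the Stokes-Einstein relation for the viscosity of a purely viscous sample.

```python
import numpy as np

kB, T, R = 1.380649e-23, 293.15, 0.5e-6      # J/K, temperature in K, tracer radius in m (assumed)

tau = np.linspace(1e-6, 1e-3, 200)           # lag times in s
D_true = 4.3e-13                             # m^2/s, placeholder diffusion coefficient
msd = 6 * D_true * tau                       # <dr^2(tau)> = 6 D tau for free diffusion

slope, intercept = np.polyfit(tau, msd, 1)   # linear fit of the measured MSD
D_fit = slope / 6.0
eta = kB * T / (6 * np.pi * D_fit * R)       # Stokes-Einstein viscosity
print(f"D = {D_fit:.2e} m^2/s, eta = {eta:.2e} Pa s")   # roughly 1e-3 Pa s, water-like
```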
Fig. 1: a) Particles freely diffuse in purely viscous liquids and can explore the whole sample. b) Particles trapped in a gel network.
This can be illustrated with the example of a gelatin solution containing polystyrene tracer particles. At elevated temperatures (e.g. 50°C), the gelatin solution behaves purely liquid and the tracer can diffuse freely (Fig. 1a). However, at low temperature (15°C), the gelatin forms a gel and the tracer particles are trapped within the network (Fig. 1b). The thermal energy (Brownian motion) only allows local deformation with amplitudes that depend on the stiffness of the local environment. The restricted motion of the tracer particles is reflected in a $$\langle\Delta r^2(\tau)\rangle$$ that shows a plateau (blue line in Fig. 2a) with amplitude corresponding for the maximal displacement. This is characteristic of the strength of the gelatin network.
Many materials are complex fluids that exhibit both viscous and elastic behaviors, and are called, for this reason, viscoelastic materials. Their response typically depends on the length and time scale probed in the measurements. A natural way to incorporate viscoelastic behavior is to generalize the Stokes-Einstein relation [1]
\begin{gather*} G^*(\omega)= \frac{k_B T}{\pi\, R\, i\, \omega\, \langle \Delta r^2(i\,\omega) \rangle} =G'(\omega)+i\,G''(\omega). \end{gather*}
This equation allows calculation of the frequency-dependent storage $$G’(\omega)$$ and loss $$G''(\omega)$$ moduli from the measured $$\langle \Delta r^2(\tau) \rangle$$. In our example of the gelatin solution, the values obtained for $$G'(\omega)$$ and $$G''(\omega)$$ from microrheology are shown in Fig. 2b. At high temperatures (red line) $$G''(\omega)$$ is proportional to the frequency $$\omega$$, indicating a pure liquid, whereas $$G'(\omega)$$ is very small and out of plotting range. However, at low temperatures (blue lines), $$G'(\omega)$$ dominates $$G''(\omega)$$ over an extended frequency range; only at very high frequency a cross-over to a domain where $$G''(\omega)$$ is larger than $$G'(\omega)$$ is observed. Such a behavior may be approximately described by the Kelvin-Voigt model (Fig. 2c), which consists of a spring and dashpot connected in parallel. The spring stands for elasticity of the gelatin network, whereas the dashpot represents a viscous damper that describes the dissipative effect of water around the gelatin network.
Fig. 2: a) Mean square displacement $$\langle \Delta r^2(\tau) \rangle$$ from freely diffusing (red) and trapped (blue) particles.
b) Storage $$G'(\omega)$$ and and loss $$G''(\omega)$$ moduli from freely diffusing (red) and trapped particles (blue).
c) Kelvin-Voigt model consists of a spring and a dashpot connected in parallel.
Most microrheology methods are said to be "passive", i.e., they exclusively rely on thermal energy (i.e. Brownian motion) to displace the tracer particles within the sample. Only a few specialized methods (e.g. optical tweezers, magnetic microrheology) are "active", i.e., external (optical, magnetic) forces move tracer particles with energies that are stronger than thermal energy $$k_B T$$. The advantage of active methods is that the amplitude of the particle displacements can be controlled, which allows either linear or non-linear rheology to be performed. On the other hand, passive microrheology is ideal for measurements in the linear viscoelastic region (LVR) because the weak thermal energy, $$k_B T$$, ensures small amplitudes in the displacement of the tracer particles.
Microrheology can be further differentiated by the method used to measure the $$\langle \Delta r^2(\tau) \rangle$$ of the tracer particles. The most common techniques are [2]:
## Particle tracking microrheology
Sequences of images are recorded with a video camera mounted on a microscope, and software is used to track the motion of the particles. From this, the $$\langle \Delta r^2(\tau) \rangle$$ of the particles and, subsequently, the medium's rheological properties are computed. This technique yields additional information on inhomogeneous samples where particles probe different local environments. The disadvantages of this technique are that it requires laborious tuning of tracking parameters and data treatment. Moreover, the limited spatial resolution of optical microscopy restricts tracking to samples with low viscosity, i.e., where tracer particles can displace over significant distances.
## DLS microrheology
The $$\langle \Delta r^2(\tau) \rangle$$ of the tracer particles is extracted using Dynamic Light Scattering (DLS). The spatial resolution is about as low as for microscopy-based microrheology. As a consequence, DLS microrheology can only be used for samples with low viscosity where tracer particles can displace over significant distances. The main advantage compared to microscopy-based microrheology is that the measurements and data treatment are straight forward and fast.
## DWS microrheology
The $$\langle \Delta r^2(\tau) \rangle$$ of the tracer particles is extracted using Diffusing Wave Spectroscopy (DWS). This technique probes much larger sample volumes than DLS- and microscopy-based microrheology, which yields enhanced statistics at measuring times on the order of one minute. In addition, DWS has a greatly improved spatial resolution with respect to DLS and microscopy. Therefore, it allows for measurements on highly viscous and even arrested (non-ergodic) samples where motion of tracer particles is strongly restricted. Finally, the accessibility to frequencies as high as $$10^7$$ rad/s is one of the most important features of DWS microrheology. For comparison, mechanical rheometers are typically limited to frequencies up to $$10^2$$ rad/s.
The DWS RheoLab from LS Instruments is a compact and versatile instrument, which takes advantage of modern light scattering technology. In contrast to traditional mechanical rheology, samples are sealed in glass cuvettes and, therefore, can be studied over extended time ranges to assess their stability or aging. Moreover, typical mechanical rheometers are limited to frequencies up to ~100 rad/s, and may take several hours to complete a frequency sweep. By harnessing our patented DWS Echo-technique, the DWS RheoLab can perform accurate and reliable measurements of $$G'(\omega)$$ and $$G''(\omega)$$ over an extended frequency range from $$10^{-1}$$ to $$10^6$$ rad/s in a matter of minutes.
Fig. 3: Storage $$G'(\omega)$$ and loss $$G''(\omega)$$ moduli of an aqueous solution with 0.55% wt/v xanthan, measured by a mechanical rheometer and DWS RheoLab. For microrheology, polystyrene particles (980 nm diameter) were added at 1%wt/v. Note that DWS extends the measurements of $$G'(\omega)$$ and $$G''(\omega)$$ to considerably higher frequencies.
[1] T.G. Mason and D. A. Weitz:
Optical Measurements of Frequency-Dependent Linear Viscoelastic Moduli of Complex Fluids,
Physical Review Letters 74, 1250-1253 (1995).
# Velocity and acceleration of a particle with a wave going through
1. Mar 21, 2006
### sauri
2) Find an expression for the velocity of a particle in the medium while a wave is going through, and its acceleration.
I first thought that the particle might undergo simple harmonic motion when the wave passed, but then I remembered that the speed of a wave is determined by the properties of the medium it travels through; hence v is equal to the square root of elastic property/inertial property. So now I am confused as to how to solve this.
2. Mar 21, 2006
### Staff: Mentor
What is the mechanism for physical interaction between the wave and the particle? Is the particle floating on a water wave? Or is it a charged particle that is influenced by an EM wave travelling by?
3. Mar 21, 2006
### nrqed
Do not confuse the speed of the wave with the velocity of the particles. Each particle undergoes simple harmonic motion, as you said (so their speed is not constant, nor is their acceleration). The wave, on the other hand, has a constant speed, given by the square root you are describing.
Hope this helps
Pat
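As a concrete illustration (added here, and assuming a sinusoidal transverse wave, which is not specified in the original problem), take the displacement of the medium to be $y(x,t)=A\sin(kx-\omega t)$. Differentiating with respect to time at fixed $x$ gives the particle velocity and acceleration:

$$v_y(x,t)=\frac{\partial y}{\partial t}=-A\omega\cos(kx-\omega t), \qquad a_y(x,t)=\frac{\partial^2 y}{\partial t^2}=-A\omega^2\sin(kx-\omega t)=-\omega^2\,y(x,t),$$

so each particle indeed executes simple harmonic motion with angular frequency $\omega$, while the wave pattern itself travels at the constant phase speed $v=\omega/k$ set by the medium.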
codegen,C - Maple Help
codegen
C
generate C code
Calling Sequence: C(s), C(s, options)
Parameters
s - expression, array of expressions, list of equations, or procedure
Description
• Important: The codegen[C] command has been deprecated. Use the superseding command CodeGeneration[C] instead. See CodeGeneration for information on translation to other languages.
• The codegen[C] function generates C code for evaluating the input. The input s must be one of the following: a single algebraic expression, an array of algebraic expressions, a list of equations of the form name = algebraic (which is understood to mean a sequence of assignment statements), or a Maple procedure. If the array is not a named array, the name unknown is used. The remaining arguments are optional; they are described below.
• For help on translating Maple procedures into C, see codegen/C/procedure. Note: Type declarations are only given for Maple procedures. The C code that is otherwise generated will have to be placed inside a C subroutine or main program and the type declarations will have to be supplied by the user.
• The filename option: By default, the output is sent to standard output. If you are using the C command interactively, the output will appear on your terminal screen. An additional argument of the form filename = "f.c" can be used to direct the output to the file f.c.
• The optimized option: if the keyword optimized is specified as an additional argument, common subexpression optimization is performed. The result is a sequence of assignment statements in which temporary values are stored in local variables beginning with the letter t. The global names t0, t1, t2, ... are reserved for use by C for this purpose. The input to the optimizer must satisfy certain conditions. See codegen/optimize for more information.
• The precision option: the optional argument precision=single or precision=double specifies whether single precision or double precision is to be used in the generation of floating-point variables and constants. The default is single precision if the mode=single option is provided and double precision otherwise.
• The mode option: the optional argument mode=single or mode=double specifies whether single precision or double precision math function names are generated. The default is double precision (resulting in the standard math library names), even if the precision=single option is provided.
• The digits option: non-floating point Maple constants such as integers, fractions, and symbols such as Pi, are converted using evalf to floating-point constants where necessary, for example as arguments to real functions such as sqrt. By default, the number of digits used is 7 for single precision, 16 for double precision. This can be set to n digits by specifying an optional argument digits = n.
• The ansi option: If the input is a Maple procedure, by default, parameter declarations follow the old (Kernighan and Ritchie) C compiler syntax. If this option is given, parameter declarations follow the ANSI C syntax. See the last example in the Examples section.
• The declarations option: If the optional argument declarations = x::t is given, this specifies that the variable x is to be given the type t during translation. For more help on translating Maple procedures, see codegen/C/procedure.
• The parameters, locals, and globals options: if s is a computation sequence (a list of equations) then, by default, C assumes that all variables on the left-hand-side of the equations are global variables which cannot be removed from the computation sequence. These options are used to specify otherwise. Local variables may be removed by the C translator from a computation sequence.
• In translating arrays (Maple vectors and matrices and other arrays), the C function will reindex array subscripts to 0-based indexing which C requires. If the array contains unassigned entries, the value output in the C code is the string undefined.
• The C translator will translate certain Maple functions into their C equivalents. The known functions are listed below. For example, sin(x) + sec(x) will be translated into sin(x)+1/cos(x) in double precision mode and sinf(x)+1/cosf(x) in single precision mode. If a function is not handled by the C translator, the user will be informed and the function will be translated as is.
• The Maple floating-point functions understood by the C translator are: abs(x), signum(x), min(x,y), max(x,y), sqrt(x), ceil(x), floor(x), round(x), trunc(x), ln(x), exp(x), erf(x), and the trigonometric and hyperbolic functions, and their inverses. The Maple integer functions understood are: abs(a), min(a,b), max(a,b), signum(a), irem(a,b), iquo(a,b), and modp(a,p).
• The function C produces C code as a side-effect and returns NULL as the function value. Therefore, the ditto commands (% and %%) will not recall the output from the C command.
• The command with(codegen,C) allows the use of the abbreviated form of this command.
Examples
Important: The codegen[C] command has been deprecated. Use the superseding command CodeGeneration[C] instead.
> with(codegen,C):
> f := 1 - x/2 + 3*x^2 - x^3 + x^4;
                  f := 1 - 1/2 x + 3 x^2 - x^3 + x^4                    (1)
> C(f);
t0 = 1.0-x/2.0+3.0*x*x-x*x*x+x*x*x*x;
> C(f, optimized);
t2 = x*x; t5 = t2*t2; t6 = 1.0-x/2.0+3.0*t2-t2*x+t5;
> f := Pi*ln(x^2) - sqrt(2)*ln(x^2)^2;
                  f := Pi ln(x^2) - sqrt(2) ln(x^2)^2                   (2)
> C(f, optimized);
t1 = x*x; t2 = log(t1); t4 = sqrt(2.0); t5 = t2*t2; t7 = 0.3141592653589793E1*t2-t4*t5;
> cs := [s = 1+x, t = ln(s)*exp(-x), r = exp(-x) + x*t];
            cs := [s = 1+x, t = ln(s) exp(-x), r = exp(-x) + x t]       (3)
> C(cs, optimized);
s = 1.0+x; t1 = log(s); t2 = exp(-x); t = t2*t1; r = x*t+t2;
> C(cs, optimized, locals=[s,t,r]);
t1 = log(1.0+x); t2 = exp(-x); r = x*t2*t1+t2;
> v := array([exp(-x)*x, exp(-x)*x^2]):
> C(v, optimized);
t1 = exp(-x); t3 = x*x; v[0] = t1*x; v[1] = t1*t3;
A matrix with an undefined entry
> A := array(1..2, 1..2, symmetric):
> A[1,1] := log(x):  A[1,2] := 1 - log(x):
> print(A);
            [ ln(x)        1 - ln(x) ]
            [ 1 - ln(x)    ?[2,2]    ]                                  (4)
> C(A, mode=single);
A[0][0] = logf(x); A[0][1] = 1.0-logf(x); A[1][0] = 1.0-logf(x); A[1][1] = (0.0/0.0);
> C(A, optimized);
t1 = log(x); t2 = 1.0-t1; A[0][0] = t1; A[0][1] = t2; A[1][0] = t2; A[1][1] = (0.0/0.0);
A simple procedure with declarations
> f := convert(1 - 3*x^2 - 2*x^3 + x^4, horner);
                  f := 1 + (-3 + (-2 + x) x) x^2                        (5)
> f := unapply(f, x);
                  f := x -> 1 + (-3 + (-2 + x) x) x^2                   (6)
> C(f);
/* The options were : operatorarrow */
double f(x)
double x;
{
  {
    return(1.0+(-3.0+(-2.0+x)*x)*x*x);
  }
}
> C(f, ansi);
/* The options were : operatorarrow */
double f(double x)
{
  {
    return(1.0+(-3.0+(-2.0+x)*x)*x*x);
  }
}
# Absolute Value of Product/Proof 3
## Theorem
Let $x, y \in \R$ be real numbers.
Then:
$\size {x y} = \size x \size y$
where $\size x$ denotes the absolute value of $x$.
## Proof
$\ds \size {x y} = \sqrt {\paren {x y}^2} = \sqrt {x^2 y^2} = \sqrt {x^2} \sqrt {y^2} = \size x \cdot \size y$, where the first equality holds by Definition 2 of Absolute Value.
$\blacksquare$
Issue No. 10 - October 1990 (vol. 39), pp. 1213-1219
ABSTRACT
Notation and a theorem are presented which, using a result of B. Chazelle and L.J. Guibas (1985), enable the authors to design an O(n log n) algorithm for reporting all visibility edges of a given n-vertex polygon. Improving on this bound to O(n) is then the focus. This problem is solved for polygons with at least one given visibility edge; it is assumed that both endpoints of this edge are convex vertices. Subsequently, it is shown how to drop this restriction. The general case of detecting weak edge visibility of an arbitrary simple polygon is dealt with.
INDEX TERMS
visibility detection; optimal algorithm; visibility edges; n-vertex polygon; endpoints; convex vertices; weak edge visibility; computational complexity; computational geometry.
CITATION
J.-R. Sack, S. Suri, "An Optimal Algorithm for Detecting Weak Visibility of a Polygon", IEEE Transactions on Computers, vol. 39, no. 10, pp. 1213-1219, October 1990, doi:10.1109/12.59852
## Confidence Limits for the Mean
Purpose: Interval Estimate for the Mean
Confidence limits for the mean (Snedecor and Cochran, 1989) are an interval estimate for the mean. Interval estimates are often desirable because the estimate of the mean varies from sample to sample. Instead of a single estimate for the mean, a confidence interval generates a lower and upper limit for the mean. The interval estimate gives an indication of how much uncertainty there is in our estimate of the true mean. The narrower the interval, the more precise is our estimate.
Confidence limits are expressed in terms of a confidence coefficient. Although the choice of confidence coefficient is somewhat arbitrary, in practice 90 %, 95 %, and 99 % intervals are often used, with 95 % being the most commonly used.
As a technical note, a 95 % confidence interval does not mean that there is a 95 % probability that the interval contains the true mean. The interval computed from a given sample either contains the true mean or it does not. Instead, the level of confidence is associated with the method of calculating the interval. The confidence coefficient is simply the proportion of samples of a given size that may be expected to contain the true mean. That is, for a 95 % confidence interval, if many samples are collected and the confidence interval computed, in the long run about 95 % of these intervals would contain the true mean.
Definition: Confidence Interval. Confidence limits are defined as:
$\bar{Y} \pm t_{1 - \alpha/2, \, N-1} \,\, \frac{s}{\sqrt{N}}$
where $$\bar{Y}$$ is the sample mean, s is the sample standard deviation, N is the sample size, α is the desired significance level, and t1-α/2, N-1 is the 100(1-α/2) percentile of the t distribution with N - 1 degrees of freedom. Note that the confidence coefficient is 1 - α.
From the formula, it is clear that the width of the interval is controlled by two factors:
1. As N increases, the interval gets narrower from the $$\sqrt{N}$$ term.
That is, one way to obtain more precise estimates for the mean is to increase the sample size.
2. The larger the sample standard deviation, the larger the confidence interval. This simply means that noisy data, i.e., data with a large standard deviation, are going to generate wider intervals than data with a smaller standard deviation.
Definition: Hypothesis Test. To test whether the population mean has a specific value, $$\mu_{0}$$, against the two-sided alternative that it does not have the value $$\mu_{0}$$, the confidence interval is converted to hypothesis-test form. The test is a one-sample t-test, and it is defined as:
H0: $$\mu = \mu_{0}$$
Ha: $$\mu \neq \mu_{0}$$
Test statistic: $$T = (\bar{Y} - \mu_{0})/(s/\sqrt{N})$$, where $$\bar{Y}$$, N, and s are defined as above.
Significance level: α. The most commonly used value for α is 0.05.
Critical region: Reject the null hypothesis that the mean is the specified value $$\mu_{0}$$ if $$T < t_{\alpha/2, \, N-1}$$ or $$T > t_{1 - \alpha/2, \, N-1}$$.
Confidence Interval Example. We generated a 95 %, two-sided confidence interval for the ZARR13.DAT data set based on the following information.
N = 195
MEAN = 9.261460
STANDARD DEVIATION = 0.022789
t1-0.025,N-1 = 1.9723
LOWER LIMIT = 9.261460 - 1.9723*0.022789/√195
UPPER LIMIT = 9.261460 + 1.9723*0.022789/√195
Thus, a 95 % confidence interval for the mean is (9.258242, 9.264679).
t-Test Example. We performed a two-sided, one-sample t-test using the ZARR13.DAT data set to test the null hypothesis that the population mean is equal to 5.
H0: μ = 5
Ha: μ ≠ 5
Test statistic: T = 2611.284
Degrees of freedom: ν = 194
Significance level: α = 0.05
Critical value: t1-α/2,ν = 1.9723
Critical region: Reject H0 if |T| > 1.9723
We reject the null hypothesis for our two-tailed t-test because the absolute value of the test statistic is greater than the critical value. If we were to perform an upper, one-tailed test, the critical value would be t1-α,ν = 1.6527, and we would still reject the null hypothesis.
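Both examples can be reproduced with a few lines of Python using SciPy (a sketch, not the Dataplot/R code mentioned at the end of this section); the summary statistics are taken from the text above.

```python
from math import sqrt
from scipy import stats

N, ybar, s = 195, 9.261460, 0.022789
alpha, mu0 = 0.05, 5.0

t_crit = stats.t.ppf(1 - alpha / 2, N - 1)                  # ~1.9723
half_width = t_crit * s / sqrt(N)
print("95% CI:", (ybar - half_width, ybar + half_width))    # ~(9.258242, 9.264679)

T_stat = (ybar - mu0) / (s / sqrt(N))                       # one-sample t statistic, ~2611.28
print("T =", T_stat, "reject H0:", abs(T_stat) > t_crit)
```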
The confidence interval provides an alternative to the hypothesis test. If the confidence interval contains 5, then H0 cannot be rejected. In our example, the confidence interval (9.258242, 9.264679) does not contain 5, indicating that the population mean does not equal 5 at the 0.05 level of significance.
In general, there are three possible alternative hypotheses and rejection regions for the one-sample t-test:
Alternative Hypothesis Rejection Region
Ha: μ ≠ μ0 |T| > t1-α/2,ν
Ha: μ > μ0 T > t1-α,ν
Ha: μ < μ0 T < tα,ν
The rejection regions for the three possible alternative hypotheses using our example data are shown in the following graphs.
Questions: Confidence limits for the mean can be used to answer the following questions:
1. What is a reasonable estimate for the mean?
2. How much variability is there in the estimate of the mean?
3. Does a given target value fall within the confidence limits?
Related Techniques: Two-Sample t-Test
Confidence intervals for other location estimators such as the median or mid-mean tend to be mathematically difficult or intractable. For these cases, confidence intervals can be obtained using the bootstrap.
Case Study: Heat flow meter data.
Software: Confidence limits for the mean and one-sample t-tests are available in just about all general purpose statistical software programs. Both Dataplot code and R code can be used to generate the analyses in this section.
# How does throttling affect the upstream flow?
Let's assume we have a simple duct with a fan (compressor) situated somewhere along it. The fan runs at the same speed always. At the end of the duct, downstream of the fan, there is a throttle valve. The throttle valve is simply a plate facing normal to the flow in the duct, and can move horizontally (towards and away from the duct). Effectively the plate can completely block the duct outlet, giving the ducted flow no where to exit, or it can be placed sufficiently far away such that the flow "exhausts" to the atmosphere freely.
My question is now: Why does throttling (moving the plate closer to the duct exit) reduce the mass flow rate of the flow in the duct?
Will the same result be observed in a potential (irrotational) flow?
I understand that the throttling process is isenthalpic, which results in a loss of stagnation pressure across it, if that helps from the start.
Edit: I would also like to understand the causality of the process: does the throttling cause a reduced mass flow which means the fan will now achieve a higher head (pressure increase) at the same speed or does the throttling cause an increased pressure which reduces the mass flow?
This is a great question, and the answer relates intimately to why turbofan engines equipped with afterburners require variable geometry exhaust nozzles. Without increasing the throat area to accommodate the larger volumetric flow rate, lighting the afterburner would back-pressure the fan and very possibly lead to a compressor stall. Similarly, mechanically reducing the downstream area (all other things being equal) will require the flow to have a higher upstream stagnation pressure, which means the fan/pump will be required to work harder.
Now, as to your question about why the flowrate decreases when the exit area is closed, we need to expound a bit on how fans and compressors operate. The fan speed, massflow rate, and pressure ratio are related in a complex way and are usually represented graphically by a fan map.
For a given rotational speed, there is a single steady-state characteristic relating pressure ratio and massflow rate. The shape of this curve can vary (e.g. compare the 40% Nf line with the 100% Nf line above), but generally speaking the higher the pressure ratio, the lower the massflow rate for a given engine RPM. This makes some intuitive sense because the faster the bladetip velocity compared to the axial velocity, the higher will be the flow turning within the bladerow. Work done and pressure rise are proportional to flow turning within the rotor, so higher pressure ratios are positively correlated with lower massflow rates/axial velocities (up to a point).
To truly understand the causal relationship between massflow rate and back-pressure requires that we abandon steady-state thinking altogether. If the exit area is reduced, unsteady compression waves propagate upstream at the speed of sound, incrementally increasing the static pressure at the exit of the fan. This increased back-pressure means that the entering flow now encounters an adverse streamwise pressure gradient and slows down. This slower flow is then worked harder by the spinning bladerow, which results in larger stagnation pressure and temperature rises.
Remember that the flow always exits the device at atmospheric pressure so long as it is subsonic, precisely because of the information propagated upstream by unsteady pressure waves. Thus, if reducing the exit area means a higher exit Mach number is required to conserve mass, the total-to-static pressure ratio must increase. A fixed exit static pressure and increased total-to-static pressure ratio demands that the upstream stagnation pressure increase, and so the upstream turbomachinery will be affected.
If you are looking to put numbers on things, the isentropic flow function is a useful and straightforward way to determine the massflow rate of a compressible fluid if other of the fluid's basic properties are known. In general, the massflow rate of a fluid through a cross-sectional area $A$ is equal to
$\dot{m}=\rho VA$.
Now, if the fluid is compressible and the Ideal Gas Law applies, then
$\dot{m}=\rho VA=\left(\frac{P}{RT}\right)(M\sqrt{\gamma RT})A=PAM\sqrt{\frac{\gamma}{RT}}$.
Both the stagnation temperature and stagnation pressure are preferred flow variables to their static counterparts, so the above equation can be rewritten as
$\dot{m}=P_0 \left(\frac{P}{P_0}\right)AM\sqrt{\frac{\gamma (T_0/T)}{R(T_0)}}$,
and the stagnation properties (as well as the through-flow area) can be moved to the LHS of the equation:
$\frac{\dot{m}\sqrt{T_0}}{P_0 A}=\left(\frac{P}{P_0}\right)M\sqrt{\frac{\gamma}{R}\left(\frac{T_0}{T}\right)}$
If the flow is isentropic (as we are assuming), we know that
$\frac{P}{P_0}=\left(\frac{P_0}{P}\right)^{-1}=\left(\frac{T_0}{T}\right)^\frac{\gamma}{1-\gamma}$,
which gives us
$\frac{\dot{m}\sqrt{T_0}}{P_0 A}=M\sqrt{\frac{\gamma}{R}}\left(\frac{T_0}{T}\right)^{\frac{1}{2}+\frac{\gamma}{1-\gamma}}=M\sqrt{\frac{\gamma}{R}}\left(\frac{T_0}{T}\right)^{\frac{1+\gamma}{2(1-\gamma)}}$.
Again invoking our assumption of isentropic flow, we know that the stagnation temperature ratio is related to the local Mach number by the following equation:
$\frac{T_0}{T}=1+\frac{\gamma-1}{2}M^2$
which, when plugged into the previously derived expression gives us the isentropic flow function $FF_T$:
$FF_T=\frac{\dot{m}\sqrt{T_0}}{P_0 A}=M\sqrt{\frac{\gamma}{R}}\left(1+\frac{\gamma-1}{2}M^2\right)^{\frac{1+\gamma}{2(1-\gamma)}}$
To compute the massflow rate we simply rearrange the isentropic flow function relation...
$\boxed{\dot{m}=P_0 AM\sqrt{\frac{\gamma}{RT_0}}\left(1+\frac{\gamma-1}{2}M^2\right)^{\frac{1+\gamma}{2(1-\gamma)}}}$.
**Note**: The above equation is true at any given section within a compressible flow, but the stagnation properties may change from location to location (or over time) based on the specifics of the exact flow the equation is being applied to.
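As a quick numerical illustration of the boxed relation (not part of the original answer; the stagnation conditions below are arbitrary round numbers, and gamma and R are set to typical values for air):

```python
import numpy as np

def mass_flow(P0, T0, A, M, gamma=1.4, R=287.0):
    """Massflow rate from the boxed isentropic-flow relation above.

    P0 [Pa] and T0 [K] are stagnation pressure/temperature, A [m^2] is the
    through-flow area, M the local Mach number; gamma and R default to air.
    """
    return (P0 * A * M * np.sqrt(gamma / (R * T0))
            * (1 + (gamma - 1) / 2 * M**2) ** ((1 + gamma) / (2 * (1 - gamma))))

# e.g. near-sea-level stagnation conditions through a 0.1 m^2 section at M = 0.5
print(mass_flow(P0=101325.0, T0=288.0, A=0.1, M=0.5))
```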
• Mathjax tip: Instead of $...$, use $$...$$; then you don't need blank lines buffering the equation, and more importantly it doesn't try to squeeze things vertically to fit inline. (Nice answer by the way :) – user10851 Jul 15 '14 at 22:08
• Thanks, this is a great answer, it explains it very well. – Dipole Jul 16 '14 at 23:01
For a given pressure drop (pressure upstream minus pressure downstream), the flow rate is proportional to the 4th power of radius, according to the Hagen–Poiseuille equation.
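For reference, the Hagen–Poiseuille relation being invoked here (for fully developed laminar flow, with $\mu$ the dynamic viscosity and $L$ the length of the restriction) is

$Q = \frac{\pi \, \Delta P \, r^{4}}{8 \mu L}$,

which is where the fourth-power dependence on radius comes from.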
As you close the valve, flow across the valve will decrease, and pressure between the fan and the valve will increase. Because the pressure on the downstream side of the fan increases, flow across the fan will decrease. A steady state where flow across the valve and across the fan will both have decreased to the same lower value will be reached.
# Finding the interval of convergence
1. Oct 27, 2016
### NihalRi
1. The problem statement, all variables and given/known data
The question was to find the interval of convergence for a series.
2. Relevant equations
a_{n+1}/a_n
3. The attempt at a solution
2. Oct 28, 2016
### Staff: Mentor
Do you have a question?
In computer vision tasks, we need to crop a rotated rectangle from the original image sometimes, for example, to crop a rotated text box. In this post, I would like to introduce how to do this in OpenCV.
If you search on the internet about cropping rotated rectangles, there are several answers on Stack Overflow which suggest using minAreaRect() to find the minimum bounding rectangle, rotating the original image and finally cropping the rectangle from the image. You can find these questions here and here. While some of the answers work, they only work in certain conditions. But if the rotated rectangle is near the edge of the original image, some part of the cropped rectangle is cut out in the output.
# The imperfect way
Take the following image with rotated text as an example,
The corner points1 are (in top left, top right, bottom right, bottom left order):
[64, 49], [122, 11], [391, 326], [308, 373]
If you crop the rectangle using the following script (based on this answer):
    import cv2
    import numpy as np


    def main():
        img = cv2.imread("big_vertical_text.jpg")
        cnt = np.array([
            [[64, 49]],
            [[122, 11]],
            [[391, 326]],
            [[308, 373]]
        ])
        # find the exact rectangle enclosing the text area
        # rect is a tuple consisting of 3 elements: the first element is the center
        # of the rectangle, the second element is the width, height, and the
        # third element is the detected rotation angle.
        # Example output: ((227.5, 187.50003051757812),
        # (94.57575225830078, 417.98736572265625), -36.982906341552734)
        rect = cv2.minAreaRect(cnt)
        print("rect: {}".format(rect))

        box = cv2.boxPoints(rect)
        box = np.int0(box)
        # print("bounding box: {}".format(box))
        cv2.drawContours(img, [box], 0, (0, 0, 255), 2)

        # img_crop will be the cropped rectangle, img_rot is the rotated image
        img_crop, img_rot = crop_rect(img, rect)
        cv2.imwrite("cropped_img.jpg", img_crop)
        cv2.waitKey(0)


    def crop_rect(img, rect):
        # get the parameter of the small rectangle
        center, size, angle = rect[0], rect[1], rect[2]
        center, size = tuple(map(int, center)), tuple(map(int, size))

        # get row and col num in img
        height, width = img.shape[0], img.shape[1]

        # calculate the rotation matrix
        M = cv2.getRotationMatrix2D(center, angle, 1)
        # rotate the original image
        img_rot = cv2.warpAffine(img, M, (width, height))

        # now rotated rectangle becomes vertical, and we crop it
        img_crop = cv2.getRectSubPix(img_rot, size, center)

        return img_crop, img_rot


    if __name__ == "__main__":
        main()
In the above code, we first find the rectangle enclosing the text area based on the four points we provide using the cv2.minAreaRect() method. Then in function crop_rect(), we calculate a rotation matrix and rotate the original image around the rectangle center to straighten the rotated rectangle. Finally, the rectangle text area is cropped from the rotated image using cv2.getRectSubPix method.
The original, rotated and cropped image are shown below:
We can see clearly that some parts of the text are cut out in the final result. Of course, you can pad the image beforehand, and crop the rectangle2 from the padded image, which will prevent the cutting-out effect.
# The better way
Is there a better way? Yes!
In the above code, when we want to draw the rectangle area in the image, we use cv2.boxPoints() method to get the four corner points of the real rectangle. We also know the width and height of rectangle from rect. Then we can directly warp the rectangle from the image using cv2.warpPerspective() function. The following script shows an example:
    import cv2
    import numpy as np


    def main():
        img = cv2.imread("big_vertical_text.jpg")

        # points for test.jpg
        cnt = np.array([
            [[64, 49]],
            [[122, 11]],
            [[391, 326]],
            [[308, 373]]
        ])
        print("shape of cnt: {}".format(cnt.shape))
        rect = cv2.minAreaRect(cnt)
        print("rect: {}".format(rect))

        # the order of the box points: bottom left, top left, top right,
        # bottom right
        box = cv2.boxPoints(rect)
        box = np.int0(box)
        print("bounding box: {}".format(box))
        cv2.drawContours(img, [box], 0, (0, 0, 255), 2)

        # get width and height of the detected rectangle
        width = int(rect[1][0])
        height = int(rect[1][1])

        src_pts = box.astype("float32")
        # coordinate of the points in box points after the rectangle has been
        # straightened
        dst_pts = np.array([[0, height-1],
                            [0, 0],
                            [width-1, 0],
                            [width-1, height-1]], dtype="float32")

        # the perspective transformation matrix
        M = cv2.getPerspectiveTransform(src_pts, dst_pts)

        # directly warp the rotated rectangle to get the straightened rectangle
        warped = cv2.warpPerspective(img, M, (width, height))
        # cv2.imwrite("crop_img.jpg", warped)
        cv2.waitKey(0)


    if __name__ == "__main__":
        main()
Now the cropped rectangle becomes
If you check carefully, you will notice that there are some black areas in the cropped image. That is because a small part of the detected rectangle is outside the bounds of the image. To remedy this, you may pad the image a bit and do the crop after that. An example is given here (by me :)).
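A minimal sketch of the padding idea mentioned above (the 100-pixel margin and the white fill are arbitrary choices; the corner points must be shifted by the same offset, as noted in footnote 2):

```python
import cv2
import numpy as np

pad = 100  # arbitrary margin; make it larger than the expected overhang
img = cv2.imread("big_vertical_text.jpg")
padded = cv2.copyMakeBorder(img, pad, pad, pad, pad,
                            cv2.BORDER_CONSTANT, value=(255, 255, 255))

# shift the corner points by the same offset before computing the rectangle
cnt = np.array([[[64, 49]], [[122, 11]], [[391, 326]], [[308, 373]]],
               dtype=np.int32) + pad
rect = cv2.minAreaRect(cnt)
# ... then continue with boxPoints/getPerspectiveTransform as in the script above
```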
# About the angle of the returned rectangle
The last element of the returned rect is the detected angle of the rectangle. But it has confused a lot of people, for example, see here and here.
This angle is in the range $[-90, 0)$. After much experimentation, I have found the relationship between the rectangle orientation and the output angle of minAreaRect(). It can be summarized in the following image
The following description assumes that we have a rectangle with unequal height and width, i.e., it is not a square.
If the rectangle lies vertically (width < height), then the detected angle is -90. If the rectangle lies horizontally, then the detected angle is also -90 degrees.
If the top part of the rectangle is in the first quadrant, then the detected angle decreases as the rectangle rotates from horizontal to vertical position, until the detected angle becomes -90 degrees. In the first quadrant, the width of the detected rectangle is longer than its height.
If the top part of the detected rectangle is in the second quadrant, then the angle decreases as the rectangle rotates from vertical to horizontal position. But there is a difference between the second and first quadrants. If the rectangle approaches the vertical position but has not reached it, its angle approaches 0. If the rectangle approaches the horizontal position but has not reached it, its angle approaches -90 degrees.
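If you want to check the convention on your own installation, a tiny probe like the one below helps. Note that this is only a sanity check: newer OpenCV releases (roughly 4.5 onwards) changed the angle convention, so the values you see may not match the description above.

```python
import cv2
import numpy as np

# build the 4 corners of a known rotated rectangle and feed them back in
known_rect = ((200.0, 200.0), (100.0, 40.0), -30.0)   # centre, (w, h), angle
pts = cv2.boxPoints(known_rect).reshape(-1, 1, 2).astype(np.float32)
print(cv2.minAreaRect(pts))   # compare the reported (w, h) and angle
```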
This post here is also good at explaining this.
# Conclusion
In this post, I compared the two methods to crop the rotated rectangle from the image and also explained the meaning of angle returned by cv2.minAreaRect() method. Overall, I like the second method since it does not require rotating the image and can deal with this problem more elegantly with less code.
# References
1. We only need to provide roughly the 4 corner points of the rectangle. Then we can use OpenCV to find the exact rectangle enclosing the text area. ↩︎
2. You need to re-calculate the corner points of the rectangle after padding the original image. ↩︎
# Is this function integrable?
1. ## Is this function integrable?
Is the function $f(x,y) = \exp(-xy)$ integrable over the region
$\{(x,y) : 0 < x < y < x+x^2\}$?
I can show that f is not integrable over the whole positive quadrant, but this seems a lot trickier... I'm not even sure what the answer "should" be (I'm assuming it is integrable though). I guess we could use comparison or some other technique or trick.
2. So integrate over y first:
$\int_{x}^{x+x^2} e^{-xy}\,dy = \frac{e^{-x^2 (x+1)} \left(-1+e^{x^3}\right)}{x}$
Now show that this goes to zero faster than something you know is integrable from zero to infinity, like $e^{-(x+1)}$
$\lim_{x \to \infty} \frac{ \left ( \frac{e^{-x^2 (x+1)} \left(-1+e^{x^3}\right)}{x} \right ) }{e^{-(x+1)} }$
Apply L'Hôpital's rule to the numerator once, then bring down the negative powers and you get an $e^{x^3}$ on top and an $e^{x^4}$ on the bottom. Then this will converge to zero.
Then the integral is integrable.
I suppose there are also problems at zero, but take the limit of the function at zero and you get zero so all is well.
I made a mistake, you don't get $e^{x^4}$, that's obvious. It will still converge though due to the other terms. There might be something better to compare it to though besides $e^{-(x+1)}$.
4. It is even more clear if you just write it as:
$\frac{e^{-x^2 (x+1)} \left(-1+e^{x^3}\right)}{x} = - \frac{1 }{ xe^{x^{2}(1+x)}} + \frac{ 1}{ xe^{x^{2}}}$
and then just compare it to 1/(x^2) or something similar.
5. Comparison to $1/x^2$ will only show integrability over $[1,\infty)$. I think the real difficulty is showing integrability over $(0,1)$. Indeed I'm not even sure if it integrable over this interval.
In fact all someone needs to be able to do is to prove whether or not $1/(x\exp(x^2))$ is integrable over $(0,1)$ and then we'll be done.
6. I think you can show that it isn't integrable by making the substitution $x^2 = t$ and then using the fact that $1/(t\exp(t))$ is not integrable.
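(To spell the substitution out: with $t = x^2$, so $dt = 2x\,dx$, one gets $\int_0^1 \frac{e^{-x^2}}{x}\,dx = \frac{1}{2}\int_0^1 \frac{e^{-t}}{t}\,dt$, and the right-hand side diverges because the integrand behaves like $1/(2t)$ near $t = 0$.)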
7. Originally Posted by Mentia
I suppose there are also problems at zero, but take the limit of the function at zero and you get zero so all is well.
Originally Posted by HenryB
Comparison to $1/x^2$ will only show integrability over $[1,\infty)$. I think the real difficulty is showing integrability over $(0,1)$. Indeed I'm not even sure if it integrable over this interval.
Mentia is right, there should be no problem at 0: $\int_0^1 \int_x^{x+x^2} e^{-xy} dy dx$ amounts to integrating a continuous function on a compact subset of $\mathbb{R}^2$, so it is finite, without computation. It only remains to show that $\int_1^\infty \int_x^{x+x^2} e^{-xy} dy dx$ is finite, and a comparison with $\frac{1}{x^2}$ works fine. In order to simplify the computation, you can even say: for $x\geq 1$,
$0\leq \int_x^{x+x^2} e^{-xy} dy\leq \int_x^\infty e^{-xy} dy = \frac{e^{-x^2}}{x}\leq \frac{C}{x^2}$,
where $C$ is some constant. So that the integral on $x\in[1,+\infty)$ converges.
Just for the sake of not letting a question answered: the function $\frac{e^{-x^2}}{x}$ is clearly not integrable on $(0,1]$: the numerator converges to 1 as $x\to0^+$, hence the function is greater than $\frac{1/2}{x}$ for small enough $x$ (or we can say we have the asymptotic equivalence $\frac{e^{-x^2}}{x}\sim_{x\to 0^+} \frac{1}{x}$), and $\frac{1}{x}$ is not integrable on $(0,1]$.
So how come I proved the function is integrable? Just because there is another term besides $\frac{e^{-x^2}}{x}$, and their difference tends to 0 when $x\to 0$: no divergence.
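As a numerical cross-check of this convergence argument (an editorial addition, assuming SciPy is available), the double integral can be evaluated directly:

```python
import numpy as np
from scipy.integrate import dblquad

# integrate exp(-x*y) over 0 < x < infinity, x < y < x + x^2
val, err = dblquad(lambda y, x: np.exp(-x * y),
                   0, np.inf,
                   lambda x: x,
                   lambda x: x + x**2)
print(val, err)   # a finite value with a small error estimate
```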
Force Arrow
Display an arrow to indicate the force magnitude and direction on a component
Description: During an animation, the height of the arrow changes to represent the instantaneous force magnitude and direction acting on a component. You must connect this component in parallel with a multibody Force and Moment sensor in your model. For more information on how to use the Force Arrow, see Using the Force Arrow.
Connections
frame_a: Defines the position of the Force Arrow base in the 3D visualization
force: Force input from the Force and Moment sensor
Parameters
scale (default: 1): Force Arrow scale factor ($n$). The force magnitude length (m) is divided by this number ($n$), to give (m/$n$).
color (default: red): Defines the arrow and cylinder color. For information on using the color selection tools, see Selecting Colors.
# Welcome to the UK List of TeX Frequently Asked Questions on the Web
## Why doesn’t verbatim work within …?
The LaTeX verbatim commands work by changing category codes. Knuth says of this sort of thing “Some care is needed to get the timing right…”, since once the category code has been assigned to a character, it doesn’t change. So \verb and \begin{verbatim} have to assume that they are getting the first look at the parameter text; if they aren’t, TeX has already assigned category codes so that the verbatim command doesn’t have a chance. For example:
\verb+\error+
will work (typesetting ‘\error’), but
\newcommand{\unbrace}[1]{#1}
\unbrace{\verb+\error+}
will not (it will attempt to execute \error). Other errors one may encounter are ‘\verb ended by end of line’, or even the rather more helpful ‘\verb illegal in command argument’. The same sorts of thing happen with \begin{verbatim}\end{verbatim}:
\ifthenelse{\boolean{foo}}{%
\begin{verbatim}
foobar
\end{verbatim}
}{%
\begin{verbatim}
barfoo
\end{verbatim}
}
provokes errors like ‘File ended while scanning use of \@xverbatim’, as \begin{verbatim} fails to see its matching \end{verbatim}.
This is why the LaTeX book insists that verbatim commands must not appear in the argument of any other command; they aren’t just fragile, they’re quite unusable in any “normal” command parameter, regardless of \protection. (The \verb command tries hard to detect if you’re misusing it; unfortunately, it can’t always do so, and the error message is therefore not reliable as an indication of problems.)
The first question to ask yourself is: “is \verb actually necessary?”.
• If \texttt{your text} produces the same result as \verb+your text+, then there’s no need of \verb in the first place.
• If you're using \verb to typeset a URL or email address or the like, then the \url command from the url package will help: it doesn't suffer from all the problems of \verb, though it's still not robust; "typesetting URLs" offers advice here.
• If you're putting \verb into the argument of a boxing command (such as \fbox), consider using the lrbox environment:
\newsavebox{\mybox}
...
\begin{lrbox}{\mybox}
\verb!VerbatimStuff!
\end{lrbox}
\fbox{\usebox{\mybox}}
If you can’t avoid verbatim, the \cprotect command (from the package cprotect) might help. The package manages to make a macro read a verbatim argument in a “sanitised” way by the simple medium of prefixing the macro with \cprotect:
\cprotect\section{Using \verb|verbatim|}
The package (at the time this author tested it) was still under development (though it does work in this simple case) and deserves consideration in most cases; the package documentation gives more details.
Another way out is to use one of “argument types” of the \NewDocumentCommand command in the experimental LaTeX3 package xparse:
\NewDocumentCommand\cmd{ m v m }{#1 #2' #3}
\cmd{Command }|\furble|{ isn't defined}
Which gives us:
Command \furble isn’t defined
The “m” tag argument specifies a normal mandatory argument, and the “v” specifies one of these verbatim arguments. As you see, it’s implanting a \verb-style command argument in the argument sequence of an otherwise “normal” sort of command; that ‘|’ may be any old character that doesn’t conflict with the content of the argument.
This is pretty neat (even if the verbatim is in an argument of its own) but the downside is that xparse pulls in the experimental LaTeX3 programming environment (l3kernel) which is pretty big.
Other than the cprotect package, there are three partial solutions to the problem:
• Some packages have macros which are designed to be responsive to verbatim text in their arguments. For example, the fancyvrb package defines a command \VerbatimFootnotes, which redefines the \footnotetext command, and hence also the behaviour of the \footnote command, in such a way that you can include \verb commands in its argument. This approach could in principle be extended to the arguments of other commands, but it can clash with other packages: for example, \VerbatimFootnotes interacts poorly with the para option of the footmisc package.
The memoir class defines its \footnote command so that it will accept verbatim in its arguments, without any supporting package.
• The fancyvrb package defines a command \SaveVerb, with a corresponding \UseVerb command, that allow you to save and then to reuse the content of its argument; for details of this extremely powerful facility, see the package documentation.
Rather simpler is the verbdef package, whose \verbdef command defines a (robust) command which expands to the verbatim argument given.
• If you have a single character that is giving trouble (in its absence you could simply use \texttt), consider using \string. \texttt{my\string_name} typesets the same as \verb+my_name+, and will work in the argument of a command. It won’t, however, work in a moving argument, and no amount of \protection will make it work in such a case.
A robust alternative is:
\chardef\us=`\_
...
\section{... \texttt{my\us name}}
Such a definition is ‘naturally’ robust; the construction “<back-tick>\<char>” may be used for any troublesome character (though it’s plainly not necessary for things like percent signs for which (La)TeX already provides robust macros).
cprotect.sty
macros/latex/contrib/cprotect (or browse the directory); catalogue entry
fancyvrb.sty
macros/latex/contrib/fancyvrb (or browse the directory); catalogue entry
l3kernel bundle
macros/latex/contrib/l3kernel (or browse the directory); catalogue entry
memoir.cls
macros/latex/contrib/memoir (or browse the directory); catalogue entry
url.sty
macros/latex/contrib/url (or browse the directory); catalogue entry
verbdef.sty
macros/latex/contrib/verbdef (or browse the directory); catalogue entry
xparse.sty
Distributed as part of macros/latex/contrib/l3packages (or browse the directory); catalogue entry
# Help with a logical derivation of set theoretical statement
1. ### julypraise
1. The problem statement, all variables and given/known data
This is actually from the proof of Dedekind's cut in Rudin's Principles of Mathematical analysis on the page 19. It says when $$\alpha\in\mathbb{R}$$ ($$\alpha$$ is a cut) is fixed, $$\beta$$ is the set of all $$p$$ with the following property:
There exists $$r>0$$ such that $$-p-r\notin\alpha$$.
From the given above, I need to derive that
if $$q\in\alpha$$, then $$q\notin\beta$$.
But I cannot reach this statement; my attempt is shown below.
2. Relevant equations
3. The attempt at a solution
The draft I have done so far is as follows: defining $$\beta$$ such that
$$\beta=\left\{p|\exists r\in\mathbb{Q} (r>0 \wedge -p-r \notin \alpha)\right\}$$,
I derived
$$p \notin \beta \leftrightarrow \forall r \in \mathbb{Q} (r>0 \to -p-r\in\alpha)$$.
And I'm stuck here. From the last statement, I cannot derive the conclusion I was meant to derive. Any help would be appreciated.
## Can this be simplified any further? (a^2(a^2-b^2))/(b^2(a-b))
Hannah
### Can this be simplified any further? (a^2(a^2-b^2))/(b^2(a-b))
In my math class right now, we're learning about multiplying rationals, which is basically multiplying fractions with variables in them. I got this as an answer to a question, but I'm not sure if it's simplified all the way:
a^2(a^2 - b^2) / [b^2(a - b)]
We don't need to distribute the a^2 or b^2. I'm just wondering if you could divide the (a^2 - b^2) by the (a - b)? Is that possible? Is it allowed? And would the quotient be (a+b)? One of the answers on the answer sheet (it's one of those corny Punchline joke worksheets) is:
a^2(a + b) / b^2
Would that be the simplified answer of my question above, if those two terms were to be divided?
Thanks!
Martingale
### Re: Can this be simplified any further?
$a^2-b^2=(a+b)(a-b)$
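Spelling out the cancellation (valid as long as $a \neq b$):

$\frac{a^2(a^2-b^2)}{b^2(a-b)} = \frac{a^2(a+b)(a-b)}{b^2(a-b)} = \frac{a^2(a+b)}{b^2}$

which is exactly the answer-sheet form.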
# Introduction
This is an informal FAQ list for the r-sig-mixed-models mailing list.
The most commonly used functions for mixed modeling in R are
• linear mixed models: aov(), nlme::lme1, lme4::lmer; brms::brm
• generalized linear mixed models (GLMMs)
• frequentist: MASS::glmmPQL, lme4::glmer; glmmTMB
• Bayesian: MCMCglmm::MCMCglmm; brms::brm
• nonlinear mixed models: nlme::nlme, lme4::nlmer; brms::brm
• GNLMMs: brms::brm
Another quick-and-dirty way to search for mixed-model related packages on CRAN:
grep("l.?m[me][^t]",rownames(available.packages()),value=TRUE)
## [1] "blmeco" "buildmer" "cellVolumeDist" "climextRemes"
## [5] "elementR" "glmertree" "glmmboot" "glmmEP"
## [9] "glmmfields" "glmmLasso" "glmmML" "glmmsr"
## [13] "glmmTMB" "lamme" "lme4" "lmec"
## [17] "lmem.qtler" "lmeNB" "lmeNBBayes" "lmenssp"
## [21] "lmerTest" "lmeSplines" "lmeVarComp" "lmmen"
## [25] "lmmlasso" "lmmot" "lmmpar" "lmms"
## [29] "lrmest" "lsmeans" "mlmm.gwas" "mlmmm"
## [33] "mvglmmRank" "nlmeODE" "nlmeU" "sensors4plumes"
## [37] "tlmec" "vagalumeR"
## Other sources of help
• the mailing list is r-sig-mixed-models@r-project.org
• archives here
• or Google search with the tag site:https://stat.ethz.ch/pipermail/r-sig-mixed-models/
• The source code of this document is available on GitHub; the rendered (HTML) version lives on GitHub pages.
• Searching on StackOverflow with the [r] [mixed-models] tags, or on CrossValidated with the [mixed-model] tag may be helpful (these sites also have an [lme4] tag).
DISCLAIMERS:
• (G)LMMs are hard - harder than you may think based on what you may have learned in your second statistics class, which probably focused on picking the appropriate sums of squares terms and degrees of freedom for the numerator and denominator of an $$F$$ test. ‘Modern’ mixed model approaches, although more powerful (they can handle more complex designs, lack of balance, crossed random factors, some kinds of non-Normally distributed responses, etc.), also require a new set of conceptual tools. In order to use these tools you should have at least a general acquaintance with classical mixed-model experimental designs but you should also, probably, read something about modern mixed model approaches. Littell et al. (2006) and Pinheiro and Bates (2000) are two places to start, although Pinheiro and Bates is probably more useful if you want to use R. Other useful references include Gelman and Hill (2006) (focused on Bayesian methods) and Zuur et al. (2009b). If you are going to use generalized linear mixed models, you should understand generalized linear models (Dobson and Barnett (2008), Faraway (2006), and McCullagh and Nelder (1989) are standard references; the last is the canonical reference, but also the most challenging).
• All of the issues that arise with regular linear or generalized-linear modeling (e.g.: inadequacy of p-values alone for thorough statistical analysis; need to understand how models are parameterized; need to understand the principle of marginality and how interactions can be treated; dangers of overfitting, which are not mitigated by stepwise procedures; the non-existence of free lunches) also apply, and can apply more severely, to mixed models.
• When SAS (or Stata, or Genstat/AS-REML or …) and R differ in their answers, R may not be wrong. Both SAS and R may be 'right' but proceeding in a different way/answering different questions/using a different philosophical approach (or both may be wrong …)
• The advice in this FAQ comes with absolutely no warranty of any sort.
# Model definition
## Model specification
The following formula extensions for specifying random-effects structures in R are used by
• lme4
• nlme (nested effects only, although crossed effects can be specified with more work)
• glmmADMB and glmmTMB
MCMCglmm uses a different specification, inherited from AS-REML.
(Modified from Robin Jeffries, UCLA:)
- `(1|group)` : random group intercept
- `(x|group)` = `(1+x|group)` : random slope of x within group with correlated intercept
- `(0+x|group)` = `(-1+x|group)` : random slope of x within group: no variation in intercept
- `(1|group) + (0+x|group)` : uncorrelated random intercept and random slope within group
- `(1|site/block)` = `(1|site)+(1|site:block)` : intercept varying among sites and among blocks within sites (nested random effects)
- `site+(1|site:block)` : fixed effect of sites plus random variation in intercept among blocks within sites
- `(x|site/block)` = `(x|site)+(x|site:block)` = `(1 + x|site)+(1+x|site:block)` : slope and intercept varying among sites and among blocks within sites
- `(x1|site)+(x2|block)` : two different effects, varying at different levels
- `x*site+(x|site:block)` : fixed effect variation of slope and intercept varying among sites and random variation of slope and intercept among blocks within sites
- `(1|group1)+(1|group2)` : intercept varying among crossed random effects (e.g. site, year)
Or in a little more detail:
- $$\beta_0 + \beta_{1}X_{i} + e_{si}$$ : n/a (Not a mixed-effects model)
- $$(\beta_0 + b_{S,0s}) + \beta_{1}X_i + e_{si}$$ : `~ X + (1|Subject)`
- $$(\beta_0 + b_{S,0s}) + (\beta_{1} + b_{S,1s}) X_i + e_{si}$$ : `~ X + (1 + X|Subject)`
- $$(\beta_0 + b_{S,0s} + b_{I,0i}) + (\beta_{1} + b_{S,1s}) X_i + e_{si}$$ : `~ X + (1 + X|Subject) + (1|Item)`
- As above, but $$S_{0s}$$, $$S_{1s}$$ independent : `~ X + (1|Subject) + (0 + X|Subject) + (1|Item)`
- $$(\beta_0 + b_{S,0s} + b_{I,0i}) + \beta_{1}X_i + e_{si}$$ : `~ X + (1|Subject) + (1|Item)`
- $$(\beta_0 + b_{I,0i}) + (\beta_{1} + b_{S,1s})X_i + e_{si}$$ : `~ X + (0 + X|Subject) + (1|Item)`
Modified from: http://stats.stackexchange.com/questions/13166/rs-lmer-cheat-sheet?lq=1 (Livius)
## Should I treat factor xxx as fixed or random?
This is in general a far more difficult question than it seems on the surface. There are many competing philosophies and definitions. For example, from Gelman (2005):
Before discussing the technical issues, we briefly review what is meant by fixed and random effects. It turns out that different—in fact, incompatible—definitions are used in different contexts. [See also Kreft and de Leeuw (1998), Section 1.3.3, for a discussion of the multiplicity of definitions of fixed and random effects and coefficients, and Robinson (1998) for a historical overview.] Here we outline five definitions that we have seen: 1. Fixed effects are constant across individuals, and random effects vary. For example, in a growth study, a model with random intercepts αi and fixed slope β corresponds to parallel lines for different individuals i, or the model yit = αi + βt. Kreft and de Leeuw [(1998), page 12] thus distinguish between fixed and random coefficients. 2. Effects are fixed if they are interesting in themselves or random if there is interest in the underlying population. Searle, Casella and McCulloch [(1992), Section 1.4] explore this distinction in depth. 3. “When a sample exhausts the population, the corresponding variable is fixed; when the sample is a small (i.e., negligible) part of the population the corresponding variable is random” [Green and Tukey (1960)]. 4. “If an effect is assumed to be a realized value of a random variable, it is called a random effect” [LaMotte (1983)]. 5. Fixed effects are estimated using least squares (or, more generally, maximum likelihood) and random effects are estimated with shrinkage [“linear unbiased prediction” in the terminology of Robinson (1991)]. This definition is standard in the multilevel modeling literature [see, e.g., Snijders and Bosker (1999), Section 4.2] and in econometrics.
Another useful comment (via Kevin Wright) reinforcing the idea that “random vs. fixed” is not a simple, cut-and-dried decision: from Schabenberger and Pierce (2001), p. 627:
Before proceeding further with random field linear models we need to remind the reader of the adage that one modeler’s random effect is another modeler’s fixed effect.
Clark and Linzer (2015) address this question from a mostly econometric perspective, focusing mostly on practical variance/bias/RMSE criteria.
One point of particular relevance to ‘modern’ mixed model estimation (rather than ‘classical’ method-of-moments estimation) is that, for practical purposes, there must be a reasonable number of random-effects levels (e.g. blocks) – more than 5 or 6 at a minimum. This is not surprising if you consider that random effects estimation is trying to estimate an among-block variance. For example, from Crawley (2002) p. 670:
Are there enough levels of the factor in the data on which to base an estimate of the variance of the population of effects? No, means [you should probably treat the variable as] fixed effects.
Some researchers (who treat fixed vs random as a philosophical rather than a pragmatic decision) object to this approach.
Also see a very thoughtful chapter in Hodges (2016).
Treating factors with small numbers of levels as random will in the best case lead to very small and/or imprecise estimates of random effects; in the worst case it will lead to various numerical difficulties such as lack of convergence, zero variance estimates, etc.. (A small simulation exercise shows that at least the estimates of the standard deviation are downwardly biased in this case; it’s not clear whether/how this bias would affect the point estimates of fixed effects or their estimated confidence intervals.) In the classical method-of-moments approach these problems may not arise (because the sums of squares are always well defined as long as there are at least two units), but the underlying problems of lack of power are there nevertheless.
Also see this thread on the r-sig-mixed-models mailing list.
## Nested or crossed?
• Relatively few mixed effect modeling packages can handle crossed random effects, i.e. those where one level of a random effect can appear in conjunction with more than one level of another effect. (This definition is confusing, and I would happily accept a better one.) A classic example is crossed temporal and spatial effects. If there is random variation among temporal blocks (e.g. years) ‘’and’’ random variation among spatial blocks (e.g. sites), ‘’and’’ if there is a consistent year effect across sites and ‘’vice versa’’, then the random effects should be treated as crossed.
• lme4 does handle crossed effects, efficiently; if you need to deal with crossed REs in conjunction with some of the features that nlme offers (e.g. heteroscedasticity of residuals via weights/varStruct, correlation of residuals via correlation/corStruct), see p. 163ff of Pinheiro and Bates (2000) (section 4.2.2: Google books link)
• I rarely find it useful to think of fixed effects as “nested” (although others disagree); if for example treatments A and B are only measured in block 1, and treatments C and D are only measured in block 2, one still assumes (because they are fixed effects) that each treatment would have the same effect if applied in the other block. (One might like to estimate treatment-by-block interactions, but in this case the experimental design doesn’t allow it; one would have to have multiple treatments measured within each block, although not necessarily all treatments in every block.) One would code this analysis as response~treatment+(1|block) in lme4. Also, in the case of fixed effects, crossed and nested specifications change the parameterization of the model, but not anything else (e.g. the number of parameters estimated, log-likelihood, model predictions are all identical). That is, in R’s model.matrix function (which implements a version of Wilkinson-Rogers notation) a*b and a/b (which expand to 1+a+b+a:b and 1+a+a:b respectively) give model matrices with the same number of columns.
• Whether you explicitly specify a random effect as nested or not depends (in part) on the way the levels of the random effects are coded. If the ‘lower-level’ random effect is coded with unique levels, then the two syntaxes (1|a/b) (or (1|a)+(1|a:b)) and (1|a)+(1|b) are equivalent. If the lower-level random effect has the same labels within each larger group (e.g. blocks 1, 2, 3, 4 within sites A, B, and C) then the explicit nesting (1|a/b) is required. It seems to be considered best practice to code the nested level uniquely (e.g. A1, A2, …, B1, B2, …) so that confusion between nested and crossed effects is less likely.
# Model extensions
## Overdispersion
### Testing for overdispersion/computing overdispersion factor
• with the usual caveats, plus a few extras – counting degrees of freedom, etc. – the usual procedure of calculating the sum of squared Pearson residuals and comparing it to the residual degrees of freedom should give at least a crude idea of overdispersion. The following attempt counts each variance or covariance parameter as one model degree of freedom and presents the sum of squared Pearson residuals, the ratio of (SSQ residuals/rdf), the residual df, and the $$p$$-value based on the (approximately!!) appropriate $$\chi^2$$ distribution. Do PLEASE note the usual, and extra, caveats noted here: this is an APPROXIMATE estimate of an overdispersion parameter. Even in the GLM case, the expected deviance per point equaling 1 is only true as the distribution of individual deviates approaches normality, i.e. the usual $$\lambda>5$$ rules of thumb for Poisson values and $$\textrm{min}(Np, N(1-p)) > 5$$ for binomial values (e.g. see Venables and Ripley (2002), p. 209). (And that’s without the extra complexities due to GLMM, i.e. the “effective” residual df should be large enough to make the sums of squares converge on a $$\chi^2$$ distribution …)
• Remember that (1) overdispersion is irrelevant for models that estimate a scale parameter (i.e. almost anything but Poisson or binomial: Gaussian, Gamma, negative binomial …) and (2) overdispersion is not estimable (and hence practically irrelevant) for Bernoulli models (= binary data = binomial with $$N=1$$).
The following function should work for a variety of model types (at least glmmADMB, glmmTMB, lme4, …).
overdisp_fun <- function(model) {
    rdf <- df.residual(model)                  # residual degrees of freedom
    rp <- residuals(model, type="pearson")     # Pearson residuals
    Pearson.chisq <- sum(rp^2)                 # sum of squared Pearson residuals
    prat <- Pearson.chisq/rdf                  # overdispersion ratio
    pval <- pchisq(Pearson.chisq, df=rdf, lower.tail=FALSE)  # approximate chi-squared p-value
    c(chisq=Pearson.chisq, ratio=prat, rdf=rdf, p=pval)
}
Example:
library(lme4)
set.seed(101)
d <- data.frame(y=rpois(1000,lambda=3),x=runif(1000),
f=factor(sample(1:10,size=1000,replace=TRUE)))
m1 <- glmer(y~x+(1|f),data=d,family=poisson)
overdisp_fun(m1)
## chisq ratio rdf p
## 1026.7791432 1.0298687 997.0000000 0.2497584
library(glmmADMB) ## 0.7.7
m2 <- glmmadmb(y~x+(1|f),data=d,family="poisson")  ## same model fitted with glmmADMB
overdisp_fun(m2)
## chisq ratio rdf p
## 1026.7584913 1.0298480 997.0000000 0.2499025
The gof function in the aods3 package provides similar functionality (it reports both deviance- and $\chi^2$-based estimates of overdispersion and tests).
### Fitting models with overdispersion?
• quasilikelihood estimation: MASS::glmmPQL. Quasi- was deemed unreliable in lme4, and is no longer available. (Part of the problem was questionable numerical results in some cases; the other problem was that DB felt that he did not have a sufficiently good understanding of the theoretical framework that would explain what the algorithm was actually estimating in this case.) geepack::geeglm may be workable (haven't tried it)
If you really want quasi-likelihood analysis for glmer fits, you can do it yourself by adjusting the coefficient table - i.e., by multiplying the standard error by the square root of the dispersion factor2 and recomputing the $$Z$$- and $$p$$-values accordingly, as follows:
## extract summary table; you may also be able to do this via
## broom::tidy or broom.mixed::tidy
cc <- coef(summary(m1))
phi <- overdisp_fun(m1)["ratio"]
cc <- within(as.data.frame(cc),
{   `Std. Error` <- `Std. Error`*sqrt(phi)
    `z value` <- Estimate/`Std. Error`
    `Pr(>|z|)` <- 2*pnorm(abs(`z value`), lower.tail=FALSE)
})
printCoefmat(cc,digits=3)
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) 1.0785 0.0384 28.10 <2e-16 ***
## x 0.0222 0.0650 0.34 0.73
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
(In this case it doesn’t make much difference, since the data we simulated in the first place were Poisson.) Keep in mind that once you switch to quasi-likelihood you will either have to eschew inferential methods such as the likelihood ratio test, profile confidence intervals, AIC, etc., or make more heroic assumptions to compute “quasi-” analogs of all of the above (such as QAIC).
• observation-level random effects (OLRE: this approach should work in most packages). If you want to a citation for this approach, try Elston et al. (2001), who cite Lawson et al. (1999); apparently there is also an example in section 10.5 of Maindonald and Braun (2010), and (according to an R-sig-mixed-models post) this is also discussed by Rabe-Hesketh and Skrondal (2008). Also see Browne et al. (2005) for an example in the binomial context (i.e. logit-normal-binomial rather than lognormal-Poisson). Agresti’s excellent (2002) book Agresti (2002) also discusses this (section 13.5), referring back to Breslow (1984) and Hinde (1982). [Notes: (a) I haven’t checked all these references myself, (b) I can’t find the reference any more, but I have seen it stated that observation-level random effect estimation is probably dodgy for PQL approaches as used in Elston et al 2001]
• alternative distributions
• Poisson-lognormal model for counts or binomial-logit-Normal model for proportions (see above, “observation-level random effects”)
• negative binomial for counts or beta-binomial for proportions
• lme4::glmer.nb() should fit a negative binomial, although it is somewhat slow and fragile compared to some of the other methods suggested here. lme4 cannot fit beta-binomial models (these cannot be formulated as a part of the exponential family of distributions)
• glmmADMB and glmmTMB (both on GitHub) will fit two parameterizations of the negative binomial: family="nbinom" (or family="nbinom2" in glmmTMB) gives the classic parameterization with $$\sigma^2=\mu(1+\mu/k)$$ (“NB2” in Hardin and Hilbe’s terminology) while family="nbinom1" gives a parameterization with $$\sigma^2=\phi \mu$$, $$\phi>1$$ (“NB1” to Hardin and Hilbe). The latter might also be called a “quasi-Poisson” parameterization because it matches the mean-variance relationship assumed by quasi-Poisson models, i.e. the variance is strictly proportional to the mean (although the proportionality constant must be >1, a limitation that does not apply to quasi-likelihood approaches).
• glmmTMB allows beta-binomial models ((Harrison 2015) suggests comparing beta-binomial with OLRE models to assess reliability)
• the brms package has a negbinomial family (no beta-binomial, but it does have a wide range of other families)
• other packages/approaches (less widely used, or requiring a bit more effort)
• gamlss.mx:gamlssNP
• WinBUGS/JAGS (via R2WinBUGS/Rjags)
• AD Model Builder (possibly via R2admb package) or TMB
• gnlmm in the repeated package (off-CRAN)
• ASREML
Negative binomial models in glmmTMB and lognormal-Poisson models in glmer (or MCMCglmm) are probably the best quick alternatives for overdispersed count data. If you need to explore alternatives (different variance-mean relationships, different distributions), then ADMB, TMB, WinBUGS, Stan, NIMBLE are the most flexible alternatives.
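As a minimal sketch of these two quick alternatives, reusing the simulated data d and the Poisson example above (the package choices here are just one option among those listed; the variable obs is introduced only to build the observation-level random effect):
library(glmmTMB)
m_nb <- glmmTMB(y ~ x + (1|f), family = nbinom2(), data = d)   ## NB2 fit

d$obs <- factor(seq_len(nrow(d)))              ## one factor level per observation
m_olre <- glmer(y ~ x + (1|f) + (1|obs),       ## lognormal-Poisson via OLRE
                family = poisson, data = d)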
### Underdispersion
Underdispersion (much less variability than expected) is a less common problem than overdispersion.
• mild underdispersion is sometimes ignored, since it tends in general to lead to conservative rather than anti-conservative results
• quasi-likelihood (and the quasi-hack listed above) can handle under- as well as overdispersion
• some other solutions exist, but are less widely implemented
• for distributions with a small range (e.g. litter sizes of large mammals), one can treat responses as ordinal (e.g. using the ordinal package, or MCMCglmm or brms for Bayesian solutions)
• the COM-Poisson distribution and generalized Poisson distributions, implemented in glmmTMB, can handle underdispersion (J. Hilbe recommends the latter in this CrossValidated answer). (VGAM has a generalized Poisson distribution, but doesn’t handle random effects.)
## Gamma GLMMs
While one (well, OK I) would naively think that GLMMs with Gamma distributions would be just as easy (or hard) as any other sort of GLMMs, it seems that they are in fact harder to implement. Basic simulated examples of Gamma GLMMs can fail in lme4 even when analogous Poisson, binomial, etc. examples work. Solutions:
• the default inverse link seems particularly problematic; try other links (especially family=Gamma(link="log")) if that is possible/makes sense
• consider whether a lognormal model (i.e. a regular LMM on logged data) would work/make sense (Lo and Andrews (2015) argue that the Gamma family with an identity link is superior to lognormal models for reaction-time data)
Gamma models can be fitted by a wide variety of platforms (lme4::glmer, MASS::glmmPQL, glmmADMB, glmmTMB, MixedModels.jl, MCMCglmm, brms … not sure about others).
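A minimal simulated sketch of a Gamma GLMM with a log link in lme4 (the data-generating values here are arbitrary assumptions, chosen only to make the fit well behaved):
library(lme4)
set.seed(101)
dg <- data.frame(x = runif(200), g = factor(rep(1:20, each = 10)))
re <- rnorm(20, sd = 0.3)                    ## group-level random effects
mu <- exp(1 + 0.5*dg$x + re[dg$g])           ## mean on the response scale (log link)
dg$y <- rgamma(200, shape = 2, scale = mu/2) ## Gamma with mean mu, shape 2
gm1 <- glmer(y ~ x + (1|g), family = Gamma(link = "log"), data = dg)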
## Beta GLMMs
Proportion data where the denominator (e.g. maximum possible number of successes for a given observation) is not known can be modeled using a Beta distribution. Smithson and Verkuilen (2006) is a good introduction for non-statisticians (not in the mixed-model case), and the betareg package (Cribari-Neto and Zeileis 2009) handles non-mixed Beta regressions. The glmmTMB and brms packages handle Beta mixed models (brms also handles zero-inflated and zero-one inflated models).
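A minimal sketch of a mixed Beta regression in glmmTMB, assuming a hypothetical data frame mydata with a proportion prop strictly between 0 and 1, a treatment covariate, and a site grouping factor:
library(glmmTMB)
bm1 <- glmmTMB(prop ~ treatment + (1|site),
               family = beta_family(link = "logit"),
               data = mydata)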
## Zero-inflation
See e.g. Martin et al. (2005) or Warton (2005) (“many zeros does not mean zero inflation”) or Zuur et al. (2009a) for general information on zero-inflation.
### Count data
• MCMCglmm handles zero-truncated, zero-inflated, and zero-altered models, although specifying the models is a little bit tricky: see Sections 5.3 to 5.5 of the CourseNotes vignette
• glmmADMB handles
• zero-inflated models (with a single zero-inflation parameter – i.e., the level of zero-inflation is assumed constant across the whole data set)
• truncated Poisson and negative binomial distributions (which allows two-stage fitting of hurdle models)
• glmmTMB handles a variety of Z-I and Z-T models (allows covariates, and random effects, in the zero-alteration model); see the sketch after this list
• brms does too
• so does GLMMadaptive
• Gavin Simpson has a detailed writeup showing that mgcv::gam() can do simple mixed models (Poisson, not NB) with zero-inflation, and comparing mgcv with glmmTMB results
• gamlssNP in the gamlss.mx package should handle zero-inflation, and the gamlss.tr package should handle truncated (i.e. hurdle) models – but I haven’t tried them
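A minimal sketch of the glmmTMB zero-inflation interface mentioned above, assuming a hypothetical data frame mydata with a count response and covariates x and habitat measured at sites:
library(glmmTMB)
## constant zero-inflation probability across the data set
zi1 <- glmmTMB(count ~ x + (1|site), ziformula = ~1,
               family = poisson, data = mydata)
## zero-inflation probability depending on a covariate
zi2 <- glmmTMB(count ~ x + (1|site), ziformula = ~habitat,
               family = nbinom2(), data = mydata)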
### Continuous data
Continuous data are a special case where the mixture model for zero-inflated data is less relevant, because observations that are exactly zero occur with probability (but not probability density) zero. There are two cases of interest:
#### Probability density of $$x$$ zero or infinite
In this case zero is a problematic observation for the distribution; it’s either impossible or infinitely (locally) likely. Some examples:
• Gamma distribution: probability density at zero is infinite (if shape<1) or zero (if shape>1); it’s finite and positive only for an exponential distribution (shape==1)
• Lognormal distribution: the probability density at zero is zero.
• Beta distribution: the probability densities at 0 and 1 are zero (if the corresponding shape parameter is >1) or infinite (if shape<1)
The best solution depends very much on the data-generating mechanism.
• If the bad (0/1) values are generated by rounding (e.g. proportions that are too close to the boundaries are reported as being on the boundaries), the simplest solution is to “squeeze” these in slightly, e.g. $$y \to (y + a)/(1 + 2a)$$ for some sensible value of $$a$$ (Smithson and Verkuilen 2006)
• If you think that zero values are generated by a separate process, the simplest solution is to fit a Bernoulli model to the zero/non-zero data, then a conditional continuous model for the non-zero values; this is effectively a hurdle model.
• you might have censored data where all values below a certain limit (e.g. a detection limit) are recorded as zero; in this case you might be able to use survreg() and frailty() in the survival package for random-intercept models (as suggested on r-help by Thomas Lumley in 2003, or on StackOverflow by user 42- in 2014). The lmec package handles linear mixed models with censored responses.
• The cplm package handles ‘Tweedie compound Poisson linear models’, which in a particular range of parameters allows for skewed continuous responses with a spike at zero
#### Probability density of $$x$$ positive and finite
In this case (e.g. a spike of zeros in the center of an otherwise continuous distribution), the hurdle model probably makes the most sense.
### Tests for zero-inflation
• you can use a likelihood ratio test between the regular and zero-inflated version of the model, but be aware of boundary issues (search “boundary” elsewhere on this page …) – the null value (no zero inflation) is on the boundary of the feasible space
• you can use AIC or variations, with the same caveats
• you can use Vuong’s test, which is often recommended for testing zero-inflation in GLMs, because under some circumstances the various model flavors under consideration (hurdle vs zero-inflated vs “vanilla”) are not nested. Vuong’s test is implemented (and referenced) in the pscl package, and should be feasible to implement for GLMMs, but I don’t know of an implementation. Someone should let me (BMB) know if they find one.
• two untested but reasonable approaches:
• use a simulate() method if it exists to construct a simulated distribution of the proportion of zeros expected overall from your model, and compare it to the observed proportion of zeros in the data set (see the sketch after this list)
• compute the probability of a zero for each observation. On the basis of (conditionally) independent Bernoulli trials, compute the expected number of zeros and the confidence intervals – compare it with the observed number.
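A rough sketch of the simulate()-based check mentioned above, assuming a hypothetical fitted count model fit (e.g. a merMod or glmmTMB object) and its observed response vector y:
sims <- simulate(fit, nsim = 1000)               ## one column per simulated data set
zero_prop <- sapply(sims, function(s) mean(s == 0))
obs_zero <- mean(y == 0)                         ## observed proportion of zeros
quantile(zero_prop, c(0.025, 0.975))             ## simulated range of zero proportions
obs_zero
The DHARMa package automates simulation-based checks of this general kind, including a zero-inflation test.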
## Spatial and temporal correlation models, heteroscedasticity (“R-side” models)
In nlme these so-called R-side (R for “residual”) structures are accessible via the weights/VarStruct (heteroscedasticity) and correlation/corStruct (spatial or temporal correlation) arguments and data structures. This extension is a bit harder than it might seem. In LMMs it is a natural extension to allow the residual error terms to be components of a single multivariate normal draw; if that MVN distribution is uncorrelated and homoscedastic (i.e. proportional to an identity matrix) we get the classic model, but we can in principle allow it to be correlated and/or heteroscedastic.
It is not too hard to define marginal correlation structures that don’t make sense. One class of reasonably sensible models is to always assume an observation-level random effect (as MCMCglmm does for computational reasons) and to allow that random effect to be MVN on the link scale (so that the full model is lognormal-Poisson, logit-normal binomial, etc., depending on the link function and family).
For example, a relatively simple Poisson model with spatially correlated errors might look like this:
$\begin{split} \eta & \sim \textrm{MVN}(a + b x, \Sigma) \\ \Sigma_{ij} & = \sigma^2 \exp(-d_{ij}/s) \\ y_i & \sim \textrm{Poisson}(\lambda=\exp(\eta_i)) \end{split}$
That is, the marginal distributions of the response values are Poisson-lognormal, but on the link (log) scale the latent Normal variables underlying the response are multivariate normal, with a variance-covariance matrix described by an exponential spatial correlation function with scale parameter $$s$$.
How can one achieve this?
• These types of models are not implemented in lme4, for either LMMs or GLMMs; they are fairly low priority, and it is hard to see how they could be implemented for GLMMs (the equivalent for LMMs is tedious but should be straightforward to implement).
• For LMMs, you can use the spatial/temporal correlation structures that are built into (n)lme, which include basic geostatistical (space) and ARMA-type (time) models (see the sketch at the end of this list).
library(sos)
findFn("corStruct")
finds additional possibilities in the ramps (extended geostatistical) and ape (phylogenetic) packages.
• You can use these structures in GLMMs via MASS::glmmPQL (see Dormann et al.)
• geepack::geeglm
• geoR, geoRglm (power tools); these are mostly designed for fitting spatial random field GLMMs via MCMC – not sure that they do random effects other than the spatial random effect
• R-INLA (super-power tool)
• it is possible to use AD Model Builder to fit spatial GLMMs, as shown in these AD Model Builder examples; this capability is not in the glmmADMB package (and may not be for a while!), but it would be possible to run AD Model Builder via the R2admb package (requires installing, and learning, ADMB)
• geoBUGS, the geostatistical/spatial correlation module for WinBUGS, is another alternative (but again requires going outside of R)
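As an illustration of the nlme correlation-structure approach listed above, here is a minimal sketch of an LMM with exponentially decaying spatial correlation (the data frame mydata, the coordinates xcoord/ycoord, and the grouping factor site are hypothetical):
library(nlme)
fit_sp <- lme(y ~ x, random = ~ 1 | site,
              correlation = corExp(form = ~ xcoord + ycoord | site),
              data = mydata)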
## Penalization/handling complete separation
Complete separation occurs in a binary-response model when there is some linear combination of the parameters that perfectly separates failures from successes - for example, when all of the observations are zero for some particular combination of categories. The symptoms of this problem are unrealistically large parameter estimates; ridiculously large Wald standard errors (the Hauck-Donner effect); and various warnings.
In particular, binomial glmer() models with complete separation can lead to “Downdated VtV is not positive definite” (e.g. see here) or “PIRLS step-halvings failed to reduce deviance in pwrssUpdate” errors (e.g. see here). Roughly speaking, the complete separation is likely to appear even if one considers only the fixed effects part of the model (counterarguments or counterexamples welcome!), suggesting two quick-and-dirty diagnostic methods. If fixed_form is the formula including only the fixed effects:
• summary(g1 <- glm(fixed_form, family=binomial, data=...)) will show one or more of the following symptoms:
• warnings that glm.fit: fitted probabilities numerically 0 or 1 occurred
• parameter estimates of large magnitude (e.g. any(abs(coef(g1))>8), assuming that predictors are either categorical or scaled to have standard deviations of $$\approx 1$$)
• extremely large Wald standard errors, and large p-values (Hauck-Donner effect)
• the brglm2 package has a method for detecting complete separation: library("brglm2"); glm(fixed_form, data = ..., family = binomial, method="detect_separation"). This should say whether complete separation occurs, and in which (combinations of) variables, e.g.
Separation: TRUE
Existence of maximum likelihood estimates
(Intercept) height
Inf Inf
0: finite value, Inf: infinity, -Inf: -infinity
If complete separation is occurring between categories of a single categorical fixed-effect predictor with a large number of levels, one option would be to treat this fixed effect as a random effect, which will allow some degree of shrinkage to the mean. (It might be reasonable to specify the variance of this term a priori to a large value [minimal shrinkage], rather than trying to estimate it from the data.)
(TODO: worked example)
The general approach to handling complete separation in logistic regression is called penalized regression; it’s available in the brglm, brglm2, logistf, and rms packages. However, these packages don’t handle mixed models, so the best available general approach is to use a Bayesian method that allows you to set a prior on the fixed effects, e.g. a Gaussian with standard deviation of 3; this can be done in any of the Bayesian GLMM packages (e.g. blme, MCMCglmm, brms, …) (See supplementary material for Fox et al. 2016 for a worked example.)
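A minimal sketch of the Bayesian-prior approach, here via brms with a Gaussian(0, 3) prior on the fixed-effect coefficients (blme or MCMCglmm could be used instead; the model and data names are hypothetical):
library(brms)
bfit <- brm(response ~ treatment + (1|block),
            family = bernoulli(),
            prior = set_prior("normal(0, 3)", class = "b"),
            data = mydata)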
## Non-Gaussian random effects
I’m not aware of easy ways to fit mixed models with non-Gaussian random effects distributions in R (i.e., convenient, flexible, well-tested implementations). McCulloch and Neuhaus (2011) discusses when this misspecification may be important. This presentation discusses various approaches to solving the problem (e.g. using a Gamma rather than a Normal distribution of REs in log-link models). The spaMM package implements H-likelihood models (Lee, Nelder, and Pawitan 2017), and claims to allow a range of random-effects distributions (perhaps not well tested though …)
In principle you can implement any random-effects distribution you want in a fully capable Bayesian modeling language (e.g. JAGS/Stan/PyMC/etc.); see e.g. this StackOverflow answer, which uses the rethinking package’s interface to Stan.
# Estimation
## What methods are available to fit (estimate) GLMMs?
(adapted from Bolker et al TREE 2009)
| Method | Advantages | Disadvantages | Software |
| --- | --- | --- | --- |
| Penalized quasi-likelihood | Flexible, widely implemented | Likelihood inference may be inappropriate; biased for large variance or small means | PROC GLIMMIX (SAS), GLMM (GenStat), glmmPQL (R:MASS), ASREML-R |
| Laplace approximation | More accurate than PQL | Slower and less flexible than PQL | glmer (R:lme4, lme4a), glmm.admb (R:glmmADMB), INLA, glmmTMB, AD Model Builder, HLM |
| Gauss-Hermite quadrature | More accurate than Laplace | Slower than Laplace; limited to 2-3 random effects | PROC NLMIXED (SAS), glmer (R:lme4, lme4a), glmmML (R:glmmML), xtlogit (Stata) |
| Markov chain Monte Carlo | Highly flexible, arbitrary number of random effects; accurate | Slow, technically challenging, Bayesian framework | MCMCglmm (R:MCMCglmm), rstanarm (R), brms (R), MCMCpack (R), WinBUGS/OpenBUGS (R interface: BRugs/R2WinBUGS), JAGS (R interface: rjags/R2jags), AD Model Builder (R interface: R2admb), glmm.admb (post hoc MCMC after Laplace fit) (R:glmmADMB) |
## Troubleshooting
• double-check the model specification and the data for mistakes
• center and scale continuous predictor variables (e.g. with scale())
• try all available optimizers (e.g. several different implementations of BOBYQA and Nelder-Mead, L-BFGS-B from optim, nlminb(), …). While this will of course be slow for large fits, we consider it the gold standard; if all optimizers converge to values that are practically equivalent (it’s up to the user to decide what “practically equivalent” means for their case), then we would consider the model fit to be good enough. For example:
modelfit.all <- lme4::allFit(model)
ss <- summary(modelfit.all)
### Convergence warnings
Most of the current advice about troubleshooting lme4 convergence problems can be found in the help page ?convergence. That page explains that the convergence tests in the current version of lme4 (1.1-11, February 2016) generate lots of false positives. We are considering raising the gradient warning threshold to 0.01 in future releases of lme4. In addition to the general troubleshooting tips above:
• double-check the Hessian calculation with the more expensive Richardson extrapolation method (see examples)
• restart the fit from the apparent optimum, or from a point perturbed slightly away from the optimum (getME(model,c("theta","beta")) should retrieve the parameters in a form suitable to be used as the start parameter)
• a common error is to specify an offset to a log-link model as a raw searching-effort value, i.e. offset(effort) rather than offset(log(effort)). While the intention is to fit a model where $$\textrm{counts} \propto \textrm{effort}$$, specifying offset(effort) leads to a model where $$\textrm{counts} \propto \exp(\textrm{effort})$$ instead; exp(effort) is often a huge (and model-destabilizing) number.
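A minimal sketch of the correct offset specification (hypothetical variable names):
fit_off <- glmer(count ~ x + (1|site) + offset(log(effort)),
                 family = poisson, data = mydata)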
### Singular models: random effect variances estimated as zero, or correlations estimated as +/- 1
It is very common for overfitted mixed models to result in singular fits. Technically, singularity means that some of the $$\boldsymbol \theta$$ (variance-covariance Cholesky decomposition) parameters corresponding to diagonal elements of the Cholesky factor are exactly zero, which is the edge of the feasible space, or equivalently that the variance-covariance matrix has some zero eigenvalues (i.e. is positive semidefinite rather than positive definite), or (almost equivalently) that some of the variances are estimated as zero or some of the correlations are estimated as +/-1. This commonly occurs in two scenarios:
• small numbers of random-effect levels (e.g. <5), as illustrated in these simulations and discussed (in a somewhat different, Bayesian context) by Gelman (2006).
• complex random-effects models, e.g. models of the form (f|g) where f is a categorical variable with a relatively large number of levels, or models with several different random-slopes terms.
• When using lme4, singularity is most obviously detectable in the output of summary.merMod() or VarCorr.merMod() when a variance is estimated as 0 (or very small, i.e. orders of magnitude smaller than other variance components) or when a correlation is estimated as exactly $$\pm 1$$. However, as pointed out by D. Bates, Kliegl, et al. (2015), singularities in larger variance-covariance matrices can be hard to detect: checking for small values among the diagonal elements of the Cholesky factor is a good start.
theta <- getME(model,"theta")
## diagonal elements are identifiable because they are fitted
## with a lower bound of zero ...
diag.element <- getME(model,"lower")==0
any(theta[diag.element]<1e-5)
As of lme4 version 1.1-19, this functionality is available as isSingular(model).
• In MCMCglmm, singular or near-singular models will provoke an error and a requirement to specify a stronger prior.
At present there are a variety of strong opinions about how to resolve such problems. Briefly:
• Barr et al. (2013) suggest always starting with the maximal model (i.e. the most random-effects component of the model that is theoretically identifiable given the experimental design) and then dropping terms when singularity or non-convergence occurs (please see the paper for detailed recommendations …)
• Matuschek et al. (2017) and D. Bates, Kliegl, et al. (2015) strongly disagree, suggesting that models should be simplified a priori whenever possible; they also provide tools for diagnosing and mitigating singularity.
• One alternative (suggested by Robert LaBudde) for the small-numbers-of-levels scenario is to “fit the model with the random factor as a fixed effect, get the level coefficients in the sum to zero form, and then compute the standard deviation of the coefficients.” This is appropriate for users who are (a) primarily interested in measuring variation (i.e. the random effects are not just nuisance parameters, and the variability [rather than the estimated values for each level] is of scientific interest), (b) unable or unwilling to use other approaches (e.g. MCMC with half-Cauchy priors in WinBUGS), (c) unable or unwilling to collect more data. For the simplest case (balanced, orthogonal, nested designs with normal errors) these estimates of standard deviations should equal the classical method-of-moments estimates.
• Bayesian approaches allow the user to specify an informative prior that avoids singularity.
• The blme package (Chung et al. 2013) provides a wrapper for the lme4 machinery that adds a particular form of weak prior to get an approximate Bayesian maximum a posteriori estimate that avoids singularity.
• The MCMCglmm package allows for priors on the variance-covariance matrix
• The rstanarm and brms packages provide wrappers for the Stan Hamiltonian MCMC engine that fit GLMMs via lme4 syntax, again allowing a variety of priors to be set.
• If a variance component is zero, dropping it from the model will have no effect on any of the estimated quantities (although it will affect the AIC, as the variance parameter is counted even though it has no effect). Pasch, Bolker, and Phelps (2013) gives one example where random effects were dropped because the variance components were consistently estimated as zero. Conversely, if one chooses for philosophical grounds to retain these parameters, it won’t change any of the answers.
### Setting residual variances to a fixed value (zero or other)
For some problems it would be convenient to be able to set the residual variance term to zero, or a fixed value. This is difficult in lme4, because the model is parameterized internally in such a way that the residual variance is profiled out (i.e., calculated directly from a residual deviance term) and the random-effects variances are scaled by the residual variance.
Searching the r-sig-mixed-models list for “fix residual variance” turns up further discussion. Some options:
• This is done in the metafor package, for meta-analytic models
• You can use the blme package to fix the residual variance: from Vincent Dorie,
library(blme)
blmer(formula = y ~ 1 + (1 | group), weights = V,
resid.prior = point(1.0), cov.prior = NULL)
This sets the residual variance to 1.0. You cannot use this to make it exactly zero, but you can make it very small (and experiment with setting it to different small values, e.g. 0.001 vs 0.0001, to see how sensitive the results are).
• Similarly, you can fix the residual variance to a small positive value in [n]lme via the control() argument (Heisterkamp et al. 2017):
nlme::lme(Reaction~Days,random=~1|Subject,
data=lme4::sleepstudy,
control=list(sigma=1e-8))
• the glmmTMB package can set the residual variance to zero, by specifying dispformula = ~0
• There is an rrBlupMethod6 package on CRAN (“Re-parametrization of mixed model formulation to allow for a fixed residual variance when using RR-BLUP for genom[e]wide estimation of marker effects”), but it seems fairly special-purpose.
• it might be possible in principle to adapt lme4’s internal devfun2() function (used in the likelihood profiling computation for LMMs), which uses a specified value of the residual standard deviation in computing likelihood, but as D. Bates, Mächler, et al. (2015) say:
The resulting function is not useful for general nonlinear optimization — one can easily wander into parameter regimes corresponding to infeasible (non-positive semidefinite) variance-covariance matrices — but it serves for likelihood profiling, where one focal parameter is varied at a time and the optimization over the other parameters is likely to start close to an optimum.
### Other problems/lme4 error messages
Most of the following error messages are relatively unusual, and happen mostly with complex/large/unstable models. There is often no simple fix; the standard suggestions for troubleshooting are (1) try rescaling and/or centering predictors; (2) see if a simpler model can be made to work; (3) look for severe lack of balance and/or complete separation in the data set.
## REML for GLMMs
• While restricted maximum likelihood (REML) procedures (see Wikipedia) are well established for linear mixed models, it is less clear how one should define and compute the equivalent criteria (integrating out the effects of fixed parameters) for GLMMs. Millar (2011) and Berger, Liseo, and Wolpert (1999) are possible starting points in the peer-reviewed literature, and there are mailing-list discussions of these issues here and here.
• Attempting to use REML=TRUE with glmer will produce the warning extra argument(s) ‘REML’ disregarded
• glmmTMB allows REML=TRUE for GLMMs (it uses the Laplace approximation to integrate over the fixed effect parameters), since version 0.2.2
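A minimal sketch of a REML fit of a GLMM in glmmTMB (hypothetical data and variable names):
library(glmmTMB)
fit_reml <- glmmTMB(count ~ x + (1|site), family = poisson,
                    REML = TRUE, data = mydata)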
# Inference and confidence intervals
## Testing hypotheses
### What are the p-values listed by summary(glmerfit) etc.? Are they reliable?
By default, in keeping with the tradition in analysis of generalized linear models, lme4 and similar packages display the Wald Z-statistics for each parameter in the model summary. These have one big advantage: they’re convenient to compute. However, they are asymptotic approximations, assuming both that (1) the sampling distributions of the parameters are multivariate normal (or equivalently that the log-likelihood surface is quadratic) and that (2) the sampling distribution of the log-likelihood is (proportional to) $$\chi^2$$. The second approximation is discussed further under “Degrees of freedom”. The first assumption usually requires an even greater leap of faith, and is known to cause problems in some contexts (for binomial models failures of this assumption are called the Hauck-Donner effect), especially with extreme-valued parameters.
### Methods for testing single parameters
From worst to best:
• Wald $$Z$$-tests
• For balanced, nested LMMs where degrees of freedom can be computed according to classical rules: Wald $$t$$-tests
• Likelihood ratio test, either by setting up the model so that the parameter can be isolated/dropped (via anova or drop1), or via computing likelihood profiles
• Markov chain Monte Carlo (MCMC) or parametric bootstrap confidence intervals
### Tests of effects (i.e. testing that several parameters are simultaneously zero)
From worst to best:
• Wald chi-square tests (e.g. car::Anova)
• Likelihood ratio test (via anova or drop1)
• For balanced, nested LMMs where df can be computed: conditional F-tests
• For LMMs: conditional F-tests with df correction (e.g. Kenward-Roger in the pbkrtest package); see notes on K-R etc. below
• MCMC or parametric, or nonparametric, bootstrap comparisons (nonparametric bootstrapping must be implemented carefully to account for grouping factors)
### Is the likelihood ratio test reliable for mixed models?
• It depends.
• Not for fixed effects in finite-size cases (see Pinheiro and Bates (2000)): may depend on ‘denominator degrees of freedom’ (number of groups) and/or the total number of samples minus the total number of parameters
• Conditional F-tests are preferred for LMMs, if denominator degrees of freedom are known
### Why doesn’t lme4 display denominator degrees of freedom/p values? What other options do I have?
There is an R FAQ entry on this topic, which links to a mailing list post by Doug Bates (there is also a voluminous mailing list thread reproduced on the R wiki). The bottom line is
• For special cases that correspond to classical experimental designs (i.e. balanced designs that are nested, split-plot, randomized block, etc.) … we can show that the null distributions of particular ratios of sums of squares follow an $$F$$ distribution with known numerator and denominator degrees of freedom (and hence the sampling distributions of particular contrasts are t-distributed with known df). In more complicated situations (unbalanced, GLMMs, crossed random effects, models with temporal or spatial correlation, etc.) it is not in general clear that the null distribution of the computed ratio of sums of squares is really an F distribution, for any choice of denominator degrees of freedom.
• For each simple degrees-of-freedom recipe that has been suggested (trace of the hat matrix, etc.) there seems to be at least one fairly simple counterexample where the recipe fails badly (e.g. see this r-help thread from September 2006).
• When the responses are normally distributed and the design is balanced, nested etc. (i.e. the classical LMM situation), the scaled deviances and differences in deviances are exactly $$F$$-distributed and looking at the experimental design (i.e., which treatments vary/are replicated at which levels) tells us what the relevant degrees of freedom are (see “df alternatives” below)
• Two approaches to approximating df (Satterthwaite and Kenward-Roger) have been implemented in R, Satterthwaite in lmerTest and Kenward-Roger in pbkrtest (as KRmodcomp) (various packages such as lmerTest, emmeans, car, etc., import pbkrtest::get_Lb_ddf). See the sketch at the end of this list.
• K-R is probably the most reliable option (Schaalje, McBride, and Fellingham 2002), although it may be prohibitively computationally expensive for large data sets.
• K-R was derived for LMMs (and for REML?) in particular; it isn’t clear how it would apply to GLMMs. Stroup (2014) states (referencing Stroup (2013)) that K-R actually works reasonably well for GLMMs (K-R is not implemented in R for GLMMs; Stroup suggests that a pseudo-likelihood (Wolfinger and O’Connell 1993) approach is necessary in order to implement K-R for GLMMs):
Notice the non-integer values of the denominator df. They, and the $$F$$ and $$p$$ values, reflect the procedure developed by Kenward and Roger (2009) to account for the effect of the covariance structure on degrees of freedom and standard errors. Although the Kenward–Roger adjustment was derived for the LMM with normally distributed data and is an ad hoc procedure for GLMMs with non-normal data, informal simulation studies consistently have suggested that the adjustment is accurate. The Kenward-Roger adjustment requires that the SAS GLIMMIX default computing algorithm, pseudo-likelihood, be used rather than the Laplace algorithm used to obtain AICC statistics. Stroup (2013b) found that for binomial and Poisson GLMMs, pseudo-likelihood with the Kenward–Roger adjustment yields better Type I error control than Laplace while preserving the GLMM’s advantage with respect to power and accuracy in estimating treatment means.
• There are several different issues at play in finite-size (small-sample) adjustments, which apply slightly differently to LMMs and GLMMs.
• When the data don’t fit into the classical framework (crossed, unbalanced, R-side effects), we might still guess that the deviances etc. are approximately F-distributed but that we don’t know the real degrees of freedom – this is what the Satterthwaite, Kenward-Roger, Fai-Cornelius, etc. approximations are supposed to do.
• When the responses are not normally distributed (as in GLMs and GLMMs), and when the scale parameter is not estimated (as in standard Poisson- and binomial-response models), then the deviance differences are only asymptotically F- or chi-square-distributed (i.e. not for our real, finite-size samples). In standard GLM practice, we usually ignore this problem; there is some literature on finite-size corrections for GLMs under the rubrics of “Bartlett corrections” and “higher order asymptotics” (see McCullagh and Nelder (1989), Cordeiro, Paula, and Botter (1994), Cordeiro and Ferrari (1998) and the cond package (on CRAN) [which works with GLMs, not GLMMs]), but it’s rarely used. (The bias correction/Firth approach implemented in the brglm package attempts to address the problem of finite-size bias, not finite-size non-chi-squaredness of the deviance differences.)
• When the scale parameter in a GLM is estimated rather than fixed (as in Gamma or quasi-likelihood models), it is sometimes recommended to use an $$F$$ test to account for the uncertainty of the scale parameter (e.g. Venables and Ripley (2002) recommend anova(...,test="F") for quasi-likelihood models)
• Combining these issues, one has to look pretty hard for information on small-sample or finite-size corrections for GLMMs: Feng, Braun, and McCulloch (2004) and Bell and Grunwald (2010) look like good starting points, but it’s not at all trivial.
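A minimal sketch of the two df approximations mentioned above, using the sleepstudy data shipped with lme4:
library(lmerTest)  ## provides an lmer() wrapper whose summary()/anova() report df
fm <- lmer(Reaction ~ Days + (Days|Subject), data = sleepstudy)
anova(fm)                          ## Satterthwaite df (the default)
anova(fm, ddf = "Kenward-Roger")   ## Kenward-Roger df (uses pbkrtest internally)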
#### Df alternatives:
• use MASS::glmmPQL (uses old nlme rules approximately equivalent to SAS ‘inner-outer’/‘within-between’ rules) for GLMMs, or (n)lme for LMMs
• Guess the denominator df from standard rules (for standard designs, e.g. see Gotelli and Ellison (2004)) and apply them to $$t$$ or $$F$$ tests
• Run the model in lme (if possible) and use the denominator df reported there (which follow a simple ‘inner-outer’ rule which should correspond to the canonical answer for simple/orthogonal designs), applied to $$t$$ or $$F$$ tests. For the explicit specification of the rules that lme uses, see page 91 of Pinheiro and Bates (this page was previously available on Google Books, but the link is no longer useful, so here are the relevant paragraphs):
These conditional tests for fixed-effects terms require denominator degrees of freedom. In the case of the conditional $$F$$-tests, the numerator degrees of freedom are also required, being determined by the term itself. The denominator degrees of freedom are determined by the grouping level at which the term is estimated. A term is called inner relative to a factor if its value can change within a given level of the grouping factor. A term is outer to a grouping factor if its value does not change within levels of the grouping factor. A term is said to be estimated at level $$i$$, if it is inner to the $$i-1$$st grouping factor and outer to the $$i$$th grouping factor. For example, the term Machine in the fm2Machine model is outer to Machine %in% Worker and inner to Worker, so it is estimated at level 2 (Machine %in% Worker). If a term is inner to all $$Q$$ grouping factors in a model, it is estimated at the level of the within-group errors, which we denote as the $$Q+1$$st level.
The intercept, which is the parameter corresponding to the column of all 1’s in the model matrices $$X_i$$, is treated differently from all the other parameters, when it is present. As a parameter it is regarded as being estimated at level 0 because it is outer to all the grouping factors. However, its denominator degrees of freedom are calculated as if it were estimated at level $$Q+1$$. This is because the intercept is the one parameter that pools information from all the observations at a level even when the corresponding column in $$X_i$$ doesn’t change with the level.
Letting $$m_i$$ denote the total number of groups in level $$i$$ (with the convention that $$m_0=1$$ when the fixed effects model includes an intercept and 0 otherwise, and $$m_{Q+1}=N$$) and $$p_i$$ denote the sum of the degrees of freedom corresponding to the terms estimated at level $$i$$, the $$i$$th level denominator degrees of freedom is defined as
$\mathrm{denDF}_i = m_i - (m_{i-1} + p_i), \quad i = 1, \dots, Q+1$
This definition coincides with the classical decomposition of degrees of freedom in balanced, multilevel ANOVA designs and gives a reasonable approximation for more general mixed-effects models.
Note that the implementation used in lme gets the wrong answer for random-slopes models:
library(nlme)
lmeDF <- function(formula=distance~age,random=~1|Subject) {
mod <- lme(formula,random,data=Orthodont)
aa <- anova(mod)
return(setNames(aa[,"denDF"],rownames(aa)))
}
lmeDF()
## (Intercept) age
## 80 80
lmeDF(random=~age|Subject) ## wrong!
## (Intercept) age
## 80 80
I (BB) have re-implemented this algorithm in a way that does slightly better for random-slopes models (but may still get confused!), see here.
source("R/calcDenDF.R")
calcDenDF(~age,"Subject",nlme::Orthodont)
## (Intercept) age
## 80 80
calcDenDF(~age,data=nlme::Orthodont,random=~1|Subject)
## (Intercept) age
## 80 80
calcDenDF(~age,data=nlme::Orthodont,random=~age|Subject) ## off by 1
## (Intercept) age
## 81 25
• use SAS, Genstat (AS-REML), Stata?
• Assume infinite denominator df (i.e. $$Z$$/$$\chi^2$$ test rather than $$t$$/$$F$$) if the number of groups is large (>45?). Various rules of thumb for how large is “approximately infinite” have been posed, including 42 (in Angrist and Pischke 2009, in homage to Douglas Adams).
### Testing significance of random effects
• the most common way to do this is to use a likelihood ratio test, i.e. fit the full and reduced models (the reduced model is the model with the focal variance(s) set to zero). For example:
library(lme4)
m2 <- lmer(Reaction~Days+(1|Subject)+(0+Days|Subject),sleepstudy,REML=FALSE)
m1 <- update(m2,.~Days+(1|Subject))
m0 <- lm(Reaction~Days,sleepstudy)
anova(m2,m1,m0) ## two sequential tests
## Data: sleepstudy
## Models:
## m0: Reaction ~ Days
## m1: Reaction ~ Days + (1 | Subject)
## m2: Reaction ~ Days + (1 | Subject) + (0 + Days | Subject)
## Df AIC BIC logLik deviance Chisq Chi Df Pr(>Chisq)
## m0 3 1906.3 1915.9 -950.15 1900.3
## m1 4 1802.1 1814.8 -897.04 1794.1 106.214 1 < 2.2e-16 ***
## m2 5 1762.0 1778.0 -876.00 1752.0 42.075 1 8.782e-11 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
With recent versions of lme4, goodness-of-fit (deviance) can be compared between (g)lmer and (g)lm models, although anova() must be called with the mixed ((g)lmer) model listed first. Keep in mind that LRT-based null hypothesis tests are conservative when the null value (such as $$\sigma^2=0$$) is on the boundary of the feasible space (Self and Liang 1987; Stram and Lee 1994; Goldman and Whelan 2000); in the simplest case (single random effect variance), the p-value is approximately twice as large as it should be (Pinheiro and Bates 2000).
• Consider not testing the significance of random effects. If the random effect is part of the experimental design, this procedure may be considered ‘sacrificial pseudoreplication’ (Hurlbert 1984). Using stepwise approaches to eliminate non-significant terms in order to squeeze more significance out of the remaining terms is dangerous in any case.
• consider using the RLRsim package, which has a fast implementation of simulation-based tests of null hypotheses about zero variances, for simple tests. (However, it only applies to lmer models, and is a bit tricky to use for more complex models.)
library(RLRsim)
## compare m0 and m1
exactLRT(m1,m0)
##
## simulated finite sample distribution of LRT. (p-value based on 10000
## simulated values)
##
## data:
## LRT = 106.21, p-value < 2.2e-16
## compare m1 and m2
mA <- update(m2,REML=TRUE)
m0 <- update(mA, . ~ . - (0 + Days|Subject))
m.slope <- update(mA, . ~ . - (1|Subject))
exactRLRT(m0=m0,m=m.slope,mA=mA)
##
## simulated finite sample distribution of RLRT.
##
## (p-value based on 10000 simulated values)
##
## data:
## RLRT = 42.796, p-value < 2.2e-16
• Parametric bootstrap: fit the reduced model, then repeatedly simulate from it and compute the differences between the deviance of the reduced and the full model for each simulated data set. Compare this null distribution to the observed deviance difference. This procedure is implemented in the pbkrtest package (messages and warnings suppressed).
(pb <- pbkrtest::PBmodcomp(m2,m1,seed=101))
## Parametric bootstrap test; time: 31.52 sec; samples: 1000 extremes: 0;
## Requested samples: 1000 Used samples: 527 Extremes: 0
## large : Reaction ~ Days + (1 | Subject) + (0 + Days | Subject)
## small : Reaction ~ Days + (1 | Subject)
## stat df p.value
## LRT 42.075 1 8.782e-11 ***
## PBtest 42.075 0.001894 **
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
### Standard errors of variance estimates
• Paraphrasing Doug Bates: the sampling distribution of variance estimates is in general strongly asymmetric: the standard error may be a poor characterization of the uncertainty.
• lme4 allows for computing likelihood profiles of variances and computing confidence intervals on their basis; these likelihood profile confidence intervals are subject to the usual caveats about the LRT with finite sample sizes (see the sketch after this list).
• Using an MCMC-based approach (the simplest/most canned is probably to use the MCMCglmm package, although its mode specifications are not identical to those of lme4) will provide posterior distributions of the variance parameters: quantiles or credible intervals (HPDinterval() in the coda package) will characterize the uncertainty.
• (don’t say we didn’t warn you …) [n]lme fits contain an element called apVar which contains the approximate variance-covariance matrix (derived from the Hessian, the matrix of (numerically approximated) second derivatives of the likelihood (REML?) at the maximum (restricted?) likelihood values): you can derive the standard errors from this list element via sqrt(diag(lme.obj$apVar)). For whatever it’s worth, though, these estimates might not match the estimates that SAS gives which are supposedly derived in the same way.
• it’s not a full solution, but there is some more information here. I have some delta-method computations there that are off by a factor of 2 for the residual standard deviation, as well as some computations based on reparameterizing the deviance function.
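A minimal sketch of the likelihood-profile approach mentioned above, applied to the sleepstudy fit m2 from the random-effects testing example:
## profile CIs for the random-effect standard deviations (and residual SD);
## oldNames=FALSE gives more readable parameter labels
confint(m2, parm = "theta_", oldNames = FALSE)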
### P-values: MCMC and parametric bootstrap
Abandoning the approximate $$F$$/$$t$$-statistic route, one ends up with the more general problem of estimating $$p$$-values. There is a wider range of options here, although many of them are computationally intensive …
#### Markov chain Monte Carlo sampling:
• pseudo-Bayesian: post-hoc sampling, typically (1) assuming flat priors and (2) starting from the MLE, possibly using the approximate variance-covariance estimate to choose a candidate distribution
• via mcmcsamp (if available for your problem: i.e. LMMs with simple random effects – not GLMMs or complex random effects)
• via pvals.fnc in the languageR package (a wrapper for mcmcsamp)
• in AD Model Builder, possibly via the glmmADMB package (use the mcmc=TRUE option) or the R2admb package (write your own model definition in AD Model Builder), or outside of R
• via the sim function from the arm package (simulates the posterior only for the beta (fixed-effect) coefficients; not yet working with development lme4; would like a better formal description of the algorithm …?)
• fully Bayesian approaches
• via the MCMCglmm package
• glmmBUGS (a WinBUGS wrapper/R interface)
• JAGS/WinBUGS/OpenBUGS etc., via the rjags/r2jags/R2WinBUGS/BRugs packages
#### Status of mcmcsamp
mcmcsamp is a function for lme4 that is supposed to sample from the posterior distribution of the parameters, based on flat/improper priors for the parameters [ed: I believe, but am not sure, that these priors are flat on the scale of the theta (Cholesky-factor) parameters]. At present, in the CRAN version (lme4 0.999999-0) and the R-forge “stable” version (lme4.0 0.999999-1), this covers only linear mixed models with uncorrelated random effects.
As has been discussed in a variety of places (e.g. on r-sig-mixed-models, and on the r-forge bug tracker), it is challenging to come up with a sampler that accounts properly for the possibility that the posterior distributions for some of the variance components may be mixtures of point masses at zero and continuous distributions. Naive samplers are likely to get stuck at or near zero. Doug Bates has always been a bit unsure that mcmcsamp is really performing as intended, even in the limited cases it now handles.
Given this uncertainty about how even the basic version works, the lme4 developers have been reluctant to make the effort to extend it to GLMMs or more complex LMMs, or to implement it for the development version of lme4 … so unless something miraculous happens, it will not be implemented for the new version of lme4. As always, users are encouraged to write and share their own code that implements these capabilities …
#### Parametric bootstrap
The idea here is that in order to do inference on the effect of (a) predictor(s), you (1) fit the reduced model (without the predictors) to the data; (2) many times, (2a) simulate data from the reduced model; (2b) fit both the reduced and the full model to the simulated (null) data; (2c) compute some statistic(s) [e.g. t-statistic of the focal parameter, or the log-likelihood or deviance difference between the models]; (3) compare the observed values of the statistic from fitting your full model to the data to the null distribution generated in step 2.
• PBmodcomp in the pbkrtest package
• see the example in help("simulate-mer") in the lme4 package to roll your own, using a combination of simulate() and refit()
• bootMer in lme4 version >1.0.0
• a presentation at UseR! 2009 (abstract, slides) went into detail about a proposed bootMer package and suggested it could work for GLMMs too – but it does not seem to be active.
## Predictions and/or confidence (or prediction) intervals on predictions
Note that none of the following approaches takes the uncertainty of the random effects parameters into account … if you want to take RE parameter uncertainty into account, a Bayesian approach is probably the easiest way to do it.
The general recipe for computing predictions from a linear or generalized linear model is to
• figure out the model matrix $$X$$ corresponding to the new data;
• matrix-multiply $$X$$ by the parameter vector $$\beta$$ to get the predictions (or linear predictor in the case of GLM(M)s);
• extract the variance-covariance matrix of the parameters $$V$$
• compute $$X V X^{\prime}$$ to get the variance-covariance matrix of the predictions;
• extract the diagonal of this matrix to get variances of predictions;
• if computing prediction rather than confidence intervals, add the residual variance;
• take the square-root of the variances to get the standard deviations (errors) of the predictions;
• compute confidence intervals based on a Normal approximation;
• for GL(M)Ms, run the confidence interval boundaries (not the standard errors) through the inverse-link function.
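A minimal sketch of this recipe for a hypothetical Poisson glmer fit fit with a single predictor x and a new data frame newdat (confidence, not prediction, intervals):
X <- model.matrix(~ x, data = newdat)      ## model matrix for the new data
eta <- drop(X %*% fixef(fit))              ## linear predictor (log scale)
V <- as.matrix(vcov(fit))                  ## fixed-effect variance-covariance matrix
se <- sqrt(diag(X %*% V %*% t(X)))         ## SEs of the linear predictor
ci <- exp(cbind(lwr = eta - 1.96*se, fit = eta, upr = eta + 1.96*se))  ## back-transform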
### lme
library(nlme)
fm1 <- lme(distance ~ age*Sex, random = ~ 1 + age | Subject,
data = Orthodont)
plot(Orthodont,asp="fill") ## plot responses by individual
http://etheses.bham.ac.uk/863/ | eTheses Repository
# Agitation, mixing and mass transfer in simulated high viscosity fermentation broths
Hickman, Alan Douglas (1985)
Ph.D. thesis, University of Birmingham.
## Abstract
Gas-liquid mass transfer, agitator power consumption, rheology, gas-liquid mixing and gas hold-up have been studied in an agitated, sparged vessel of diameter, T = 0.3 m, with a liquid capacity of 0.02 m$$^3$$, unaerated liquid height = 0.3 m. The solutions of sodium carboxymethylcellulose used exhibit moderate viscoelasticity and shear thinning behaviour, obeying the power law over the range of shear rates studied. The gas-liquid mass transfer was studied using a steady state technique. This involves monitoring the gas and liquid phase oxygen concentrations when a microorganism (yeast) is cultured in the solutions of interest. Agitator power consumption was measured using strain gauges mounted on the impeller shaft. Various agitator geometries were used. These were: Rushton turbines ( D = T/3 and D = T/2 ), used singly and in pairs; Intermig impellers ( D = 0.58T ), used as a pair; and a 45° pitched blade turbine ( D = T/2 ), used in combination with a Rushton turbine. Gas hold-up and gas-liquid flow patterns were visually observed. In addition, the state of the culture variables (oxygen uptake rate and carbon dioxide production rate) was used to provide a respiratory quotient, the value of which can be linked to the degree of gas-liquid mixing in the vessel. Measurement of point values of the liquid phase oxygen concentration is also used to indicate the degree of liquid mixing attained. The volumetric mass transfer coefficient, k$$_L$$a, was found to be dependent on the conditions in which the yeast was cultivated, as well as being a function of time. These variations were associated with variations in solution composition seen over the course of each experiment. Steps were taken to ensure that further k$$_L$$a values were measured under identical conditions of the culture variables, in order to determine the effect on k$$_L$$a of varying viscosity, agitator speed and type and air flow rate. Increasing solution viscosity results in poorer gas-liquid mixing and a reduction in k$$_L$$a, as has been found by earlier workers. Thus high agitator speeds and power inputs are required to maintain adequate mass transfer rates. In the more viscous solutions used, large diameter dual impeller systems were required to mix the gas and liquid phases. Of these a pair of Rushton turbines ( D = T/2 ) gave the highest k$$_L$$a values at a given power input. In these solutions the dependence of k$$_L$$a on the gassing rate, which is seen in intermediate and low viscosity solutions, virtually disappears, with k$$_L$$a highly dependent on the power input and the apparent viscosity. At intermediate viscosities a smaller pair of Rushton turbines showed the most efficient mass transfer characteristics; here k$$_L$$a is dependent on the power input and the gassing rate, but independent of viscosity. This is linked to the flow regime found in the vessel, which at intermediate viscosities lies in the transition region between the laminar and turbulent flow regimes. Variations in gas hold-up, rising then falling with increasing impeller speed, were linked to variations in the gassed power number, falling then rising with increasing impeller speed. These effects are considered to be due to variations in the size of the gas filled cavities behind the impeller blades.
Type of Work: Ph.D. thesis
Supervisor: Nienow, A. W.
Faculty: Faculties (to 1997) > Faculty of Engineering
Department: Department of Chemical Engineering
Subjects: TP Chemical technology
Institution: University of Birmingham
ID Code: 863
This unpublished thesis/dissertation is copyright of the author and/or third parties. The intellectual property rights of the author or third parties in respect of this work are as defined by The Copyright Designs and Patents Act 1988 or as modified by any successor legislation. Any use made of information contained in this thesis/dissertation must be in accordance with that legislation and must be properly acknowledged. Further distribution or reproduction in any format is prohibited without the permission of the copyright holder.
https://openstax.org/books/chemistry-atoms-first/pages/1-4-measurements | Chemistry: Atoms First
# 1.4 Measurements
### Learning Objectives
By the end of this section, you will be able to:
• Explain the process of measurement
• Identify the three basic parts of a quantity
• Describe the properties and units of length, mass, volume, density, temperature, and time
• Perform basic unit calculations and conversions in the metric and other unit systems
Measurements provide the macroscopic information that is the basis of most of the hypotheses, theories, and laws that describe the behavior of matter and energy in both the macroscopic and microscopic domains of chemistry. Every measurement provides three kinds of information: the size or magnitude of the measurement (a number); a standard of comparison for the measurement (a unit); and an indication of the uncertainty of the measurement. While the number and unit are explicitly represented when a quantity is written, the uncertainty is an aspect of the measurement result that is more implicitly represented and will be discussed later.
The number in the measurement can be represented in different ways, including decimal form and scientific notation. (Scientific notation is also known as exponential notation; a review of this topic can be found in Appendix B.) For example, the maximum takeoff weight of a Boeing 777-200ER airliner is 298,000 kilograms, which can also be written as 2.98 × 10⁵ kg. The mass of the average mosquito is about 0.0000025 kilograms, which can be written as 2.5 × 10⁻⁶ kg.
Units, such as liters, pounds, and centimeters, are standards of comparison for measurements. When we buy a 2-liter bottle of a soft drink, we expect that the volume of the drink was measured, so it is two times larger than the volume that everyone agrees to be 1 liter. The meat used to prepare a 0.25-pound hamburger is measured so it weighs one-fourth as much as 1 pound. Without units, a number can be meaningless, confusing, or possibly life threatening. Suppose a doctor prescribes phenobarbital to control a patient’s seizures and states a dosage of “100” without specifying units. Not only will this be confusing to the medical professional giving the dose, but the consequences can be dire: 100 mg given three times per day can be effective as an anticonvulsant, but a single dose of 100 g is more than 10 times the lethal amount.
We usually report the results of scientific measurements in SI units, an updated version of the metric system, using the units listed in Table 1.2. Other units can be derived from these base units. The standards for these units are fixed by international agreement, and they are called the International System of Units or SI Units (from the French, Le Système International d’Unités). SI units have been used by the United States National Institute of Standards and Technology (NIST) since 1964.
Base Units of the SI System
Property Measured Name of Unit Symbol of Unit
length meter m
mass kilogram kg
time second s
temperature kelvin K
electric current ampere A
amount of substance mole mol
luminous intensity candela cd
Table 1.2
Sometimes we use units that are fractions or multiples of a base unit. Ice cream is sold in quarts (a familiar, non-SI base unit), pints (0.5 quart), or gallons (4 quarts). We also use fractions or multiples of units in the SI system, but these fractions or multiples are always powers of 10. Fractional or multiple SI units are named using a prefix and the name of the base unit. For example, a length of 1000 meters is also called a kilometer because the prefix kilo means “one thousand,” which in scientific notation is 10³ (1 kilometer = 1000 m = 10³ m). The prefixes used and the powers to which 10 is raised are listed in Table 1.3.
Common Unit Prefixes
Prefix Symbol Factor Example
femto f 10⁻¹⁵ 1 femtosecond (fs) = 1 × 10⁻¹⁵ s (0.000000000000001 s)
pico p 10⁻¹² 1 picometer (pm) = 1 × 10⁻¹² m (0.000000000001 m)
nano n 10⁻⁹ 4 nanograms (ng) = 4 × 10⁻⁹ g (0.000000004 g)
micro µ 10⁻⁶ 1 microliter (μL) = 1 × 10⁻⁶ L (0.000001 L)
milli m 10⁻³ 2 millimoles (mmol) = 2 × 10⁻³ mol (0.002 mol)
centi c 10⁻² 7 centimeters (cm) = 7 × 10⁻² m (0.07 m)
deci d 10⁻¹ 1 deciliter (dL) = 1 × 10⁻¹ L (0.1 L)
kilo k 10³ 1 kilometer (km) = 1 × 10³ m (1000 m)
mega M 10⁶ 3 megahertz (MHz) = 3 × 10⁶ Hz (3,000,000 Hz)
giga G 10⁹ 8 gigayears (Gyr) = 8 × 10⁹ yr (8,000,000,000 yr)
tera T 10¹² 5 terawatts (TW) = 5 × 10¹² W (5,000,000,000,000 W)
Table 1.3
### SI Base Units
The initial units of the metric system, which eventually evolved into the SI system, were established in France during the French Revolution. The original standards for the meter and the kilogram were adopted there in 1799 and eventually by other countries. This section introduces four of the SI base units commonly used in chemistry. Other SI units will be introduced in subsequent chapters.
#### Length
The standard unit of length in both the SI and original metric systems is the meter (m). A meter was originally specified as 1/10,000,000 of the distance from the North Pole to the equator. It is now defined as the distance light in a vacuum travels in 1/299,792,458 of a second. A meter is about 3 inches longer than a yard (Figure 1.23); one meter is about 39.37 inches or 1.094 yards. Longer distances are often reported in kilometers (1 km = 1000 m = 10³ m), whereas shorter distances can be reported in centimeters (1 cm = 0.01 m = 10⁻² m) or millimeters (1 mm = 0.001 m = 10⁻³ m).
Figure 1.23 The relative lengths of 1 m, 1 yd, 1 cm, and 1 in. are shown (not actual size), as well as comparisons of 2.54 cm and 1 in., and of 1 m and 1.094 yd.
#### Mass
The standard unit of mass in the SI system is the kilogram (kg). A kilogram was originally defined as the mass of a liter of water (a cube of water with an edge length of exactly 0.1 meter). It is now defined by a certain cylinder of platinum-iridium alloy, which is kept in France (Figure 1.24). Any object with the same mass as this cylinder is said to have a mass of 1 kilogram. One kilogram is about 2.2 pounds. The gram (g) is exactly equal to 1/1000 of the mass of the kilogram (10⁻³ kg).
Figure 1.24 This replica prototype kilogram is housed at the National Institute of Standards and Technology (NIST) in Maryland. (credit: National Institutes of Standards and Technology)
#### Temperature
Temperature is an intensive property. The SI unit of temperature is the kelvin (K). The IUPAC convention is to use kelvin (all lowercase) for the word, K (uppercase) for the unit symbol, and neither the word “degree” nor the degree symbol (°). The degree Celsius (°C) is also allowed in the SI system, with both the word “degree” and the degree symbol used for Celsius measurements. Celsius degrees are the same magnitude as those of kelvin, but the two scales place their zeros in different places. Water freezes at 273.15 K (0 °C) and boils at 373.15 K (100 °C) by definition, and normal human body temperature is approximately 310 K (37 °C). The conversion between these two units and the Fahrenheit scale will be discussed later in this chapter.
#### Time
The SI base unit of time is the second (s). Small and large time intervals can be expressed with the appropriate prefixes; for example, 3 microseconds = 0.000003 s = 3 × 10⁻⁶ s and 5 megaseconds = 5,000,000 s = 5 × 10⁶ s. Alternatively, hours, days, and years can be used.
### Derived SI Units
We can derive many units from the seven SI base units. For example, we can use the base unit of length to define a unit of volume, and the base units of mass and length to define a unit of density.
#### Volume
Volume is the measure of the amount of space occupied by an object. The standard SI unit of volume is defined by the base unit of length (Figure 1.25). The standard volume is a cubic meter (m³), a cube with an edge length of exactly one meter. To dispense a cubic meter of water, we could build a cubic box with edge lengths of exactly one meter. This box would hold a cubic meter of water or any other substance.
A more commonly used unit of volume is derived from the decimeter (0.1 m, or 10 cm). A cube with edge lengths of exactly one decimeter contains a volume of one cubic decimeter (dm³). A liter (L) is the more common name for the cubic decimeter. One liter is about 1.06 quarts.
A cubic centimeter (cm³) is the volume of a cube with an edge length of exactly one centimeter. The abbreviation cc (for cubic centimeter) is often used by health professionals. A cubic centimeter is also called a milliliter (mL) and is 1/1000 of a liter.
Figure 1.25 (a) The relative volumes are shown for cubes of 1 m³, 1 dm³ (1 L), and 1 cm³ (1 mL) (not to scale). (b) The diameter of a dime is compared relative to the edge length of a 1-cm³ (1-mL) cube.
#### Density
We use the mass and volume of a substance to determine its density. Thus, the units of density are defined by the base units of mass and length.
The density of a substance is the ratio of the mass of a sample of the substance to its volume. The SI unit for density is the kilogram per cubic meter (kg/m³). For many situations, however, this is an inconvenient unit, and we often use grams per cubic centimeter (g/cm³) for the densities of solids and liquids, and grams per liter (g/L) for gases. Although there are exceptions, most liquids and solids have densities that range from about 0.7 g/cm³ (the density of gasoline) to 19 g/cm³ (the density of gold). The density of air is about 1.2 g/L. Table 1.4 shows the densities of some common substances.
Densities of Common Substances
Solids Liquids Gases (at 25 °C and 1 atm)
ice (at 0 °C) 0.92 g/cm³ water 1.0 g/cm³ dry air 1.20 g/L
oak (wood) 0.60–0.90 g/cm³ ethanol 0.79 g/cm³ oxygen 1.31 g/L
iron 7.9 g/cm³ acetone 0.79 g/cm³ nitrogen 1.14 g/L
copper 9.0 g/cm³ glycerin 1.26 g/cm³ carbon dioxide 1.80 g/L
lead 11.3 g/cm³ olive oil 0.92 g/cm³ helium 0.16 g/L
silver 10.5 g/cm³ gasoline 0.70–0.77 g/cm³ neon 0.83 g/L
gold 19.3 g/cm³ mercury 13.6 g/cm³ radon 9.1 g/L
Table 1.4
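As a quick worked check of how these units relate (an illustrative aside): 1.0 g/cm³ = (1.0 × 10⁻³ kg)/(10⁻⁶ m³) = 1000 kg/m³, so the density of water, 1.0 g/cm³, is equivalent to 1000 kg/m³ in base SI units.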
While there are many ways to determine the density of an object, perhaps the most straightforward method involves separately finding the mass and volume of the object, and then dividing the mass of the sample by its volume. In the following example, the mass is found directly by weighing, but the volume is found indirectly through length measurements.
$\mathrm{density}=\dfrac{\mathrm{mass}}{\mathrm{volume}}$
### Example 1.1
#### Calculation of Density
Gold—in bricks, bars, and coins—has been a form of currency for centuries. In order to swindle people into paying for a brick of gold without actually investing in a brick of gold, people have considered filling the centers of hollow gold bricks with lead to fool buyers into thinking that the entire brick is gold. It does not work: Lead is a dense substance, but its density is not as great as that of gold, 19.3 g/cm³. What is the density of lead if a cube of lead has an edge length of 2.00 cm and a mass of 90.7 g?
#### Solution
The density of a substance can be calculated by dividing its mass by its volume. The volume of a cube is calculated by cubing the edge length.
$\text{volume of lead cube}=2.00\ \text{cm}\times2.00\ \text{cm}\times2.00\ \text{cm}=8.00\ \text{cm}^3$
$\mathrm{density}=\dfrac{\mathrm{mass}}{\mathrm{volume}}=\dfrac{90.7\ \text{g}}{8.00\ \text{cm}^3}=\dfrac{11.3\ \text{g}}{1.00\ \text{cm}^3}=11.3\ \text{g/cm}^3$
(We will discuss the reason for rounding to the first decimal place in the next section.)
#### Check Your Learning
(a) To three decimal places, what is the volume of a cube (cm³) with an edge length of 0.843 cm?
(b) If the cube in part (a) is copper and has a mass of 5.34 g, what is the density of copper to two decimal places?
(a) 0.599 cm³; (b) 8.91 g/cm³
### Example 1.2
#### Using Displacement of Water to Determine Density
This PhET simulation illustrates another way to determine density, using displacement of water. Determine the density of the red and yellow blocks.
#### Solution
When you open the density simulation and select Same Mass, you can choose from several 5.00-kg colored blocks that you can drop into a tank containing 100.00 L water. The yellow block floats (it is less dense than water), and the water level rises to 105.00 L. While floating, the yellow block displaces 5.00 L water, an amount equal to the weight of the block. The red block sinks (it is more dense than water, which has density = 1.00 kg/L), and the water level rises to 101.25 L.
The red block therefore displaces 1.25 L water, an amount equal to the volume of the block. The density of the red block is:
$\mathrm{density}=\dfrac{\mathrm{mass}}{\mathrm{volume}}=\dfrac{5.00\ \text{kg}}{1.25\ \text{L}}=4.00\ \text{kg/L}$
Note that since the yellow block is not completely submerged, you cannot determine its density from this information. But if you hold the yellow block on the bottom of the tank, the water level rises to 110.00 L, which means that it now displaces 10.00 L water, and its density can be found:
$\mathrm{density}=\dfrac{\mathrm{mass}}{\mathrm{volume}}=\dfrac{5.00\ \text{kg}}{10.00\ \text{L}}=0.500\ \text{kg/L}$
#### Check Your Learning
Remove all of the blocks from the water and add the green block to the tank of water, placing it approximately in the middle of the tank. Determine the density of the green block.
2.00 kg/L
https://academic.oup.com/bib/article/7/2/151/306000/Review-On-the-analysis-and-interpretation-of | ## Abstract
A remarkable inherent feature of cellular metabolism is that the concentrations of a small but significant number of metabolites are strongly correlated when measurements of biological replicates are performed. This review seeks to summarize the recent efforts to elucidate the origin of these observed correlations and points out several aspects concerning their interpretation. It is argued that correlations between metabolites differ profoundly from their transcriptomic and proteomic counterparts, and a straightforward interpretation in terms of the underlying biochemical pathways will unavoidably fail. It is demonstrated that the comparative correlations analysis offers a way to exploit the observed correlations to obtain additional information about the physiological state of the system.
## INTRODUCTION
Metabolomic measurements provide a wealth of information about the biochemical status of cells, tissues or organisms and play an increasing role to elucidate the function of the unknown and the novel genes [1–5]. Interpretation of these data relies crucially on computational approaches to large-scale data analysis and data visualization, such as principal and independent component analysis [6], multidimensional scaling [7], a variety of clustering techniques [2] and discriminant function analysis, among many others [1, 8]. Common to most of these methods is that they build upon interdependencies between metabolites, i.e. relationships between the concentrations of metabolites, as expressed by a covariance or a correlation matrix.
Indeed, a remarkable feature of metabolomic data is that a small but significant number of metabolite levels are highly interrelated when repeated measurements are performed [9, 10]. Contrary to a naïve interpretation, these correlations do not necessarily occur between metabolites that are neighbours on the metabolic map, but other, more subtle, mechanisms are involved [11–13]. This review seeks to discuss the underlying mechanisms that give rise to an observed pattern of correlations and seeks to point out some aspects of the interpretation of these correlations. It is argued that correlations, or more generally ‘associations’, between observed metabolite levels differ profoundly from their transcriptomic and proteomic counterparts. In the latter case, the concentrations are mainly governed by a network of regulatory interactions, whereas metabolites are synthesized from other metabolites via a network of biochemical reactions. As emphasized previously [11], this results in a level of interdependence between their concentrations that does not exist for transcripts and proteins. While the difference in the underlying causality does not hamper the application of most algorithms as heuristic research tools, for example to discriminate genotypes based on their metabolite profiles, a genuine interpretation of the data has to take this inherent difference into account—and eventually make use of it.
The review is organized as follows: in the first section, we summarize some quantitative measures of correlations between metabolite levels. Subsequently, a variety of possible scenarios that give rise to an observed pattern of correlations are discussed and specific causes for the correlations between measured metabolite concentrations are identified. The next section proceeds towards an interpretation of metabolite correlations as a characteristic fingerprint of the underlying system. Subsequently, we discuss the utilization of correlation networks in large-scale data analysis and point out several drawbacks and pitfalls in their interpretation. In the last section, alternative approaches to metabolomic data analysis are given and the key points are summarized.
## MEASURES OF ASSOCIATIONS BETWEEN METABOLITES
Several measures to quantify the association between metabolite levels have been suggested in the literature, each with their own merits and drawbacks. The most common choices are summarized in Figure 1. Though in the following we will mainly refer to the usual Pearson correlation coefficient, the term ‘correlation’ is understood here in a rather general sense, implying a statistically significant relationship between two metabolites.
Figure 1:
Among the most common measures of association between metabolite levels are the elements Γ(X, Y) of the covariance matrix and the corresponding Pearson correlation ρ(X, Y) = Γ(X, Y)/(σ_X σ_Y). When applied to rank-ordered data, the Pearson correlation transforms into the Spearman rank correlation ρc(X, Y) (A). A complementary approach to specify the relationship between two metabolites is the slope or ratio of their respective concentrations. Note that the Pearson correlation itself is not sensitive to the slope, but is given as the geometric mean of the reciprocal slopes; for deterministic relationships this yields ρ(X, Y) = 1, whereas for uncorrelated data ρ(X, Y) ≃ 0 (B). In the case of nonlinear relationships between metabolite concentrations, alternative measures, such as the mutual information [14], are more appropriate (C).
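As a concrete illustration of these measures (not part of the original figure; the metabolite names and numbers below are purely hypothetical), the covariance, Pearson and Spearman correlations of two measured metabolite profiles can be computed along the following lines:

```python
import numpy as np
from scipy import stats

# Hypothetical replicate measurements of two metabolite levels (arbitrary units)
x = np.array([1.02, 0.98, 1.10, 0.95, 1.05, 1.01])  # e.g. glucose-6-phosphate
y = np.array([0.99, 0.97, 1.08, 0.94, 1.06, 1.00])  # e.g. fructose-6-phosphate

cov_xy = np.cov(x, y)[0, 1]           # element Gamma(X, Y) of the covariance matrix
pearson = np.corrcoef(x, y)[0, 1]     # Pearson correlation rho(X, Y)
spearman, _ = stats.spearmanr(x, y)   # Pearson correlation of the rank-ordered data

print(cov_xy, pearson, spearman)
```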
## THE ORIGIN OF CORRELATIONS
An interpretation of the observed correlations is, of course, intimately related to the experimental situation under which the metabolite profiles were obtained. To be able to observe a concomitant change in metabolite concentrations at all, presupposes a source of variation for (at least some) metabolites. We can broadly distinguish between three different scenarios: (i) specific perturbations—in this case, the change in metabolite levels results from a specific and localized intervention within the underlying network of biochemical reactions, typically an over-expression or knockout of a gene coding for an enzyme. (ii) global perturbations—this includes measurement of transient or diurnal time series, as well as the response to temperature (heat shock) or other environmental changes (stress). In the case of global perturbations, changes are induced at multiple sites within the metabolic network or are brought about by external factors that influence a large number of metabolites simultaneously. (iii) intrinsic variability—an intriguing feature of metabolomic data is that some metabolites are strongly interrelated even when biological replicates under identical experimental conditions are measured [9, 10]. In this case, changes of metabolite levels do not result from deliberate experimental perturbations or changes of the physiological state, but are induced by the intrinsic variability of cellular metabolism [11–13].
Obviously, metabolite correlations that arise from global perturbations are the hardest to interpret in terms of the underlying biochemical network. For example, given a diurnal time series, all metabolic compounds that show a diurnal variation will inevitably correlate, conveying no, or only little, information about their causal dependency or mutual proximity within the metabolic map. Similarly, a time-dependent transient response to a perturbation may either result in no detectable change, an increase or a decrease of a metabolic compound, resulting inevitably in a large number of ‘correlations’ between metabolites (see Figure 2 for examples). Moreover, metabolism itself is considered to be a rather fast process [15], thus, most current measurements of time-dependent metabolite concentrations do not capture the intrinsic timescales of biochemical processes, but focus on slower timescales that are induced by changes in protein expression or circadian regulation.
Figure 2:
Different scenarios of observed correlations: measurements of diurnal time series (A1) inevitably lead to a large number of ‘correlations’ between metabolites (A2), implying only that both compounds are following a diurnal pattern, but allowing no further conclusions about the underlying causality (A). Likewise in the case of a transient response, metabolites may either show no change, an increase or a decrease in their concentration (B1), resulting inevitably in the corresponding correlations (B2). In contrast to this, specific and localized changes within the metabolic network allow to reveal properties of metabolic regulation [16] (C). In the following, we will mainly focus on the correlations observed within a specific physiological state, disclosing additional information about the underlying state of the system (D).
For specific perturbations, the situation is slightly different. Overexpression of individual enzymes does indeed offer the possibility to infer properties of the metabolic system, as will be discussed in the last section. For the moment, though, we will focus on an interpretation of metabolite correlations in the absence of deliberate experimental perturbations. In this case, the observed correlations are induced by diminutive fluctuations within the metabolic system itself, which then propagate through the system and give rise to a specific pattern of correlations, depending on the physiological state of the system [11–13, 16]. When measuring a population of biological replicates, intrinsic fluctuations may arise due to at least two different mechanisms: first, even under identical experimental conditions, organisms are never actually identical. Inevitable small differences in enzyme concentrations, reflecting the differences in gene expression, affect metabolite concentrations and consequently result in interdependencies between metabolites [11]. Second, cellular metabolism is influenced by a number of environmental factors such as light intensity or nutrient supply. Again, rapidly changing diminutive differences, even in an approximately constant environment, result in changes in metabolite concentrations, which then propagate through the metabolic network and induce a specific pattern of correlations [12, 16].
Common to both cases is that the resulting correlations are a global property of the system, i.e. whether two metabolites correlate or not is a combined result of many, if not all, biochemical reactions, regulatory interactions and the inducing fluctuations that constitute the system. Consistent with experimental observations, both the scenarios lead to a small number of characteristic correlations which do not necessarily occur only for neighbours in the underlying metabolic map. A schematic example is given in Figure 3.
Figure 3:
For replicate measurements, diminutive differences within the system induce correlations between metabolites. Shown is a simple example system, consisting of five metabolites and five reactions. The system includes a reversible rapid-equilibrium reaction ν₂ = k₂([S₁] − [S₂]/q₂) and a conserved moiety [A₁] + [A₂] = const. All reactions are modelled with mass-action kinetics νᵢ = kᵢ ∏ⱼ Sⱼ with unit parameters kᵢ = 1, except k₂ = 10. Differences between replicate samples are introduced by slight differences in model parameters, resulting in a characteristic pattern of correlations between observed steady-state concentrations.
Camacho et al. [11] identified several distinct mechanisms that result in a high correlation between two metabolites in replicate experiments. These include: (i) chemical equilibrium—two metabolites near chemical equilibrium will show a high correlation, with their concentration ratio approximating the equilibrium constant. (ii) mass conservation—within a moiety-conserved cycle, at least one member should have a negative correlation with another member of the conserved group. (iii) asymmetric control—if one parameter dominates the concentration of two metabolites, intrinsic fluctuations of this parameter result in a high correlation between these two metabolites. (iv) unusually high variance in the expression of a single gene. Similar to the previous situation, but the resulting correlation is not due to a high sensitivity towards a particular parameter, but due to an unusually high variance of this parameter. In particular, a single enzyme that carries a high variance will induce negative correlations between its substrate and product metabolites.
However, it is emphasized that the resulting correlations are still a systemic property of the underlying metabolic network. To actually distinguish the specific mechanisms responsible for an observed correlation does require additional knowledge [11]. Nonetheless, as we will discuss in the subsequent section, correlations that arise from intrinsic fluctuations do provide additional information about the physiological state of the system and represent a promising starting point for data analysis.
## THE INTERPRETATION OF CORRELATIONS
Despite the difficulties in assessing the causal origin of a specific correlation, the primary interest in the analysis of metabolite correlations stems from the fact that the observed pattern indeed provides information about the physiological state of a metabolic system. As already indicated in Figure 2D, a transition to a different physiological state may not only involve changes in the average levels of the measured metabolites, but additionally may also involve changes in their pair-wise correlations. Likewise, a metabolite which shows no significant change in the average level between two different experimental conditions or genotypes may still show an alteration of its pair-wise correlations with other metabolites. This observation leads to the interpretation of the resulting pattern of correlation as a global fingerprint of the physiological state. In this way, the analysis of correlations exploits the intrinsic variability of a metabolic system to obtain the additional features of the state of the system.
The situation can be best described by an analogy in physics. Consider a particle at rest in a potential, as depicted in Figure 4. In an ideal noise-free world, repeated measurements (replicates) result in identical values for the position of the particle, probably only impaired by the measurement noise. However, in the presence of internal fluctuations, repeated measurements yield a characteristic distribution. This distribution is determined by the shape of the potential, as well as the nature of the intrinsic fluctuations. Changes in the system thus result in concomitant changes of the observed distribution, conveying information that would not be accessible by observation of the average position alone. Along similar lines, the intrinsic fluctuations of a metabolic system induce a characteristic pattern of interdependencies between metabolites, depending on the genetic and experimental background.
Figure 4:
Intrinsic fluctuations give rise to a characteristic distribution, conveying information about the shape of the underlying potential.
Of course, the interpretation of metabolite correlations as a global snapshot of the physiological state is only valid for repeated measurements of biological replicates under identical experimental conditions. Having obtained such replicate measurements, it opens the way to perform an analysis of differential correlations, i.e. a systematic comparison of correlations between different states and tissue types. An altered pattern of correlations, in addition to the changes in average metabolite levels, points to changes in the underlying state of the system. On the other hand, and maybe more important, correlations that are preserved across multiple experimental conditions allow one to identify (at least candidates for) rapid equilibrium reactions between metabolites and possible mass conservation relationships. Indeed, in a recent preliminary study [16], comparing four different states and tissue types (Arabidopsis thaliana leaf, Nicotiana tabacum leaf, Solanum tuberosum leaf and tuber), a number of preserved correlations were detected. These include, in addition to the obvious high correlation between glucose-6-phosphate and fructose-6-phosphate observed in almost all the studies so far, the metabolite pairs fumarate with malate as well as serine with threonine.
More striking, however, is the existence of reversed correlations, i.e. a situation in which the correlation between two metabolites changes its sign [10, 12, 16]. This points to a marked change in the underlying regulation of the system and possibly reflects the existence of multiple steady states. Indeed, the phenomenon of reversed correlations is also observed in the numerical models of cellular metabolism involving multistationarity and switching between different states [16]. However, other causes of reversed correlations are also conceivable and a more detailed evaluation is still awaited.
It should be emphasized that a systematic comparison of correlation across multiple experimental states serves also as a crucial test for the validity of the analysis. Assuming that the observed pattern of correlations represents a global fingerprint of the underlying physiological state, vastly different states or tissue types should manifest themselves as different patterns of observed correlations. On the other hand, closely related physiological states should give rise to rather similar patterns of correlations. That is, differences or ‘distance’ in terms of observed correlations should reflect and correspond to differences or ‘distance’ in physiological state, tissue type and experimental condition. The observed correlations should be robust with respect to minor changes in the underlying system, while at the same time they should be susceptible for marked changes in the underlying biochemical system. While the preliminary studies seem to support this view [10, 16], large-scale comparisons of metabolic correlations are still sparsely reported. Thus, a concluding evaluation of the validity and applicability of large-scale metabolomic correlations analysis requires further experimental verification.
## METABOLOMIC NETWORK ANALYSIS
Metabolomics, by definition, usually involves a large number of measured metabolites, necessitating the use of simple and effective visualization procedures. Among the most basic methods to depict the observed correlations are metabolomic correlation networks, schematically shown in Figure 5. Metabolomic correlation networks represent a coarse-grained view of the observed correlations: All metabolites are arranged in a two-dimensional plane, such that their pair-wise distances approximately reflect their pair-wise correlation—a procedure known as multidimensional scaling and already used early in the analysis of metabolic data [7]. To create the final network, two metabolites are connected with a ‘link’ if their pair-wise correlation exceeds a given threshold.
Figure 5:
Among the simplest methods to visualize the observed correlations are metabolomic correlation or association networks. Two metabolites are connected with a link if their pair-wise correlation exceeds a given threshold.
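As an illustration of this construction (a minimal sketch only; the threshold of 0.8 and the data layout are assumptions, not values taken from the text), the adjacency matrix of such a network can be obtained directly from a replicates-by-metabolites data matrix:

```python
import numpy as np

def correlation_network(data, threshold=0.8):
    """Link metabolites i and j if their absolute Pearson correlation exceeds the threshold.

    data: array of shape (replicates, metabolites)
    returns: boolean adjacency matrix of shape (metabolites, metabolites)
    """
    corr = np.corrcoef(data, rowvar=False)  # pairwise Pearson correlations
    adjacency = np.abs(corr) > threshold
    np.fill_diagonal(adjacency, False)      # no self-links
    return adjacency
```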
As these kinds of correlation or association networks have also received widespread interest in other fields and applications [17–20], some problems concerning their interpretation should be pointed out. Obviously, and as is widely acknowledged, correlation networks should not be confused with the actual causal dependencies within the underlying system. Nonetheless, a large number of studies, including work on metabolomic data [21, 22], have focused on the topological properties of these networks to obtain information about the large-scale organization of the system—mostly revealing a ‘scale-free nature’ of the network. Apart from several minor problems, such as the fact that the topology predominantly depends on the choice of the correlation threshold and the resulting networks may range from completely unconnected to fully connected, a few more fundamental questions also arise. Most importantly, a correlation matrix exhibits a number of characteristic features that are inevitably reflected within the topology of the resulting correlation network. For example, correlations are transitive, i.e. given that a node A shows a high correlation to a node B, as well as to another node C, it must be expected that B and C also correlate. For the usual Pearson correlation, this relationship can be made quantitative, resulting in an inequality for the pair-wise correlations which holds for any triplet of nodes (see below). On the level of correlation networks, this results in a high clustering coefficient, i.e. nodes with a common neighbour tend to be also connected. Thus, rather unsurprisingly, a recent study reports an ‘unexpected assortative feature’, i.e. an over-representation of triangles, for biological correlation networks and concludes that the clustering coefficient is orders of magnitude larger than those of equivalent random networks [17]. A similar reasoning holds for other topological features, such as the observed hierarchies in the network or the analysis of motifs and cliques [22].
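For reference, one standard form of this transitivity constraint (it follows from the positive semidefiniteness of any correlation matrix and is quoted here for illustration, not taken from the cited study) reads:

$$\rho_{AB}\,\rho_{AC}-\sqrt{\left(1-\rho_{AB}^{2}\right)\left(1-\rho_{AC}^{2}\right)}\;\le\;\rho_{BC}\;\le\;\rho_{AB}\,\rho_{AC}+\sqrt{\left(1-\rho_{AB}^{2}\right)\left(1-\rho_{AC}^{2}\right)}$$

so two strong correlations ρ_AB and ρ_AC necessarily impose a lower bound on ρ_BC.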
In almost all of these cases, the problem arises out of the notion of an ‘equivalent random network’. Comparing the observed properties of correlation networks against those found for randomized networks, as is required to assess their significance, is likely to result in highly ‘significant’ differences. However, a random network, even with preserved degree distribution, is not an appropriate null model for a correlation network. In order to distinguish specific properties of the underlying system from those features that are inherent to a correlation matrix, it is necessary that the randomized network itself is consistent with a correlation network, i.e. it represents a network that is generated from a correlation matrix but lacks the distinctive features of the original network. Otherwise, the statistical significance of any observed feature, such as the clustering coefficient, cannot be assessed in a meaningful way.
One possible way to relate observed properties of the correlation network to the underlying system is to construct artificial metabolic systems, as has previously been done for genetic networks [23], and then by comparing the resulting correlation networks with the underlying models. Preliminary results indicate that even rather simplistic artificial metabolic systems are able to reproduce some key properties of observed correlation networks (unpublished data, R.S.). However, as yet, the interpretation of topological features of correlation networks in terms of organizing principles of the underlying system, remains elusive.
## CONCLUSIONS AND ALTERNATIVE APPROACHES
According to the view put forward here, observed correlations between metabolite levels obtained from biological replicates represent a promising additional source of information about the state of a metabolic system. However, their interpretation in terms of the underlying biochemical pathways is not straightforward and largely defies an intuitive analysis. In particular, an evaluation based on a ‘guilt-by-association’ principle must unavoidably fail. This puts the metabolomic data in marked contrast to other ‘omics’ data, such as gene expression measurements, where an analysis based on ‘guilt-by-association’, though also hampered by some fundamental difficulties, has already proven to be highly successful [24]. In our opinion, this difference is due to the distinct nature of the underlying system: while for transcriptomics, the notion that correlated genes are likely to be involved in similar regulatory processes has some intuitive justification, such a reasoning does not hold for cellular metabolism.
Likewise, an analysis of the topological properties of the resulting correlation networks is, as yet, unlikely to reveal properties about the large-scale organization of the underlying system. Though widely popular across several disciplines, no study so far has succeeded in establishing a concise relationship between observed topological features of such a network and the underlying biochemical system. Furthermore, most efforts to investigate the topological properties of correlation networks make no, or only little, reference to the specific experimental conditions under which the measurements were obtained.
As a more straightforward approach, we thus proposed to focus on a systematic comparison of the observed correlations across different experimental conditions and genetic strains [16]. Since it is possible to pinpoint several key mechanisms that lead to a high correlation between two metabolites, preserved correlations across multiple conditions can be expected to identify the invariant features of cellular metabolism. Likewise, though still awaiting experimental verification, changes in correlations will point to the key points at which metabolic regulation has changed. In this way, the comparative analysis of correlations extends and supplements the more common approach to look for macroscopic changes in metabolite concentrations in response to experimental interventions in the metabolic system. As replicate measurements are already a necessary prerequisite to assess the statistical significance of macroscopic changes in metabolite concentrations, and comparative analysis of correlations requires only a slightly larger number of replicates, the experimental burdens of this approach seem acceptable.
A number of further developments are possible. In this review, we have focused solely on the pair-wise correlation between metabolites while actual relationships, for example rapid equilibrium reactions, may involve more than two metabolites simultaneously. Drawing upon earlier work on the analysis of gene expression data, concepts like the partial correlation [25] may thus allow to overcome some of the difficulties related to the simple pair-wise correlation. Again, however, we have to point out that a one-to-one application of concepts derived from transcriptomics, even when already successfully applied in this field, should be treated with caution. If indeed the variability among biological replicates is a consequence of slight differences in gene expression, the respective enzymes constitute a set of hidden variables that severely hampers a straightforward analysis of the partial correlations for metabolomic data. This hindrance also emphasizes the importance to consider cellular metabolism as a part of an integrated cellular system, i.e. to complement metabolomics with quantitative data from transcriptomics and proteomics. Only then, we can expect to truly uncover the regulation and design principles that constitute cellular metabolism.
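For completeness, the first-order partial correlation referred to above has the standard form (quoted for illustration; Z denotes the variable being controlled for):

$$\rho_{XY\cdot Z}=\frac{\rho_{XY}-\rho_{XZ}\,\rho_{YZ}}{\sqrt{\left(1-\rho_{XZ}^{2}\right)\left(1-\rho_{YZ}^{2}\right)}}$$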
Along similar lines, at least equally powerful could prove an application of several recent methods that aim to reconstruct cellular systems directly from measurements of specific perturbations [26, 27]. Here, metabolomics can draw upon the vast body of theory developed in the past decades in the realm of metabolic control analysis [15, 28]. While currently the application of such concepts is restricted to rather small subsystems, further improvements in experimental methodology could allow one, systematically, to assess metabolic regulation on a larger-scale. In this sense, the analysis of metabolomic data has only just begun.
Key Points
• The concentrations of a small but significant number of metabolites are strongly correlated when repeated measurements are performed.
• The origin and interpretation of correlations between observed metabolite levels differs profoundly from their transcriptomic and proteomic counterparts.
• It is possible to pinpoint specific mechanisms that give rise to observed correlations between metabolite levels.
• A comparison of metabolite correlations across multiple experimental conditions can be expected to identify invariant features of cellular metabolism.
• The analysis of observed correlations in terms of correlation networks involves a number of fundamental difficulties with respect to their interpretation.
## References
1. Goodacre R, Vaidyanathan S, Dunn WB, et al. Metabolomics by numbers: acquiring and understanding global metabolite data. Trends Biotechnol 2004;22:245-52.
2. Sumner LW, Mendes P, Dixon RA. Large-scale phytochemistry in the functional genomics era. Phytochemistry 2003;63:817-36.
3. Kell DB. Metabolomics and systems biology: making sense of the soup. Curr Opin Microbiol 2004;7:1-12.
4. Sweetlove LJ, Fernie AR. Regulation of metabolic networks: understanding metabolic complexity in the systems biology era. New Phytol 2005;168:9-24.
5. Fernie AR, Trethewey RN, Krotzky A, Willmitzer L. Metabolite profiling: from diagnostics to systems biology. Nat Rev Mol Cell Biol 2004;5:763-9.
6. Scholz M, Gatzek S, Sterling A, et al. Metabolite fingerprinting: detecting biological features by independent component analysis. Bioinformatics 2004;20:2447-54.
7. Arkin A, Shen P, Ross J. A test case of correlation metric construction of a reaction pathway from measurements. Science 1997;277:1275-9.
8. Steuer R, Morgenthal K, Weckwerth W, Selbig J. A gentle guide to the analysis of metabolomic data. In: Weckwerth W (ed). Metabolomics: Methods and Protocols. Methods in Molecular Biology, vol. 358. Totowa, NJ: Humana Press Inc.
9. Roessner U, Luedemann A, Brust D, et al. Metabolic profiling allows comprehensive phenotyping of genetically or environmentally modified plant systems. Plant Cell 2001;13:11-29.
10. Martins AM, Camacho D, Shuman J, et al. A systems biology study of two distinct growth phases of Saccharomyces cerevisiae cultures. Curr Genomics 2004;5:649-63.
11. Camacho D, de la Fuente A, Mendes P. The origin of correlations in metabolomics data. Metabolomics 2005;1:53-63.
12. Steuer R, Kurths J, Fiehn O, Weckwerth W. Observing and interpreting correlations in metabolomic networks. Bioinformatics 2003;19:1019-26.
13. Steuer R, Kurths J, Fiehn O, Weckwerth W. Interpreting correlations in metabolomic networks. Biochem Soc Trans 2003;31:1476-8.
14. Steuer R, Kurths J, Daub CO, et al. The mutual information: detecting and evaluating dependencies between variables. Bioinformatics 2002;18 Suppl 2:231-40.
15. Heinrich R, Schuster S. The Regulation of Cellular Systems. New York: Chapman & Hall, 1996.
16. Morgenthal K, Weckwerth W, Steuer R. Metabolomic networks in plants: transitions from pattern recognition to biological interpretation. Biosystems 2006;83:108-17.
17. Eguiluz VM, Chialvo DR, Cecchi GA, et al. Scale-free brain functional networks. Phys Rev Lett 2005;94:018102.
18. Agrawal H. Extreme self-organization in networks constructed from gene expression data. Phys Rev Lett 2002;89:268702.
19. Lukashin AV, Lukashev ME, Fuchs R. Topology of gene expression networks as revealed by data mining and modeling. Bioinformatics 2003;19:1909-16.
20. Carter SL, Brechbuehler CM, Griffin M, Bond AT. Gene co-expression network topology provides a framework for molecular characterization of cellular state. Bioinformatics 2004;20:2242-50.
21. Weckwerth W, Loureiro M, Wenzel K, Fiehn O. Differential metabolic networks unravel the effects of silent plant phenotypes. Proc Natl Acad Sci USA 2004;101:7809-14.
22. Kose F, Weckwerth W, Linke T, Fiehn O. Visualizing plant metabolomic correlation networks using clique-metabolite matrices. Bioinformatics 2001;17:1198-208.
23. Mendes P, Sha W, Ye K. Artificial gene networks for objective comparison of analysis algorithms. Bioinformatics 2003;19 Suppl 2:ii122-29.
24. D'haeseleer P, Liang S, Somogyi R. Genetic network inference: from co-expression clustering to reverse engineering. Bioinformatics 2000;16:707-26.
25. de la Fuente A, Bing N, Hoeschele I, Mendes P. Discovery of meaningful associations in genomic data using partial correlation coefficients. Bioinformatics 2004;20:3565-74.
26. Raamsdonk LM, Teusink B, Broadhurst D, et al. A functional genomics strategy that uses metabolome data to reveal the phenotype of silent mutations. Nat Biotechnol 2001;19:45-50.
27. de la Fuente A, Brazhnik P, Mendes P. Linking the genes: inferring quantitative gene networks from microarray data. Trends Genet 2002;18:395-8.
28. Cornish-Bowden A, Hofmeyr JHS. Determination of control coefficients in intact metabolic systems. Biochem J 1994;198:367-75.
https://community.rti.com/forum-topic/rtiddsgen-utility | # rtiddsgen utility
Can anyone please tell me how I can set the environment variables for the rtiddsgen utility? I have read the documentation and tried every possible thing, but I am still unable to run rtiddsgen to generate code from an IDL file. If possible, please specify a step-by-step solution with an example. Thanks in advance for the help. I am using the Windows 7, 64-bit platform.
Hi,
We normally only set NDDSHOME to point to the directory where we installed NDDS (which contains, for example, versions.xml). That should be enough to invoke rtiddsgen. If that fails, it would help if you posted the command you use to invoke rtiddsgen, the IDL file, and the errors.
Johnny
Yes, the error the command prompt gives is "rtiddsgen is not recognized as an internal or external command".
One more thing: do we need to run the vcvars32.bat file before running rtiddsgen?
When you invoke it without a full path you should make sure %NDDSHOME%\bin is part of your path, our scripts invoke %NDDSHOME%\bin\rtiddsgen
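For example, something along these lines in a command prompt (a sketch only; adjust the install path to wherever you installed Connext):
set NDDSHOME=C:\Program Files\rti_connext_dds-5.2.0
set PATH=%NDDSHOME%\bin;%PATH%
rtiddsgen myfile.idl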
I have given the path as follows:
C:\program files\rti_connext_dds-5.2.0\bin>rtiddsgen myfile.idl
Is it a correct way to do it ?
I would do the following
set NDDSHOME=C:\program files\rti_connext_dds-5.2.0
And compile your IDL using the command below. You normally don't put your IDL into the RTI tree
%NDDSHOME%\bin\rtiddsgen myfile.idl
https://nickburns2013.wordpress.com/2013/07/20/project-euler-12/ | # Project Euler #12
Project Euler # 12 – Highly divisible triangular numbers
I am taking a bit of a jump here and moving right on to #12. The reason is that this question came up on Stack Overflow, and so I was playing with it anyways, and it got me really interested in testing out the performance of generators vs. a simple calculation.
In this Project Euler challenge we have to consider the factors of triangular numbers. Triangular numbers are sums of the first n natural numbers. For example, the first 6 triangular numbers are:
triangular numbers : {1, 3, 6, 10, 15, 21, ...}
{1} => 1
{1 + 2} => 3
{1 + 2 + 3} => 6
{1 + 2 + 3 + 4} => 10
{1 + 2 + 3 + 4 + 5} => 15
{1 + 2 + 3 + 4 + 5 + 6} => 21
... and so on
My first thought was to take a recursive approach. But then, we have discussed the limitations of recursion in Python. So I thought, “great opportunity to use generators!”. The basic algorithm is laid out below:
1: generator function to create the sequence of triangular numbers with a suitably high limit.
2: get_factors() function to find all the factors of any given (triangular) number
3: return the total number of factors
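A minimal sketch of this approach (illustrative only; the names, the limit and the 500-divisor target below are my own choices for this write-up, not the exact code I profiled):

def triangular_numbers(limit):
    # generate triangular numbers lazily, keeping a running total
    total = 0
    for n in range(1, limit):
        total += n
        yield total

def count_factors(number):
    # count divisors by checking candidates up to the square root
    count = 0
    i = 1
    while i * i <= number:
        if number % i == 0:
            count += 1 if i * i == number else 2
        i += 1
    return count

for t in triangular_numbers(10**6):
    if count_factors(t) > 500:
        print(t)
        break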
The nice thing is that this ran, and ran relatively quickly (< 10 secs) to give the correct answer. But I was intrigued, could I do better with a direct calculation for any given triangular number?
For anyone who likes kakuro puzzles, this will be a very familiar pattern 🙂 It is known as the ‘sum-of-all-integers’ and is defined as:
$\sum_{n=1}^{i} n = \frac{i(i + 1)}{2}$
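In code, this closed form replaces the running sum with a one-liner (again just a sketch):

def triangular(i):
    return i * (i + 1) // 2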
So I substituted this in place of the generator function in step 1 to test its performance. It turns out that the direct calculation is approximately 30% faster on average (for both a small order of factors and a high order of factors). The results from a profile test are below:
Using a generator function:
$ python3 -m cProfile euler12.py
842161320
1423265 function calls in 116.281 seconds

Using a direct calculation:
$ python3 -m cProfile euler12.py
842161320
1382224 function calls in 90.209 seconds
I’m intrigued. Instinctively, I thought that they would have been roughly the same speed. My reasoning being that a generator takes full advantage of lazy evaluation, so there should be little additional overhead. Of course, this made me look closer at implementation of the generator. I had implemented this to explicitly sum from 1..n and I reason that this calculation becomes quite intensive for large values of n. Contrast this to the sum of all integers, and it avoids the need to explicitly sum over a range. Well that explains the improvement!
I wonder: if I memoized the previous triangular number, could I make the generator equally as fast (or faster!) than the calculation for the sum of all integers? I will experiment with this one day.
https://www.impan.pl/pl/wydawnictwa/czasopisma-i-serie-wydawnicze/studia-mathematica/all/218/2/89875/preconditioners-and-korovkin-type-theorems-for-infinite-dimensional-bounded-linear-operators-via-completely-positive-maps | Studia Mathematica
## Preconditioners and Korovkin-type theorems for infinite-dimensional bounded linear operators via completely positive maps
### Volume 218 / 2013
Studia Mathematica 218 (2013), 95-118 MSC: 41A36, 47A58, 47B35. DOI: 10.4064/sm218-2-1
#### Abstract
The classical as well as noncommutative Korovkin-type theorems deal with the convergence of positive linear maps with respect to different modes of convergence, like norm or weak operator convergence etc. In this article, new versions of Korovkin-type theorems are proved using the notions of convergence induced by strong, weak and uniform eigenvalue clustering of matrix sequences with growing order. Such modes of convergence were originally considered for the special case of Toeplitz matrices and indeed the Korovkin-type approach, in the setting of preconditioning large linear systems with Toeplitz structure, is well known. Here we extend this finite-dimensional approach to the infinite-dimensional context of operators acting on separable Hilbert spaces. The asymptotics of these preconditioners are evaluated and analyzed using the concept of completely positive maps. It is observed that any two limit points, under Kadison's BW-topology, of the same sequence of preconditioners are equal modulo compact operators. Moreover, this indicates the role of preconditioners in the spectral approximation of bounded self-adjoint operators.
#### Authors
• K. Kumar, Department of Mathematics, CUSAT, Cochin, India
• M. N. N. Namboodiri, Department of Mathematics, CUSAT, Cochin, India
• S. Serra-Capizzano, Department of Science and High Technology, Università Insubria, Como Campus, via Valleggio 11, 22100 Como, Italy
https://www.physicsforums.com/threads/gravity-and-velocity.154460/ | # Gravity and Velocity
1. Feb 3, 2007
### Izzhov
I recently saw the equation $$v= \sqrt{2 \int_{r_1}^{r_2} \frac{GM}{r}}$$ (G=gravitational constant, M=larger mass, r=radius). I know it has to do with the velocity of an object attracted by a larger object's gravity, which is pulling it a distance $$r_2-r_1$$ taking into account the change in force of gravity. My question is: does this equation represent the instantaneous velocity at r1 or the average velocity? Also, how can this equation be changed to include the force of gravity of the smaller object on the larger one?
2. Feb 3, 2007
### Staff: Mentor
Edited: Please note my correction to the original equation!
Firstly, that equation is not quite right. The integral (under the square root sign) should be:
$$\int_{r_1}^{r_2} \frac{GM}{r^2} dr = \left[ - \frac{GM}{r} \right]_{r_1}^{r_2}$$
$$v= \sqrt{2 \int_{r_1}^{r_2} \frac{GM}{r^2} dr}$$
The corrected equation represents the instantaneous speed of the falling object at r1, assuming it started from rest at r2 (where r2 > r1).
The integral comes from calculating the increase in KE of the two mass system. If the smaller mass is a tiny fraction of the larger's mass, then all that energy becomes KE of the smaller mass--as in the given equation for v. (That's where that equation comes from.) But if you want to include the motion of the larger mass as well, you'll have to distribute that KE across both masses and apply conservation of momentum to find the relative speed of the masses.
Last edited: Feb 4, 2007
3. Feb 3, 2007
### Izzhov
So, if I wanted to include the KE that the smaller mass' force of gravity produces, would the new equation be $$v= \sqrt{2 \int_{r_1}^{r_2} \frac{GMm}{r}}$$ where m is the smaller mass?
4. Feb 3, 2007
### Staff: Mentor
No. Check and you'll see that the units don't match across that equation.
Before you worry about modifying the original equation, first understand how it was obtained; it starts with this statement of conservation of energy:
$$\frac{1}{2}mv^2 = \int_{r_1}^{r_2} \frac{GMm}{r^2} dr$$
Last edited: Feb 4, 2007
5. Feb 3, 2007
### Izzhov
Would the KE be distributed evenly?
6. Feb 4, 2007
### Gib Z
None of these problems would come up if you were thinking about it relativistically >.<. Equating (1/2)mv^2 as KE makes it obvious to me this is classical mechanics...As to your last post: It depends on your frame of reference :p
EDIT: MY BAD!! Classical Physics section >.< Sorry
7. Feb 4, 2007
### Staff: Mentor
Oops... that equation as stated is incorrect
Note to Izzhov: I just realized (last night, after logging off) that there is an error in your original equation, which I have let propagate into my own. D'oh! So I will revise my answers accordingly. Stay tuned. (Same basic idea, though; perhaps you just miscopied the equation.)
Edited: Please note my corrections in posts 2 and 4!
Last edited: Feb 4, 2007
8. Feb 4, 2007
### Staff: Mentor
The distribution of KE depends on the relative size of the masses. If they were equal, then the KE would be distributed evenly. If they were wildly different, like a bowling ball or rocket compared to the earth, then the bowling ball or rocket would get just about all of the KE. (That's the assumption made in your original equation.)
The key is that both energy and momentum must be conserved. If the two masses start from rest, their initial--and final--momentum must be zero.
9. Feb 5, 2007
### Izzhov
So essentially what you're saying is that if $$\int_{r_1}^{r_2} \frac{GMm}{r^2} dr$$ is the KE, then $$\frac{M}{M+m} \ast \int_{r_1}^{r_2} \frac{GMm}{r^2} dr$$ is the amount of KE that the smaller mass gets, and $$\frac{m}{M+m} \ast \int_{r_1}^{r_2} \frac{GMm}{r^2} dr$$ is the larger mass's KE, and that this can be derived using conservation of momentum. Is this correct?
Last edited: Feb 5, 2007
10. Feb 6, 2007
### Staff: Mentor
Exactly right!
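A quick numerical sketch of that split (the masses and radii below are illustrative assumptions, not values from the thread), checking the energy distribution against momentum conservation:

```python
G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2

M, m = 5.0e10, 1.0e10      # illustrative masses (kg)
r2, r1 = 2.0e3, 1.0e3      # starting and final separations (m), r2 > r1

# Total kinetic energy released: integral of G*M*m/r^2 dr from r1 to r2
KE_total = G * M * m * (1.0 / r1 - 1.0 / r2)

# Split the energy so that total momentum stays zero (both start at rest)
KE_small = M / (M + m) * KE_total   # share of the smaller mass m
KE_large = m / (M + m) * KE_total   # share of the larger mass M

v_small = (2 * KE_small / m) ** 0.5
v_large = (2 * KE_large / M) ** 0.5

# Momentum check: the two magnitudes should match (opposite directions)
print(m * v_small, M * v_large)
```

The two printed momenta agree, which is exactly the M/(M+m) and m/(M+m) split discussed above.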
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9658783674240112, "perplexity": 1181.8620891863527}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824820.28/warc/CC-MAIN-20171021152723-20171021172723-00133.warc.gz"}
https://yutsumura.com/difference-between-ring-homomorphisms-and-module-homomorphisms/ | # Difference Between Ring Homomorphisms and Module Homomorphisms
## Problem 422
Let $R$ be a ring with $1$ and consider $R$ as a module over itself.
(a) Determine whether every module homomorphism $\phi:R\to R$ is a ring homomorphism.
(b) Determine whether every ring homomorphism $\phi: R\to R$ is a module homomorphism.
(c) If $\phi:R\to R$ is both a module homomorphism and a ring homomorphism, what can we say about $\phi$?
## Solution.
### (a) Determine whether every module homomorphism $\phi:R\to R$ is a ring homomorphism.
Consider the ring of integers $R=\Z$. Then the map
$\phi: \Z \to \Z$ defined by
$\phi(x)=2x$ is a $\Z$-module homomorphism.
In fact, we have for $x, y, r\in R$
\begin{align*}
\phi(x+y)=2(x+y)=2x+2y=\phi(x)+\phi(y)
\end{align*}
and
\begin{align*}
\phi(rx)=2(rx)=r(2x)=r\phi(x).
\end{align*}
However, the map $\phi$ is not a ring homomorphism since $\phi(1)=2\neq 1$.
(Every ring homomorphism sends $1$ to itself.)
Thus, we conclude that not every module homomorphism $\phi:R\to R$ is a ring homomorphism.
### (b) Determine whether every ring homomorphism $\phi: R\to R$ is a module homomorphism.
Let us consider the polynomial ring $R=\Z[x]$.
Consider the map
$\phi:\Z[x] \to \Z[x]$ defined by
$\phi\left(\, f(x) \,\right)=f(x^2)$ for $f(x)\in \Z[x]$.
Then $\phi$ is a ring homomorphism because we have
\begin{align*}
&\phi\left(\, f(x)+g(x) \,\right)=f(x^2)+g(x^2)=\phi\left(\, f(x) \,\right)+\phi\left(\, g(x) \,\right), \text{ and }\\
&\phi\left(\, f(x)g(x) \,\right)=f(x^2)g(x^2)=\phi\left(\, f(x) \,\right)\phi\left(\, g(x)\right).
\end{align*}
However, the map $\phi$ is not a $\Z[x]$-module homomorphism. If it were a $\Z[x]$-module homomorphism, then we would have
\begin{align*}
&x^2=\phi(x)\\
&=x\phi(1) && \text{since $\phi$ is a $\Z[x]$-module homomorphism}\\
&=x\cdot 1 && \text{since $\phi$ is a ring homomorphism}\\
&=x,
\end{align*}
and thus we have $x=x^2$, which is a contradiction.
Thus, the conclusion is that not every ring homomorphism $\phi:R\to R$ is a module homomorphism.
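As a quick sanity check of the example in (b) (not part of the original solution), both ring-homomorphism properties and the failure of $\Z[x]$-linearity can be tested on sample polynomials with sympy:

```python
from sympy import symbols, expand, Integer

x = symbols('x')
f = 3*x**2 + x + 5                # sample polynomials in Z[x]
g = x**3 - 2*x + 1

phi = lambda p: p.subs(x, x**2)   # the map f(x) -> f(x^2)

# Ring-homomorphism properties hold on these samples:
assert expand(phi(f + g)) == expand(phi(f) + phi(g))
assert expand(phi(f * g)) == expand(phi(f) * phi(g))

# But phi is not Z[x]-linear: phi(x*1) = x**2 while x*phi(1) = x
assert phi(x * Integer(1)) == x**2
assert x * phi(Integer(1)) == x
```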
### (c) If $\phi:R\to R$ is both a module homomorphism and a ring homomorphism, what can we say about $\phi$?
Suppose that $\phi:R\to R$ is both a ring homomorphism and an $R$-module homomorphism.
Then for any $x\in R$, we have
\begin{align*}
&\phi(x)=x\phi(1) && \text{since $\phi$ is an $R$-module homomorphism}\\
&=x\cdot 1 && \text{since $\phi$ is a ring homomorphism}\\
&=x,
\end{align*}
and it follows that $\phi$ must be the identity map.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9724916219711304, "perplexity": 350.0562397767922}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401617641.86/warc/CC-MAIN-20200928234043-20200929024043-00298.warc.gz"}
https://seamo-practice.com/student/test/3 | # Question 1 of 13
A new operation is defined as \begin{align*} 2 \oplus 4 &= 2+3+4+5 = 14 \\ 5\oplus 3 &= 5+6+7=18 \end{align*} Find the value of $m$ in $m \oplus 7 = 49$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000100135803223, "perplexity": 496.13369743557354}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500154.33/warc/CC-MAIN-20230204205328-20230204235328-00495.warc.gz"} |
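For reference, a worked solution (not shown on the original page): the operation $a \oplus b$ adds $b$ consecutive integers starting at $a$, so
\begin{align*} m \oplus 7 = m + (m+1) + \cdots + (m+6) = 7m + 21. \end{align*}
Setting $7m + 21 = 49$ gives $m = 4$; indeed $4+5+6+7+8+9+10 = 49$.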
http://slideplayer.com/slide/4261586/ | # 7.3 Day One: Volumes by Slicing. 3 3 3 Find the volume of the pyramid: Consider a horizontal slice through the pyramid. s dh The volume of the slice.
## Presentation on theme: "7.3 Day One: Volumes by Slicing. 3 3 3 Find the volume of the pyramid: Consider a horizontal slice through the pyramid. s dh The volume of the slice."— Presentation transcript:
7.3 Day One: Volumes by Slicing
Find the volume of a pyramid whose base is a 3 × 3 square and whose height is 3: Consider a horizontal slice through the pyramid, with side s and thickness dh. The volume of the slice is s² dh. If we put zero at the top of the pyramid and make down the positive direction, then s = h, with h running from 0 to 3. This correlates with the formula:
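(The formula on the missing slide image is presumably the slice integral: with s = h, the slice volumes add up to the integral of h² dh from 0 to 3, which equals 9 and agrees with (1/3) × base area × height = (1/3) · 9 · 3 = 9.)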
Method of Slicing: 1 Find a formula for V ( x ). (Note that I used V ( x ) instead of A(x).) Sketch the solid and a typical cross section. 2 3 Find the limits of integration. 4 Integrate V ( x ) to find volume.
A 45° wedge is cut from a cylinder of radius 3 as shown. Find the volume of the wedge. You could slice this wedge shape several ways, but the simplest cross section is a rectangle. If we let h equal the height of the slice, then the volume of the slice is: Since the wedge is cut at a 45° angle:
Even though we started with a cylinder, π does not enter the calculation!
Cavalieri’s Theorem: Two solids with equal altitudes and identical parallel cross sections have the same volume. Identical Cross Sections
Cavalieri’s Theorem: Volume of a Sphere
7.3 Disk and Washer Methods
Suppose I start with this curve. My boss at the ACME Rocket Company has assigned me to build a nose cone in this shape. So I put a piece of wood in a lathe and turn it to a shape to match the curve.
How could we find the volume of the cone? One way would be to cut it into a series of thin slices (flat cylinders) and add their volumes. The volume of each flat cylinder (disk) is: In this case, r = the y value of the function and the thickness = a small change in x = dx.
The volume of each flat cylinder (disk) is: If we add the volumes, we get:
This application of the method of slicing is called the disk method. The shape of the slice is a disk, so we use the formula for the area of a circle to find the volume of the disk. If the shape is rotated about the x-axis, then the formula is: A shape rotated about the y-axis would be:
The region between the curve and the y-axis is revolved about the y-axis. Find the volume. We use a horizontal disk. The thickness is dy. The radius is the x value of the function. The volume of the disk is:
The natural draft cooling tower shown at left is about 500 feet high and its shape can be approximated by the graph of this equation revolved about the y-axis: The volume can be calculated using the disk method with a horizontal disk.
The region bounded by and is revolved about the y-axis. Find the volume. The “disk” now has a hole in it, making it a “washer”. If we use a horizontal slice: The volume of the washer is:
This application of the method of slicing is called the washer method. The shape of the slice is a circle with a hole in it, so we subtract the area of the inner circle from the area of the outer circle. The washer method formula is:
If the same region is rotated about the line x = 2: The outer radius is R. The inner radius is r.
Washer Cross Section The region in the first quadrant enclosed by the y-axis and the graphs of y = cos x and y = sin x is revolved about the x-axis to form a solid. Find its volume.
7.3 The Shell Method
Find the volume of the region bounded by , , and revolved about the y-axis. We can use the washer method if we split it into two parts (outer radius, inner radius, thickness of slice).
If we take a vertical slice and revolve it about the y-axis we get a cylinder. If we add all of the cylinders together, we can reconstruct the original object. Here is another way we could approach this problem:
The volume of a thin, hollow cylinder is given by: r is the x value of the function, h is the y value of the function, and the thickness is dx.
If we add all the cylinders from the smallest to the largest: This is called the shell method because we use cylindrical shells.
Find the volume generated when this shape is revolved about the y axis. We can’t solve for x, so we can’t use a horizontal slice directly.
Shell method: If we take a vertical slice and revolve it about the y-axis we get a cylinder.
Note: When entering this into the calculator, be sure to enter the multiplication symbol before the parenthesis.
When the strip is parallel to the axis of rotation, use the shell method. When the strip is perpendicular to the axis of rotation, use the washer method.
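A quick numerical cross-check of that rule of thumb (a sketch, not from the slides; it assumes the region bounded by y = x², the x-axis, and x = 1, revolved about the y-axis):

```python
from math import pi
from scipy.integrate import quad

# Region: y = x**2 for 0 <= x <= 1, between the curve and the x-axis,
# revolved about the y-axis.

# Shell method: vertical strips are parallel to the axis of rotation.
V_shell, _ = quad(lambda x: 2 * pi * x * x**2, 0, 1)

# Washer method: horizontal strips are perpendicular to the axis of rotation.
# At height y the region runs from x = sqrt(y) (inner) to x = 1 (outer).
V_washer, _ = quad(lambda y: pi * (1**2 - (y**0.5)**2), 0, 1)

print(V_shell, V_washer, pi / 2)   # all three values agree: pi/2
```

Both integrals return π/2, illustrating that the two methods are just different slicings of the same solid.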
Find the volume of the solid when the region bounded by the curve y =, the x-axis, and the line x = 4 is revolved about the x-axis. Find the volume of the solid using cylindrical shells.
Find the volume of the solid of revolution formed by revolving the region bounded by the graph of and the y axis, 0 ≤ y ≤ 1, about the x-axis. Use the Shell Method.
Find the volume of the solid formed by revolving the region bounded by the graphs y = x 3 + x + 1, y = 1, and x = 1 about the line x = 2.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9008994698524475, "perplexity": 354.1737205300686}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645550.13/warc/CC-MAIN-20180318071715-20180318091715-00366.warc.gz"}
https://books.aijr.org/index.php/press/catalog/book/34/chapter/210 | # Solar Hydrogen Production System Simulation Using PSCAD
## Authors
Matouk M Elamari
Electronic Department, Engineering Academy, Tajoura, Libya
### Synopsis
Hydrogen is a potential future energy-storage medium to supplement a variety of renewable energy sources. It can be regarded as an environmentally friendly fuel, especially when it is extracted from water using electricity obtained from solar panels or wind turbines. One of the challenges in producing hydrogen from solar energy is reducing the overall cost. It is therefore important that the system operates at maximum power. In this paper, a PSCAD computer simulation of a water-splitting hydrogen-production system is presented. The hydrogen-production system is powered by a photovoltaic (PV) array driving a proton exchange membrane (PEM) electrolyser. Optimal matching between the PV system and the electrolyser is essential to maximise the transfer of electrical energy and the rate of hydrogen production. A DC/DC buck converter is used for power matching by shifting the PEM electrolyser I-V curve as closely as possible toward the maximum power the PV can deliver. The simulation shows that the hydrogen production of the PV-electrolyser system can be optimised by adjusting the converter duty cycle generated by the PWM circuit.
Published
November 30, 2018
Series
Online ISSN
2582-3922 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9413210153579712, "perplexity": 1727.6997647390504}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057496.18/warc/CC-MAIN-20210924020020-20210924050020-00717.warc.gz"} |
http://mathhelpforum.com/statistics/187014-poisson-distribution-question-print.html | # Poisson Distribution question
• August 31st 2011, 12:21 AM
Paymemoney
Poisson Distribution question
Hi
Can someone tell me if my answers are correct?
A machine makes a special type of lining brick at the rate of 25 per hour. Overall about 5% of the bricks are
defective. Calculate:
The probability that no defective bricks are produced in any one hour?
Now this is a Poisson distribution, so I calculated it using the calculator function
poissonpdf(1.25,0) = 0.2865
the probability that two defective bricks are produced in any one hour?
poissonpdf(1.25,2) = 0.2238
the probability that at least one defective brick is produced in the next hour?
P.S
• August 31st 2011, 01:36 PM
chisigma
Re: Poisson Distribution question
Quote:
Originally Posted by Paymemoney
Hi
Can someone tell me if my answers are correct?
A machine makes a special type of lining brick at the rate of 25 per hour. Overall about 5% of the bricks are
defective. Calculate:
The probability that no defective bricks are produced in any one hour?
Now this is a Poisson distribution, so I calculated it using the calculator function
poissonpdf(1.25,0) = 0.2865
the probability that two defective bricks are produced in any one hour?
poissonpdf(1.25,2) = 0.2238
the probability that at least one defective brick is produced in the next hour?
P.S
The probability of k events in n trials follows a binomial distribution, even if for 'rare events' the Poisson distribution [computationally less problematic...] gives acceptable precision. The probability of having k defective bricks in a stock of n, if p is the probability of a single defective brick, is...
$P(k,n)= \binom{n}{k}\ p^{k}\ (1-p)^{n-k}$ (1)
If p=.05 and n=25, then (1) gives $P(0,25)= .277389573...$ and $P(2,25)=.23051765079...$. The probability of at least one defective brick in one hour is of course $1-P(0,25)= .722610426878...$
Kind regards
$\chi$ $\sigma$
• August 31st 2011, 03:54 PM
Paymemoney
Re: Poisson Distribution question
but aren't events that occur at random with a constant average per some unit, such as time, length, or area, modeled as a Poisson distribution?
• September 1st 2011, 02:03 AM
chisigma
Re: Poisson Distribution question
Quote:
Originally Posted by Paymemoney
but aren't events that occur at random with a constant average per some unit, such as time, length, or area, modeled as a Poisson distribution?
The real probability distribution of this type of process is binomial, even if in many practical situations the Poisson distribution gives an excellent approximation. Binomial and Poisson distributions give practically the same results when $\lambda = p\ n <<1$. In your case $\lambda= p\ n = 1.25$, so the results are slightly different. Here you can observe the values of...
$P_{b} (k,n) = \binom {n}{k}\ p^{k}\ (1-p)^{n-k}$ (1)
$P_{p}(k,\lambda)= \frac{\lambda^{k}\ e^{-\lambda}}{k!}$ (2)
... computed for k from 0 to 9 with $\lambda= p\ n= 1.25$...
$k=0\ ,\ P_{b}= .2773895\ ,\ P_{p}= .2865048$
$k=1\ ,\ P_{b}= .3649863\ ,\ P_{p}= .358131$
$k=2\ ,\ P_{b}= .2305176\ ,\ P_{p}= .2238318$
$k=3\ ,\ P_{b}= .09301589\ ,\ P_{p}= .09326328$
$k=4\ ,\ P_{b}= .02692565\ ,\ P_{p}= .02914477$
$k=5\ ,\ P_{b}= .0059519866\ ,\ P_{p}= .0072861937$
$k=6\ ,\ P_{b}= .001044208\ ,\ P_{p}= .0015177957$
$k=7\ ,\ P_{b}= .0001491726\ ,\ P_{p}= .00027106375$
$k=8\ ,\ P_{b}= 1.76651758\ 10^{-5}\ ,\ P_{p}= 4.23537119\ 10^{-5}$
$k=9\ ,\ P_{b}= 1.75618707\ 10^{-6}\ ,\ P_{p}= 5.88246\ 10^{-6}$
The most evident difference between the two is the fact that, for 'large' values of k, $P_{b}$ decreases much more steeply than $P_{p}$, and the reason for that is obvious...
Kind regards
$\chi$ $\sigma$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 25, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9567934274673462, "perplexity": 1105.1682707143727}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397842.93/warc/CC-MAIN-20160624154957-00201-ip-10-164-35-72.ec2.internal.warc.gz"} |
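For anyone who wants to reproduce the comparison, a short scipy sketch (not part of the thread itself) that prints the two columns of the table and the "at least one defective brick" probability:

```python
from scipy.stats import binom, poisson

n, p = 25, 0.05
lam = n * p                      # 1.25 expected defective bricks per hour

for k in range(10):
    pb = binom.pmf(k, n, p)      # exact binomial probability
    pp = poisson.pmf(k, lam)     # Poisson approximation
    print(f"k={k}  P_b={pb:.7g}  P_p={pp:.7g}")

# Probability of at least one defective brick in the next hour
print(1 - binom.pmf(0, n, p))    # about 0.7226
```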