url (stringlengths 15–1.13k) | text (stringlengths 100–1.04M) | metadata (stringlengths 1.06k–1.1k)
---|---|---|
http://tel.archives-ouvertes.fr/view_by_stamp.php?label=INSMI&langue=en&action_todo=view&id=hal-00726386&version=1 |
Detailed view Article in peer-reviewed journal
Archive for Rational Mechanics and Analysis 186, 2 (2007) 309-349
Attached file list to this document:
PDF
bc6.pdf(299.3 KB)
ON THE FUNDAMENTAL SOLUTION OF A LINEARIZED UEHLING-UHLENBECK EQUATION
In this paper we describe the fundamental solution of the equation obtained by linearizing the Uehling-Uhlenbeck equation around the steady state of Kolmogorov type f(k) = k^{-7/6}. Detailed estimates on its asymptotics are obtained.
hal-00726386, version 1 http://hal.archives-ouvertes.fr/hal-00726386 oai:hal.archives-ouvertes.fr:hal-00726386 From: Stéphane Mischler <> Submitted on: Thursday, 30 August 2012 23:02:54 Updated on: Friday, 31 August 2012 08:48:44 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8009263873100281, "perplexity": 3940.6397161472196}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510268734.38/warc/CC-MAIN-20140728011748-00493-ip-10-146-231-18.ec2.internal.warc.gz"} |
http://math.stackexchange.com/questions/191310/find-the-inverse-matrix-a-1-using-gauss-elimination-without-lu-decomposit/239042 | # Find the inverse matrix $A^{-1}$ , using Gauss-Elimination without LU decomposition (and without Gauss-Jordan)?
Given a non-singular matrix $A$, is it possible to find the inverse matrix $A^{-1}$ using Gauss elimination, without using LU decomposition and without using Gauss-Jordan?
I know that I can use LU decomposition and then apply Gauss elimination on $L$ and $U$; this would require:
1. Finding $L$ and $U$
2. Solving $L*Y = e(i)$, from which I'd get $Y$
3. Solving $U*(current-column) = Y$, which gives me one column at a time
Or, I can use the Gauss-Jordan method (without LU decomposition), where I put the $I$ matrix on the right of $A$ and then run the Gauss-Jordan elimination.
Both ways work great, but is it possible to calculate the inverse of $A$ with Gauss elimination only, without LU ... ?
Regards
I will make a guess that the problem actually wants you to find an inverse of $A$ given a method to compute $A^{-1}b$ for any vector $b$, because that's what Gaussian elimination does.
Assume $A$ is a non-singular $n$-by-$n$ real matrix. Let $e_i$ be the $i$-th basis vector of the standard basis of $\mathbb R^n$. Use the given method (Gaussian elimination in this case) to solve for $x_i$ from $Ax_i = e_i$, $i = 1, 2, \ldots, n$. The matrix $[x_1 \ x_2 \ \ldots \ x_n]$ will be $A^{-1}$.
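A minimal sketch of this column-by-column idea in Python/NumPy (my own illustration, not from the answer): one Gaussian elimination with partial pivoting is performed on the augmented matrix $[A \mid I]$, and back substitution then yields every column of $A^{-1}$.

```python
import numpy as np

def inverse_via_gaussian_elimination(A):
    """Build A^{-1} by solving A x_i = e_i for every standard basis vector e_i."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    aug = np.hstack([A, np.eye(n)])          # augmented matrix [A | I]

    # Forward elimination with partial pivoting.
    for k in range(n):
        p = k + np.argmax(np.abs(aug[k:, k]))    # pivot row
        if abs(aug[p, k]) < 1e-12:
            raise ValueError("matrix is (numerically) singular")
        aug[[k, p]] = aug[[p, k]]
        for i in range(k + 1, n):
            factor = aug[i, k] / aug[k, k]
            aug[i, k:] -= factor * aug[k, k:]

    # Back substitution for every right-hand side e_i simultaneously.
    X = np.zeros((n, n))
    for j in range(n):                        # j-th column of the inverse
        for i in range(n - 1, -1, -1):
            s = aug[i, n + j] - aug[i, i + 1:n] @ X[i + 1:, j]
            X[i, j] = s / aug[i, i]
    return X

if __name__ == "__main__":
    A = np.array([[4.0, 3.0], [6.0, 3.0]])
    print(np.allclose(inverse_via_gaussian_elimination(A) @ A, np.eye(2)))  # True
```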
First triangulate the matrix using Gauss elimination with pivoting. Then solve $\textbf{Ax} = \textbf{I}$, $\textbf{I}$ being the first column of the identity matrix. The solution $\textbf{x}$ will be the first column of $\textbf{A}^{-1}$. Do the same for the next column of the identity matrix. Each successive solution for $\textbf{x}$ will be the next column of $\textbf{A}^{-1}$. You can do this for as big a matrix as you like. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9828600287437439, "perplexity": 171.87106237871558}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657104119.19/warc/CC-MAIN-20140914011144-00228-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
https://electronicspost.com/zener-diode-voltage-regulator/ | # Zener Diode Voltage Regulator
## Zener Diode Voltage Regulator
The zener diode voltage regulator is based on a particular characteristic of zener diodes. When the zener diode is operated in the breakdown or zener region, the voltage across it is substantially constant for a large change of current through it. And it is this characteristic of a zener diode that permits it to be used as a voltage regulator.
### Circuit Diagram of Zener Diode Voltage Regulator
As long as input voltage Vin is greater than zener voltage VZ , the zener operates in the breakdown region and maintains constant voltage across the load. The series limiting resistance RS limits the input current.
The zener will maintain a constant voltage across the load in spite of changes in load current or input voltage. As the load current increases, the zener current decreases so that the current through resistance RS is constant. Since output voltage = Vin – I·RS and I is constant, the output voltage remains unchanged. The reverse would be true should the load current decrease. The circuit will also correct for changes in input voltage. Should the input
voltage Vin increase, more current will flow through the zener, the voltage drop across RS will increase but load voltage would remain constant. The reverse would be true should the input voltage decrease.
### Drawbacks of Zener Diode Voltage Regulator
• It has low efficiency for heavy load currents. It is because if the load current is large, there will be considerable power loss in the series limiting resistance.
• The output voltage slightly changes due to zener impedance as Vout = VZ + IZ ZZ. Changes in load current produce changes in zener current. Consequently, the output voltage also changes. Therefore, the use of this circuit is limited to only such applications where variations in load current and input voltage are small.
### Conditions Necessary for The Proper Operation of Zener Diode Voltage Regulator
• The zener must operate in the breakdown region or regulating region i.e. between IZ (max) and IZ (min). The current IZ (min) (generally 10 mA) is the minimum zener current to put the zener diode in the ON state i.e. regulating region. The current IZ (max) is the maximum zener current that zener diode can conduct without getting destroyed due to excessive heat.
• The zener should not be allowed to exceed its maximum power dissipation, otherwise it will be destroyed due to excessive heat. If the maximum power dissipation of a zener is PZ (max) and the zener voltage is VZ, then the maximum allowable zener current is IZ (max) = PZ (max) / VZ. (A numerical sketch follows this list.)
• There is a minimum value of RL to ensure that zener diode will remain in the regulating region i.e. breakdown region. If the value of RL falls below this minimum value, the proper voltage will not be available across the zener to drive it into the breakdown region.
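The conditions above lend themselves to a quick numerical check. Below is a minimal Python sketch; the component values, the assumed ideal zener behaviour (Vout = VZ while regulating) and the IZ limits are made-up illustration numbers, not values taken from this article.

```python
def zener_check(Vin, Vz, Rs, RL, Iz_min=0.010, Iz_max=0.100):
    """Check whether a series-resistor zener regulator stays in the regulating region.

    Vin, Vz in volts; Rs, RL in ohms; currents in amperes. The zener is treated
    as ideal, i.e. Vout = Vz while it operates in the breakdown region.
    """
    I_total = (Vin - Vz) / Rs      # current through the series resistor RS
    I_load = Vz / RL               # current drawn by the load
    I_zener = I_total - I_load     # the remainder flows through the zener

    regulating = Iz_min <= I_zener <= Iz_max
    # Minimum load resistance that still leaves Iz_min for the zener:
    RL_min = Vz / (I_total - Iz_min)
    return I_zener, regulating, RL_min

# Example: 12 V input, 5.1 V zener, 100 ohm series resistor, 220 ohm load.
Iz, ok, RL_min = zener_check(Vin=12.0, Vz=5.1, Rs=100.0, RL=220.0)
print(f"Iz = {Iz*1000:.1f} mA, regulating: {ok}, minimum RL = {RL_min:.0f} ohm")
```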
You Might Like The Following Articles | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8241723775863647, "perplexity": 2483.430916135816}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573080.8/warc/CC-MAIN-20190917141045-20190917163045-00289.warc.gz"} |
http://www.9math.com/book/multiply-11-fast | ## Multiply with 11 fast
Is there a way to multiply a two-digit number by 11 fast?
Multiplying a two-digit number by 11 is actually very simple: you just add the two digits and put the sum in the middle.
23*11=?
We just add 2 and 3 and put the result in the middle:
2+3=5
Now we put 5 between 2 and 3:
23*11=253
Other examples:
34*11=374
72*11=792
22*11=242
53*11=583
54*11=594
But what if the sum of the two digits is bigger than 9? Do we still put the result in the middle?
Yes, but only the last digit of the sum. And we add 1 to the first digit of the initial number.
57*11=?
5+7=12
We put the last digit of the sum in the middle and we add 1 to the first digit of the initial number:
57*11=627
because:
5+1=6
2 is the last digit of 12
7 is the last digit of the initial number
What if the sum of the two digits is bigger than 9 and the first digit is 9?
95*11=?
9+5=14
We put the last digit of the sum in the middle and add 1 to the first digit of the initial number:
95*11=1045
In this case we get a four-digit number that starts with the two digits 10.
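If you want to check the rule programmatically, here is a small sketch (not part of the original article); it applies the rule exactly as described above and compares it against ordinary multiplication:

```python
def times_11(n):
    """Multiply a two-digit number by 11 using the digit trick."""
    assert 10 <= n <= 99, "the trick as described is for two-digit numbers"
    first, last = divmod(n, 10)      # the two digits
    s = first + last                 # their sum
    if s <= 9:                       # the sum goes straight in the middle
        return first * 100 + s * 10 + last
    # otherwise carry: first digit + 1, last digit of the sum in the middle
    return (first + 1) * 100 + (s - 10) * 10 + last

# Check the rule against ordinary multiplication for every two-digit number.
assert all(times_11(n) == n * 11 for n in range(10, 100))
print(times_11(23), times_11(57), times_11(95))   # 253 627 1045
```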
Tests:
Part 1
24*11=?
26*11=?
31*11=?
17*11=?
36*11=?
81*11=?
45*11=?
Part 2
68*11=?
49*11=?
87*11=?
29*11=?
65*11=?
79*11=?
89*11=?
Part 3
93*11=?
97*11=?
95*11=?
Combined
33*11=?
59*11=?
98*11=?
84*11=?
94*11=?
34*11=?
How does this work?
Let's say we have a two-digit number with first digit a and second digit b, that is, the number 10*a+b.
Multiplying by 11 gives 11*(10*a+b) = 110*a + 11*b = 100*a + 10*(a+b) + b.
If a+b <= 9, then the result is the three-digit number with digits a, (a+b), b: the sum simply goes in the middle.
If a+b > 9, then the ten carries over: 100*a + 10*(a+b) + b = 100*(a+1) + 10*(a+b-10) + b, so the first digit increases by 1 and only the last digit of the sum goes in the middle. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9051085710525513, "perplexity": 319.6263257404246}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917125841.92/warc/CC-MAIN-20170423031205-00560-ip-10-145-167-34.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/linear-algebra-determinant.579827/ | # Homework Help: Linear Algebra: Determinant
1. Feb 21, 2012
### drosales
I need help with another homework problem
Let n be a positive integer and A an n×n matrix such that det(A+B) = det(B) for all n×n matrices B. Show that A = 0.
Hint: prove that this property continues to hold if A is modified by any finite number of elementary row or column operations.
It seems obvious that A=0 but I'm having trouble developing the proof. Any help would be great.
2. Feb 21, 2012
### micromass
Please post homework in the homework forum. I moved it for you now.
A hint for the proof: can you write a row/column operation as an elementary matrix??
3. Feb 21, 2012
### drosales
Yes and the product of the elementary matrices returns
A=E1*E2*..*En
is this what you are referring to?
4. Feb 21, 2012
### micromass
Yes. Let E be an elementary matrix, can you show that
$$det(EA+B)=det(B)$$
??
5. Feb 21, 2012
### drosales
I'm not quite sure how to show this.
6. Feb 21, 2012
### micromass
Hint: $B=EE^{-1}B$.
Use that $det(XY)=det(X)det(Y)$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8990578651428223, "perplexity": 2406.4653209776907}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583513804.32/warc/CC-MAIN-20181021073314-20181021094814-00355.warc.gz"} |
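For completeness, here is a sketch (my addition, not from the thread) of the computation the hints point towards; it uses that an elementary matrix $E$ is invertible and applies the hypothesis to $E^{-1}B$:

$$\det(EA+B)=\det\big(E(A+E^{-1}B)\big)=\det(E)\det(A+E^{-1}B)=\det(E)\det(E^{-1}B)=\det(EE^{-1}B)=\det(B).$$

Hence the hypothesis survives row (and, analogously, column) operations on $A$. Reducing $A$ to the form $\begin{pmatrix} I_r & 0 \\ 0 & 0\end{pmatrix}$ and choosing $B = \begin{pmatrix} 0 & 0 \\ 0 & I_{n-r}\end{pmatrix}$ gives $\det(A+B)=1\neq 0=\det(B)$ whenever $r\geq 1$, so $A=0$.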
http://hal.in2p3.fr/in2p3-00428892 | # CP violation and kaon-pion interactions in B-->Kpi+pi- decays
Abstract : We study CP violation and the contribution of the strong kaon-pion interactions in the three-body B-->Kpi+pi- decays. We extend our recent work on the effect of the two-pion S- and P-wave interactions to that of the corresponding kaon-pion ones. The weak amplitudes have a first term derived in QCD factorization and a second one as a phenomenological contribution added to the QCD penguin amplitudes. The effective QCD coefficients include the leading order contributions plus next-to-leading order vertex and penguins corrections. The matrix elements of the transition to the vacuum of the kaon-pion pairs, appearing naturally in the factorization formulation, are described by the strange Kpi scalar (S-wave) and vector (P-wave) form factors. These are determined from Muskhelishvili-Omnès coupled channel equations using experimental kaon-pion T-matrix elements, together with chiral symmetry and asymptotic QCD constraints. From the scalar form factor study, the modulus of the K0*(1430) decay constant is found to be (32±5) MeV. The additional phenomenological amplitudes are fitted to reproduce the Kpi effective mass and helicity angle distributions, the B-->K*(892)pi branching ratios and the CP asymmetries of the recent data from Belle and BABAR collaborations. We use also the new measurement by the BABAR group of the phase difference between the B0 and [overline B]0 decay amplitudes to K*(892)pi. Our predicted B±-->K0*(1430)pi±, K0*(1430)-->K±pi-/+ branching fraction, equal to (11.6±0.6)×10-6, is smaller than the result of the analyzes of both collaborations. For the neutral B0 decays, the predicted value is (11.1±0.5)×10-6. In order to reduce the large systematic uncertainties in the experimental determination of the B-->K0*(1430)pi branching fractions, a new parametrization is proposed. It is based on the Kpi scalar form factor, well constrained by theory and experiments other than those of B decays.
Document type :
Journal articles
http://hal.in2p3.fr/in2p3-00428892
Contributor : Sophie Heurteau <>
Submitted on : Thursday, October 29, 2009 - 5:45:49 PM
Last modification on : Wednesday, September 16, 2020 - 4:07:55 PM
### Citation
B. El-Bennich, A. Furman, R. Kamiński, L. Lesniak, B. Loiseau, et al.. CP violation and kaon-pion interactions in B-->Kpi+pi- decays. Physical Review D, American Physical Society, 2009, 79, 094005 (28 p.). ⟨10.1103/PhysRevD.79.094005⟩. ⟨in2p3-00428892⟩
Record views | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.91923588514328, "perplexity": 3694.7405845847115}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154356.39/warc/CC-MAIN-20210802172339-20210802202339-00573.warc.gz"} |
https://github.com/adc-connect/adcc/blob/master/docs/theory.rst |
## Theoretical review of ADC methods
The polarisation propagator \Pi(\omega) is a quantity from many-body perturbation theory :cite:Fetter1971. Its relationship to electronically excited states spectra can be understood from the fact that its poles are located exactly at the vertical excitation energies \omega_n = E_n - E_0 :cite:Fetter1971,Schirmer1982. Here, E_0 is the energy of the ground state of the exact N-electron Hamiltonian \Op{H}, and E_n is the energy corresponding to excited state \ket{\Psi_n}. The structure of \Pi(\omega) close to these poles depends both on the ground state \ket{\Psi_0} and excited state \ket{\Psi_n}, such that, e.g., transition properties :cite:Schirmer1982,Schirmer2018 may be extracted from \Pi(\omega) as well.
Taking this as a starting point, the algebraic-diagrammatic construction scheme for the polarisation propagator (ADC) examines an alternative representation of the polarisation propagator :cite:Schirmer1982, the so-called intermediate-state representation (ISR). In this formalism, a set of creation and annihilation operators is applied to the exact ground state and the resulting precursor states are orthogonalised block-wise according to excitation class :cite:Schirmer1991. This procedure yields the so-called intermediate states \left\{ \ket{\tilde{\Psi}_I} \right\}_I, which are then employed to represent the polarisation propagator. A careful inspection of the resulting expression for \Pi(\omega) and its poles allows to relate the intermediate states (IS) to the excited states \ket{\Psi_n} of the Hamiltonian :cite:Schirmer1991 by a unitary transformation
\ket{\Psi_n} = \sum_{I} X_{I,n} \ket{\tilde{\Psi}_I}.
The expansion coefficients \mat{X} satisfy a Hermitian eigenvalue problem :cite:Schirmer1991
\mat{M} \mat{X} = \mat{X} \mat{\Omega},
where \Omega_{nm} = \delta_{nm} \omega_n is the diagonal matrix of excitation energies and \mat{M} is the so-called ADC matrix. Its elements are directly accessible by representing the shifted Hamiltonian using IS, namely
M_{IJ} = \braket{\tilde{\Psi}_I}{\left(\Op{H} - E_0\right) \tilde{\Psi}_J}.
From the ADC eigenvectors \mat{X} in the IS basis, one may compute the density matrix \mat{\rho}^{n} for an excited state n or the transition density matrices \mat{\rho}^{n\leftarrow m} between states m and n, in the molecular orbital (MO) basis :cite:Schirmer2004,Wormit2014. Contracting these densities with the MO representation \mat{O} of a one-particle operator \Op{O} allows one to compute arbitrary state properties T^{n} or transition properties T^{n\leftarrow m} through
\begin{aligned}
T^{n} &= \braket{\Psi_n}{\Op{O} \Psi_n}
= \tr (\mat{O} \mat{\rho}^{n}) \\
T^{n\leftarrow m} &= \braket{\Psi_n}{\Op{O} \Psi_m}
= \tr (\mat{O} \mat{\rho}^{n\leftarrow m}). \\
\end{aligned}
In this way, e.g., the MO representation of the dipole operator may be contracted with \mat{\rho}^{n\leftarrow m} to obtain the transition dipole moment between states m and n and from this the oscillator strength. Linear and non-linear molecular response properties, e.g., static polarizabilities or two-photon absorption cross-sections, are also accessible via this framework :cite:Trofimov2006,Knippenberg2012,Fransson2017.
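As a rough numerical illustration of this contraction, here is a small NumPy sketch computing a transition dipole moment and a length-gauge oscillator strength from a transition density matrix. It deliberately does not use the adcc API; the arrays are random stand-ins and the prefactor/convention may differ from the one used in the package.

```python
import numpy as np

def oscillator_strength(dipole_mo, tdm, omega):
    """Length-gauge oscillator strength from a transition density matrix.

    dipole_mo : three (n_mo, n_mo) arrays, MO representation of the x, y, z dipole components
    tdm       : (n_mo, n_mo) transition density matrix rho^{n<-0}
    omega     : excitation energy in atomic units
    """
    # Transition dipole components T_a = tr(O_a . rho), one per Cartesian direction.
    tdip = np.array([np.trace(O @ tdm) for O in dipole_mo])
    return 2.0 / 3.0 * omega * float(tdip @ tdip)

# Tiny fake example, just to show the shapes involved (random stand-in arrays).
rng = np.random.default_rng(0)
n_mo = 4
dips = [rng.standard_normal((n_mo, n_mo)) for _ in range(3)]
rho = rng.standard_normal((n_mo, n_mo))
print(oscillator_strength(dips, rho, omega=0.25))
```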
As described so far, the above formalism builds the IS basis on top of the exact N-electron ground state and is thus exact as well. For practical calculations, however, the ADC scheme is not applied to the exact ground state, but to a Møller-Plesset ground state at order n of perturbation theory. The resulting ADC method is named ADC(n) and is by construction consistent with an MP(n) ground state. Detailed derivations and the resulting expressions for the ADC matrix \mat{M} as well as the aforementioned densities \mat{\rho}^{n} and \mat{\rho}^{n\leftarrow m} for various orders can be found in the literature :cite:Schirmer1982,Schirmer1991,Wormit2014,Dreuw2014,Schirmer2018 and will not be discussed here.
Fig 1a. Schematic ADC matrix Fig 1b. ADC(2) matrix of STO-3G water Fig 1c. ADC(3) matrix of STO-3G water
As a result of the construction of ADC(n) as excitations on top of an MP(n) ground state, the matrix \mat{M} exhibits a block structure, shown in Figure 1a. In this structure the singles block is denoted M_{11}, the doubles block M_{22} and the coupling block M_{21}. One may construct perturbation expansions for the individual blocks as well. For example in ADC(2) the lower-right M_{22} block is only present in zeroth order. In ADC(3) on the other hand this block is present at first order, which makes it consistent with an MP(3) ground state. In contrast, ADC(2)-x is an *ad hoc* modification of ADC(2), where only the doubles-doubles block is treated at first order as in ADC(3), but the remaining blocks remain at the same order as in ADC(2) :cite:Dreuw2014.
On top of this block structure the individual blocks are sparse as well, see Figure 1b and c. This sparsity is a direct consequence of the selection rules obtained from spin and permutational symmetry in the tensor contractions required for computing \mat{M}. To exploit this sparsity when diagonalising the matrix :eq:eqn:adc_diagonalisation, adcc follows the conventional approach :cite:Dreuw2014,Wormit2014 to use contraction-based, iterative eigensolvers, such as the Jacobi-Davidson :cite:Davidson1975. Furthermore, all tensor operations in the required ADC matrix-vector products are performed on block-sparse tensors. For an optimal performance the spin and permutational symmetry of the ADC equations need to be taken into account when setting up the block tiling along the tensor axes. In this setting the computational scaling of ADC(2) is given as O(N^5) where N is the number of orbitals, whereas ADC(2)-x and ADC(3) scale as O(N^6). This procedure additionally ensures the numerical stability of the eigensolver with respect to the excitation manifold. That is to say, that (for restricted references) spin-pure guess vectors always lead to eigenvectors \mat{X} from the same manifold, such that the excitation manifold to probe can be reliably selected via the guesses without employing a spin-adapted basis. :cite:Dreuw2014
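To make the contraction-based diagonalisation a bit more concrete, the following is a deliberately simplified dense-matrix Davidson sketch with a diagonal (Jacobi) preconditioner. It is my own illustration, not adcc code: no block-sparse tensors, no spin adaptation, just a matrix-vector product `matvec`, the matrix diagonal for preconditioning, and a growing subspace.

```python
import numpy as np

def davidson_lowest(matvec, diag, n_roots=3, n_guess=6, tol=1e-8, max_iter=100):
    """Minimal Davidson solver for the lowest eigenpairs of a symmetric matrix,
    given only its action matvec(v) and its diagonal diag (for preconditioning)."""
    n = diag.size
    # Guess vectors: unit vectors on the smallest diagonal entries.
    order = np.argsort(diag)[:n_guess]
    V = np.zeros((n, n_guess))
    V[order, np.arange(n_guess)] = 1.0

    for _ in range(max_iter):
        V, _ = np.linalg.qr(V)                       # orthonormalise the subspace
        AV = np.column_stack([matvec(v) for v in V.T])
        H = V.T @ AV                                 # subspace (Rayleigh) matrix
        theta, s = np.linalg.eigh(H)
        theta, s = theta[:n_roots], s[:, :n_roots]
        X = V @ s                                    # Ritz vectors
        residuals = AV @ s - X * theta
        norms = np.linalg.norm(residuals, axis=0)
        if np.all(norms < tol):
            return theta, X
        # Diagonal preconditioner; expand the subspace with the corrections.
        new_dirs = []
        for k in range(n_roots):
            if norms[k] >= tol:
                denom = diag - theta[k]
                denom[np.abs(denom) < 1e-8] = 1e-8
                new_dirs.append(residuals[:, k] / denom)
        V = np.column_stack([V] + new_dirs)
    raise RuntimeError("Davidson did not converge")

# Example on a random symmetric matrix (a dense stand-in for the ADC matrix).
rng = np.random.default_rng(1)
n = 200
A = np.diag(np.arange(1.0, n + 1)) + 1e-2 * rng.standard_normal((n, n))
A = 0.5 * (A + A.T)
vals, vecs = davidson_lowest(lambda v: A @ v, np.diag(A).copy(), n_roots=3)
print(np.allclose(vals, np.linalg.eigvalsh(A)[:3], atol=1e-6))  # True
```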
One important modifications of the ADC scheme as discussed above is the core-valence separation (CVS) :cite:Cederbaum1980,Trofimov2000,Wenzel2014b,Wenzel2014a,Wenzel2015. In this approximate ADC treatment targeting core-excited states, the strong localisation of the core electrons and the weak coupling between core-excited and valence-excited states is exploited to decouple and discard the valence excitations from the ADC matrix. This lowers the number of the actively treated orbitals and thus the computational demand for solving the ADC eigenproblem :eq:eqn:adc_diagonalisation. The validity of this approximation has been analysed in the literature and is backed up by computational studies comparing with experiment :cite:Norman2018,Fransson2019. With this, ADC can be used for considering core-excited states, and subsequent studies have also established the ability of calculating non-resonant X-ray emission spectra :cite:Fransson2019 and resonant inelastic X-ray scattering :cite:Rehn2017a. Other variants of ADC include spin-flip :cite:Lefrancois2015, where a modified Davidson guess allows to treat processes of simultaneous excitation and spin-flip, tackling few-reference problems in an elegant and consistent way :cite:Lefrancois2016,Lefrancois2017. Similar to other CI-like methods the range of orbitals which are considered for building the intermediate states may also be artificially truncated. For example, when considering valence-excitations, excitations from the core orbitals may be dropped leading to a frozen-core (FC) approximation. Similarly, high-energy virtual orbitals may be left unpopulated, leading to a frozen-virtual (FV) or restricted-virtual approximation :cite:Yang2017. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9228805303573608, "perplexity": 2371.7572934913774}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570767.11/warc/CC-MAIN-20220808061828-20220808091828-00146.warc.gz"} |
http://mathoverflow.net/questions/47854/the-number-of-hyperplanes-determining-the-integer-points-of-a-polyhedron | The number of hyperplanes determining the integer points of a polyhedron
This question is inspired by this one.
Let $P \subset \mathbb R_{+}^n$ be a convex polyhedron whose complement in $\mathbb R_{+}^n$ has finite volume. Let $Int(P) = P \cap \mathbb N^n$. (For motivation: they are the set of exponent vectors of integrally closed, $0$-dimensional monomial ideals in $\mathbb C[x_1,\cdots, x_n]$, but we probably don't need it here).
Question 1: Is there a nice characterization (perhaps using the corner points) of when we can find one hyperplane $H$ that separate $Int(P)$ and its complement in $\mathbb N^n$? Namely, such that $Int(P)$ is precisely the intersection of a closed half-space defined by $H$ and $\mathbb N^n$?
Question 2: More generally, we can look at the least number of hyperplanes needed to cut out $Int(P)$. Is such number studied in the literature? Any good algorithm to find it?
Some examples: for $n=2$, let $P$ be the convex hull of $\{(0,2); (1,1); (3,0)\}$. Then one can find $H : 2x+3y=5$. But for the convex hull of $\{(0,3); (1,1); (3,0)\}$, it is easy to see that one needs $2$ lines.
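A quick brute-force check of the first example (my own sketch, not from the post; in particular the inequality description of $P$ below is my reading of the example, namely the hull facets $x+y\ge 2$ and $x+2y\ge 3$ together with the positive quadrant):

```python
from itertools import product

# Facet description of P for the first example:
# P = conv{(0,2), (1,1), (3,0)} + R_+^2, i.e. x >= 0, y >= 0, x + y >= 2, x + 2y >= 3.
def in_P(x, y):
    return x >= 0 and y >= 0 and x + y >= 2 and x + 2 * y >= 3

# Claimed single separating hyperplane H: 2x + 3y = 5.
def in_halfspace(x, y):
    return 2 * x + 3 * y >= 5

# Brute-force comparison on a box of lattice points (points far out lie in both sets).
ok = all(in_P(x, y) == in_halfspace(x, y) for x, y in product(range(50), repeat=2))
print(ok)  # True: for this example one hyperplane cuts out exactly Int(P)
```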
The answer to both your questions is Gomory cuts. From a non-integer extreme point $x$ of $P$ it is easy to find a hyperplane $H$ which separates $x$ from $Int(P)$. Such a hyperplane is called a Gomory cut. It can be shown that by applying this procedure a finite number of times one can describe $Int(P)$ as a finite intersection of halfspaces. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.940200686454773, "perplexity": 112.39118691973295}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394021878262/warc/CC-MAIN-20140305121758-00040-ip-10-183-142-35.ec2.internal.warc.gz"}
https://unapologetic.wordpress.com/2010/08/31/some-banach-spaces/?like=1&source=post_flair&_wpnonce=671450efa2 | # The Unapologetic Mathematician
## Some Banach Spaces
To complete what we were saying about the $L^p$ spaces, we need to show that they’re complete. As it turns out, we can adapt the proof that mean convergence is complete, but we will take a somewhat different approach. It suffices to show that for any sequence of functions $\{u_n\}$ in $L^p$ so that the series of $L^p$-norms converges
$\displaystyle\sum\limits_{n=1}^\infty\lVert u_n\rVert_p<\infty$
the series of functions converges to some function $f\in L^p$.
For finite $p$, Minkowski’s inequality allows us to conclude that
$\displaystyle\int\left(\sum\limits_{n=1}^\infty\lvert u_n\rvert\right)^p\,d\mu=\left(\sum\limits_{n=1}^\infty\lVert u_n\rVert_p\right)^p<\infty$
The monotone convergence theorem now tells us that the limiting function
$\displaystyle f=\sum\limits_{n=1}^\infty u_n$
is defined a.e., and that $f\in L^p$. The dominated convergence theorem can now verify that the partial sums of the series are $L^p$-convergent to $f$:
$\displaystyle\int\left\lvert f-\sum\limits_{k=1}^n u_k\right\rvert^p\,d\mu\leq\int\left(\sum\limits_{l=n+1}^\infty\lvert u_l\rvert\right)^p\to0$
In the case $p=\infty$, we can write $c_n=\lVert u_n\rVert_\infty$. Then $\lvert u_n(x)\rvert\leq c_n$ except on some set $E_n$ of measure zero. The union of all the $E_n$ must also be negligible, and so we can throw it all out and just have $\lvert u_n(x)\rvert\leq c_n$. Now the series of the $c_n$ converges by assumption, and thus the series of the $u_n$ must converge to some function bounded by the sum of the $c_n$ (except on the union of the $E_n$).
August 31, 2010 - | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 23, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9971387982368469, "perplexity": 141.72155173518925}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398457799.57/warc/CC-MAIN-20151124205417-00160-ip-10-71-132-137.ec2.internal.warc.gz"} |
https://projecteuclid.org/euclid.die/1367414142 | ## Differential and Integral Equations
### Global existence and nonexistence for a parabolic system with nonlinear boundary conditions
#### Abstract
We find both necessary and sufficient conditions on the nonnegative increasing functions $f$ and $g$ so that positive solutions of $$\begin{matrix} u_t=\Delta u+h(u,v)\qquad &v_t=\Delta v+r(u,v) \qquad &\text {in }\Omega\times (0,T)\\ \frac{\partial}{\partial\nu} u=f(v) \qquad &\frac{\partial}{\partial\nu} v=g(u)\qquad &\text {on } \partial\Omega\times (0,T) \end{matrix}$$ exist globally in time. We assume throughout that $h$ and $r$ are nonnegative, smooth and $\frac{h(u,v)}u$, $\frac{r(u,v)}v$ are globally bounded.
#### Article information
Source
Differential Integral Equations, Volume 11, Number 1 (1998), 179-190.
Dates
First available in Project Euclid: 1 May 2013
Permanent link to this document
https://projecteuclid.org/euclid.die/1367414142
Mathematical Reviews number (MathSciNet)
MR1608013
Zentralblatt MATH identifier
1004.35012
#### Citation
Rossi, Julio D.; Wolanski, Noemi. Global existence and nonexistence for a parabolic system with nonlinear boundary conditions. Differential Integral Equations 11 (1998), no. 1, 179--190. https://projecteuclid.org/euclid.die/1367414142 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.908292293548584, "perplexity": 1145.959172296262}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583705091.62/warc/CC-MAIN-20190120082608-20190120104608-00104.warc.gz"} |
http://umj.imath.kiev.ua/article/?lang=en&article=5595 |
# A comonotonic theorem for backward stochastic differential equations in $L^p$ and its applications
Zong Z.-J.
Abstract
We study backward stochastic differential equations (BSDEs) under weak assumptions on the data. We obtain a comonotonic theorem for BSDEs in $L^p$, $1 < p ≤ 2$. As applications of this theorem, we study the relation between Choquet expectations and minimax expectations and the relation between Choquet expectations and generalized Peng’s $g$-expectations. These results generalize the known results of Chen et al.
English version (Springer): Ukrainian Mathematical Journal 64 (2012), no. 6, pp 857-874.
Citation Example: Zong Z.-J. A comonotonic theorem for backward stochastic differential equations in $L^p$ and its applications // Ukr. Mat. Zh. - 2012. - 64, № 6. - pp. 752-765.
Full text | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9662346839904785, "perplexity": 1239.0445219045366}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145529.37/warc/CC-MAIN-20200221111140-20200221141140-00057.warc.gz"} |
https://stats.libretexts.org/Courses/Saint_Mary's_College_Notre_Dame/MATH_345__-_Probability_(Kuter)/3%3A_Discrete_Random_Variables/3.8%3A_Moment-Generating_Functions_(MGFs)_for_Discrete_Random_Variables | # 3.8: Moment-Generating Functions (MGFs) for Discrete Random Variables
The expected value and variance of a random variable are actually special cases of a more general class of numerical characteristics for random variables given by moments.
### Definition $$\PageIndex{1}$$
The rth moment of a random variable $$X$$ is given by
$$\text{E}[X^r].\notag$$
The rth central moment of a random variable $$X$$ is given by
$$\text{E}[(X-\mu)^r],\notag$$
where $$\mu = \text{E}[X]$$.
Note that the expected value of a random variable is given by the first moment, i.e., when $$r=1$$. Also, the variance of a random variable is given by the second central moment.
As with expected value and variance, the moments of a random variable are used to characterize the distribution of the random variable and to compare the distribution to that of other random variables. Moments can be calculated directly from the definition, but, even for moderate values of $$r$$, this approach becomes cumbersome. The next definition and theorem provide an easier way to generate moments.
### Definition $$\PageIndex{2}$$
The moment-generating function (mgf) of a random variable $$X$$ is given by
$$M_X(t) = E[e^{tX}], \quad\text{for}\ t\in\mathbb{R}.\notag$$
### Theorem $$\PageIndex{1}$$
If random variable $$X$$ has mgf $$M_X(t)$$, then
$$M^{(r)}_X(0) = \frac{d^r}{dt^r}\left[M_X(t)\right]_{t=0} = \text{E}[X^r].\notag$$
In other words, the $$r^{\text{th}}$$ derivative of the mgf evaluated at $$t=0$$ gives the value of the $$r^{\text{th}}$$ moment.
Theorem 3.8.1 tells us how to derive the mgf of a random variable, since the mgf is given by taking the expected value of a function applied to the random variable:
$$M_X(t) = E[e^{tX}] = \sum_i e^{tx_i}\cdot p(x_i)\notag$$
We can now derive the first moment of the Poisson distribution, i.e., derive the fact we mentioned in Section 3.6, but left as an exercise, that the expected value is given by the parameter $$\lambda$$. We also find the variance.
### Example $$\PageIndex{1}$$
Let $$X\sim\text{Poisson}(\lambda)$$. Then, the pmf of $$X$$ is given by
$$p(x) = \frac{e^{-\lambda}\lambda^x}{x!}, \quad\text{for}\ x=0,1,2,\ldots.\notag$$
Before we derive the mgf for $$X$$, we recall from calculus the Taylor series expansion of the exponential function $$e^y$$:
$$e^y = \sum_{x=0}^{\infty} \frac{y^x}{x!}.\notag$$
Using this fact, we find
$$M_X(t) = \text{E}[e^{tX}] = \sum^{\infty}_{x=0} e^{tx}\cdot\frac{e^{-\lambda}\lambda^x}{x!} = e^{-\lambda}\sum^{\infty}_{x=0} \frac{(e^t\lambda)^x}{x!} = e^{-\lambda}e^{e^t\lambda} = e^{\lambda(e^t - 1)}.\notag$$
Now we take the first and second derivatives of $$M_X(t)$$. Remember we are differentiating with respect to $$t$$:
\begin{align*}
M'_X(t) &= \frac{d}{dt}\left[e^{\lambda(e^t - 1)}\right] = \lambda e^te^{\lambda(e^t - 1)} \\
M''_X(t) &= \frac{d}{dt}\left[\lambda e^te^{\lambda(e^t - 1)}\right] = \lambda e^te^{\lambda(e^t - 1)} + \lambda^2 e^{2t}e^{\lambda(e^t - 1)}
\end{align*}
Next we evaluate the derivatives at $$t=0$$ to find the first and second moments of $$X$$:
\begin{align*}
\text{E}[X] = M'_X(0) &= \lambda e^0e^{\lambda(e^0 - 1)} = \lambda \\
\text{E}[X^2] = M''_X(0) &= \lambda e^0e^{\lambda(e^0 - 1)} + \lambda^2 e^{0}e^{\lambda(e^0 - 1)} = \lambda + \lambda^2
\end{align*}
Finally, in order to find the variance, we use the alternate formula:
$$\text{Var}(X) = \text{E}[X^2] - \left(\text{E}[X]\right)^2 = \lambda + \lambda^2 - \lambda^2 = \lambda.\notag$$
Thus, we have shown that both the mean and the variance of the Poisson$$(\lambda)$$ distribution are given by the parameter $$\lambda$$.
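A quick symbolic check of this derivation (a sketch using SymPy, not part of the text):

```python
import sympy as sp

t, lam = sp.symbols('t lambda', positive=True)
M = sp.exp(lam * (sp.exp(t) - 1))        # mgf of the Poisson(lambda) distribution

EX  = sp.diff(M, t, 1).subs(t, 0)        # first moment E[X]
EX2 = sp.diff(M, t, 2).subs(t, 0)        # second moment E[X^2]
var = sp.simplify(EX2 - EX**2)           # Var(X) via the alternate formula

print(sp.simplify(EX))    # lambda
print(sp.expand(EX2))     # lambda**2 + lambda
print(var)                # lambda
```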
Note that the mgf of a random variable is a function of $$t$$. The main application of mgf's is to find the moments of a random variable, as the previous example demonstrated. There are more properties of mgf's that allow us to find moments for functions of random variables.
### Theorem $$\PageIndex{2}$$
Let $$X$$ be a random variable with mgf $$M_X(t)$$, and let $$a,b$$ be constants. If random variable $$Y= aX + b$$, then the mgf of $$Y$$ is given by
$$M_Y(t) = e^{bt}M_X(at).\notag$$
### Theorem $$\PageIndex{3}$$
If $$X_1, \ldots, X_n$$ are independent random variables with mgf's $$M_{X_1}(t), \ldots, M_{X_n}(t)$$, respectively, then the mgf of random variable $$Y = X_1 + \cdots + X_n$$ is given by
$$M_Y(t) = M_{X_1}(t) \cdots M_{X_n}(t).\notag$$
Recall that a binomially distributed random variable can be written as a sum of independent Bernoulli random variables. We use this and Theorem 3.8.3 to derive the mean and variance for a binomial distribution. First, we find the mean and variance of a Bernoulli distribution.
### Example $$\PageIndex{2}$$
Recall that $$X$$ has a Bernoulli$$(p)$$ distribution if it is assigned the value of 1 with probability $$p$$ and the value of 0 with probability $$1-p$$. Thus, the pmf of $$X$$ is given by
$$p(x) = \left\{\begin{array}{l l} 1-p, & \text{if}\ x=0 \\ p, & \text{if}\ x=1 \end{array}\right.\notag$$
In order to find the mean and variance of $$X$$, we first derive the mgf:
$$M_X(t) = \text{E}[e^{tX}] = e^{t(0)}(1-p) + e^{t(1)}p = 1 - p + e^tp.\notag$$
Now we differentiate $$M_X(t)$$ with respect to $$t$$:
\begin{align*}
M'_X(t) &= \frac{d}{dt}\left[1 - p + e^tp\right] = e^tp \\
M''_X(t) &= \frac{d}{dt}\left[e^tp\right] = e^tp
\end{align*}
Next we evaluate the derivatives at $$t=0$$ to find the first and second moments:
$$M'_X(0) = M''_X(0) = e^0p = p.\notag$$
Thus, the expected value of $$X$$ is $$\text{E}[X] = p$$. Finally, we use the alternate formula for calculating variance:
$$\text{Var}(X) = \text{E}[X^2] - \left(\text{E}[X]\right)^2 = p - p^2 = p(1-p).\notag$$
### Example $$\PageIndex{3}$$
Let $$X\sim\text{binomial}(n,p)$$. If $$X_1, \ldots, X_n$$ denote $$n$$ independent Bernoulli$$(p)$$ random variables, then we can write
$$X = X_1 + \cdots + X_n.\notag$$
In Example 3.8.2, we found the mgf for a Bernoulli$$(p)$$ random variable. Thus, we have
$$M_{X_i}(t) = 1 - p + e^tp, \quad\text{for}\ i=1, \ldots, n.\notag$$
Using Theorem 3.8.3, we derive the mgf for $$X$$:
$$M_X(t) = M_{X_1}(t) \cdots M_{X_n}(t) = (1-p+e^tp) \cdots (1-p+e^tp) = (1-p+e^tp)^n.\notag$$
Now we can use the mgf of $$X$$ to find the moments:
\begin{align*}
M'_X(t) &= \frac{d}{dt}\left[(1-p+e^tp)^n\right] = n(1-p+e^tp)^{n-1}e^tp \\
&\Rightarrow M'_X(0) = np \\
M''_X(t) &= \frac{d}{dt}\left[n(1-p+e^tp)^{n-1}e^tp\right] = n(n-1)(1-p+e^tp)^{n-2}(e^tp)^2 + n(1-p+e^tp)^{n-1}e^tp \\
&\Rightarrow M''_X(0) = n(n-1)p^2 + np
\end{align*}
Thus, the expected value of $$X$$ is $$\text{E}[X] = np$$, and the variance is
$$\text{Var}(X) = \text{E}[X^2] - (\text{E}[X])^2 = n(n-1)p^2 + np - (np)^2 = np(1-p).\notag$$
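Theorem 3.8.3 can also be illustrated numerically. The following sketch (not part of the text) compares a Monte Carlo estimate of the mgf of a sum of independent Bernoulli$$(p)$$ random variables with the product formula $$(1-p+e^tp)^n$$ at a single value of $$t$$:

```python
import numpy as np

rng = np.random.default_rng(42)
n, p, t = 33, 0.15, 0.3

# Y = X_1 + ... + X_n with independent Bernoulli(p) terms: estimate E[e^{tY}] by simulation.
Y = rng.binomial(1, p, size=(200_000, n)).sum(axis=1)
mgf_mc = np.exp(t * Y).mean()

# Theorem 3.8.3: the mgf of the sum is the product of the n identical Bernoulli mgfs.
mgf_product = (1 - p + p * np.exp(t)) ** n

print(mgf_mc, mgf_product)   # the two agree to within Monte Carlo noise
```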
We end with a final property of mgf's that relates to the comparison of the distribution of random variables.
### Theorem $$\PageIndex{4}$$
The mgf $$M_X(t)$$ of random variable $$X$$ uniquely determines the probability distribution of $$X$$. In other words, if random variables $$X$$ and $$Y$$ have the same mgf, $$M_X(t) = M_Y(t)$$, then $$X$$ and $$Y$$ have the same probability distribution.
### Exercise $$\PageIndex{1}$$
Suppose the random variable $$X$$ has the following mgf:
$$M_X(t) = \left(0.85 + 0.15e^t\right)^{33}\notag$$ What is the distribution of $$X$$?
Hint
Use Theorem 3.8.4 and look at Example 3.8.3.
$$M_X(t) = (1-p+e^tp)^n,\notag$$ which is the mgf given with $$p=0.15$$ and $$n=33$$. Thus, $$X\sim \text{binomial}(33, 0.15)$$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9919655919075012, "perplexity": 174.25060201286055}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104240553.67/warc/CC-MAIN-20220703104037-20220703134037-00342.warc.gz"} |
https://brilliant.org/discussions/thread/conversion/ |
# Conversion
Two isomeric alkyl bromides (A) & (B), $$\ce{C5H11Br}$$, yield the following results in the laboratory. (A), on treatment with alcoholic $$\ce{KOH}$$, gives (C) & (D), $$\ce{C5H10}$$. (C), on ozonolysis, gives HCHO and 2-methylpropanal. (B), on treatment with alcoholic KOH, gives (D) & (E), $$\ce{C5H10}$$. All the compounds (C), (D), (E) on catalytic hydrogenation give (F), $$\ce{C5H12}$$. Deduce the structures from (A) to (F).
Note by Rajdeep Bharati
1 year, 6 months ago
Okay, so the key step here is the ozonolysis of C. Can you figure out the structure of C from the given products? Once we have C, D is simple to find. Both C and D have a double bond, but in different positions. As C, D are formed from E2 elimination in A, the double bonds must be in positions adjacent to each other (in C and D). The same logic enables us to figure out E, and hence, B and F. If you require a full solution, tell me. · 1 year, 6 months ago
All right, could you provide a full solution, just to be sure. · 1 year, 6 months ago
Sorry for the late reply. Here it is-
From left to right, A to E
I used this to generate the image. Really great tool! · 1 year, 5 months ago
Yes! Thank you. · 1 year, 5 months ago
Are you sure the question is right? There is no mention of B beyond the first sentence. Also, ozonolysis on C gives no oxygen in the products D and E? Which compounds do the molecular formulas represent? · 1 year, 6 months ago | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8459261059761047, "perplexity": 2402.3665849105096}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105334.20/warc/CC-MAIN-20170819085604-20170819105604-00394.warc.gz"} |
http://math.stackexchange.com/questions/741934/definition-of-lie-groups | # Definition of Lie Groups
In the definition of Lie Group, we require that $$(x,y)\rightarrow x*y \text{ and } x\rightarrow x^{-1}$$ both be smooth. Are there any examples of groups that satisfy only one of these and not the other and are hence are not smooth manifolds? The definition I am using of smooth manifold is the same as we use for topological spaces, i.e. that it is locally diffeomorphic to $\mathbb{R}^n$.
The reason I am asking this is because, I am wondering if the latter requirement follows from the former.
Edit:
Are there any examples of groups that satisfy only one of these and not the other and are hence are not smooth manifolds?
should be:
Are there any examples of groups that satisfy only one of these and not the other and are hence are not Lie groups?
Do you mean "and are not Lie groups" instead of "and are not smooth manifolds"? – John Apr 6 at 10:12
@John, I think it's ok. If they are groups and smooth manifolds, they are Lie groups too. So, for a group not to be Lie Group, it should not be smooth manifold. – user140802 Apr 6 at 11:32
Your comment's not right-as my answer indicates, there are bajillions of group structures on an uncountable set, in particular on any smooth manifold-too many for them all to be smooth. Besides, it's meaningless to ask whether a group that's not a smooth manifold could have smooth multiplication or inversion, since smooth maps are only defined on smooth manifolds. – Kevin Carlson Apr 6 at 11:42
Here's a set-theoretic family of examples of smooth manifolds that are groups with smooth inversion but not multiplication. Let $(G,e,*)$ be a group of exponent $2$, i.e. $g *g=e$ for every $g\in G$ with the cardinality $\mathfrak{c}$ of $\mathbb{R}$. For instance, $G$ could be a direct product of $\mathfrak{c}$ $\mathbb{Z}_2$s. Let $\phi:\mathbb{R}\to G$ be any bijection and define a group structure on $\mathbb{R}$ by $x\star y=\phi^{-1}(\phi(x)*\phi(y))$. This kind of construction always yields a group. Now since we picked our $G$ to have an inversion map preserved under bijection, inversion is guaranteed to be smooth: for $x\in\mathbb{R}, x^{-1}_\star=\phi^{-1}(\phi(x)^{-1}_*)=\phi^{-1}\phi(x)=x$, i.e. the inversion map $\mathbb{R}\to\mathbb{R}$ is just the identity.
But for the vast majority of choices of $\phi$ the multiplication will not be smooth. For since $\mathbb{R}$ has $\mathfrak{c}^\mathfrak{c}$ self-bijections, we've exhibited $\mathfrak{c}^\mathfrak{c}$ distinct group structures on $\mathbb{R}$ all with smooth inversion. On the other hand, a continuous-in particular smooth-group structure on $\mathbb{R}$ is specified by the maps $x,y\mapsto x\star y$ for $x,y\in \mathbb{Q}$, i.e. by an element of $\mathbb{R}^{\mathbb{Q}\times\mathbb{Q}},$ which has cardinality only $\mathfrak{c}$!
Nice answer!. The proof I know of "smooth multiplication implies smooth inversion" first proves it near $e$ using the inverse function theorem applied as a function $G\times G\rightarrow G$. Then one proves inversion is smooth everywhere by writing inversion at any point as a composition of inversion at $e$ with left and right multiplications by particular elements. Does this argument use the fact that $i$ is continuous? (Edit: Here's a sketch I found in a similar vein math.stanford.edu/~dlitt/briefnotes/notes/inversion.pdf) – Jason DeVito Apr 6 at 12:36
It seems to me that it does: for instance in Litt's proof to show the map $(g,g^{-1})\mapsto^\pi g$ is a homeomorphism one should know that $N^{-1}$ is open for $N$ an open neighborhood of $g$ to make $\pi^{-1}N$ open as the intersection of $\{(g,g^{-1})\}$ with $N\times N^{-1}$ in $G\times G$. On the other hand maybe you can get this from the lemma that the antidiagonal is a submanifold. – Kevin Carlson Apr 6 at 13:39
If a group $G$ is a smooth manifold with smooth multiplication, then it's a Lie group, without any assumption that inversion is even continuous. This can be proved by considering the map $F\colon G\times G\to G\times G$ defined by $F(g,h) = (g,gh)$. You can show that $F$ is bijective and $dF$ is invertible everywhere, so $F$ is a bijective local diffeomorphism and hence a diffeomorphism. Then the inversion map is easily constructed from $F^{-1}$. – Jack Lee Apr 6 at 14:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9668902158737183, "perplexity": 230.28977117638746}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802768831.100/warc/CC-MAIN-20141217075248-00086-ip-10-231-17-201.ec2.internal.warc.gz"} |
https://pure.mpg.de/pubman/faces/ViewItemFullPage.jsp?itemId=item_1126719_2&view=EXPORT |
# Item
Path integral approach to the full Dicke model with dipole-dipole interaction
Alcalde, M. A., Stephany, J., & Svaiter, N. F. (2011). Path integral approach to the full Dicke model with dipole-dipole interaction. Journal of Physics A: Mathematical and General, 44: 505301. Retrieved from http://arxiv.org/abs/1107.2945.
Item is
### Basic
Genre: Journal Article
### Files
1107.2945 (Preprint), 124KB
Name:
1107.2945
Description:
Visibility:
Public
MIME-Type / Checksum:
application/pdf / [MD5]
### Creators
Creators:
Alcalde, M. Aparicio, Author
Stephany, J.1, Author
Svaiter, N. F., Author
Affiliations:
1Quantum Gravity and Unified Theories, AEI Golm, MPI for Gravitational Physics, Max Planck Society, 24014
### Content
Free keywords: Quantum Physics, quant-ph,High Energy Physics - Theory, hep-th
Abstract: We consider the full Dicke spin-boson model composed by a single bosonic mode and an ensemble of $N$ identical two-level atoms with different couplings for the resonant and anti-resonant interaction terms, and incorporate a dipole-dipole interaction between the atoms. Assuming that the system is in thermal equilibrium with a reservoir at temperature $\beta^{-1}$, we compute the free energy in the thermodynamic limit $N\rightarrow\infty$ in the saddle-point approximation to the path integral and determine the critical temperature for the superradiant phase transition. In the zero temperature limit, we recover the critical coupling of the quantum phase transition, presented in the literature.
### Details
Language(s):
Dates: 2011-07-142011
Publication Status: Published in print
Pages: -
Publishing info: -
Rev. Method: -
Identifiers: arXiv: 1107.2945
URI: http://arxiv.org/abs/1107.2945
Degree: -
### Source 1
Title: Journal of Physics A: Mathematical and General
Source Genre: Journal
Creator(s):
Affiliations:
Publ. Info: -
Pages: - Volume / Issue: 44 Sequence Number: 505301 Start / End Page: - Identifier: - | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.980671763420105, "perplexity": 4453.232882760186}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998100.52/warc/CC-MAIN-20190616102719-20190616124719-00180.warc.gz"} |
https://tug.org/applications/fontinst/mail/tex-fonts/1997/msg00164.html | # Re: Unicode and math symbols
```
Chris wrote:
Berthold wrote --
> > (1) Which is why we have the `alphabetic presentation forms'
> > ff, ffi, ffl, fi, fl, slongt, st etc. in UNICODE.
>
> They are in the compatibility section.
>
> Well, they were put in *somewhere* because they are needed,
No, they were put in for compatibility and their use is not advised.
But nobody is heeding that advice! Applications in Windows NT *are*
using them. And I would not be surprised if they were put in after
arm twisting from the `Seattle Satans' as Sebastian refers to them.
And you can see why: (just about) the only way to get at glyphs in
fonts in these systems is either (1) by UNICODE or (2) by numeric
sequence number of arrangement of glyphs in the font, which of course
is quite random, although fixed for a given version of a given
font. In addition in some font technologies there need be tables with
mnemonic names (such as AFII numbers :-). So what is an application
developer to do? (The `just about' above refers to the exception
that you can make a `symbol' font which is treated as an incomprehensible
thing that the operating system does not mess with. You can put your
glyphs in any order you like and access them by numeric code).
Also "etc" are not there: these are the only Latin "aesthetic
ligatures" that are there, eg there is no ck ligature.
Yes, I know, and fj and a few others one might like to see there. It took
them a long time to even add ff, ffi, ffl to the basic fi and fl.
> since (i) we do not have a usable and widely accepted glyph
> standard, and (ii) because most software wants to be able to have
> *some* way of telling what glyph in a font is to be used.
These do seem to be the two problems driving this issue.
But not just "some way": it seems that the only object some software can
use is a fixed-length number; and *only one* correspondence from
these numbers to ... what? ... to glyphs (in all fonts, in some fonts, or
what??) or to characters (ie units of information) or to both (ie
a one-one relationship between glyphs and characters?).
Well, there are many borderline cases we can argue about. But let just
take `greater'. I know what that means and you do. And we'll in most
cases recognize which glyph if any in a font is supposed to represent it.
It is very convenient that in most encodings it is at char code 62
in all fonts, bold, regular, oblique, blackletter, Script, what have you.
The more of this uniformity we have the better.
> But do I really need - in English - to make a distinction
> between the characters A-Z and the glyphs A-Z? Or, beyond that,
> most of the glyphs in the ISO Latin X tables (if we ignore the
I have little qualification to answer this but it may well be that you
do not need to make a big thing of the difference in these cases.
But the point of Unicode is to remove such cultural dominance of modern
European languages on IT.
And it fails in that. It *does* succeed in assigning unique codes to
characters from many languages. But it fails in dealing with the
in Western languages. Look for example at the rapidly growing
`alphabetic presentation forms' put in to try and cope with a bit of this.
> But anyway, meantime we need to make life easier! And despite all the
> explanations and arguments I don't see a whole lot wrong with using
> UNICODE as essentially a glyph standard for Latin X, Cyrillic, Greek,
> and yes, most math symbols, relations, operators, delimiters etc.
...
> Except that unfortunately they don't cover enough of math to be
> really useful...
And never will, according to some definitions of useful; but it will
also not cover my ck ligature, or all the lovely twirly things in
Poetica and similar fonts.
So set up a 16-bit glyph encoding if you wish, but do not try to
change Unicode because you want it to parallel your glyph encoding.
Leave Unicode itself to do what it is intended for.
That is not an option. On systems with system level font support
(i.e. not Unix) you get a lot of power from what the system provides
and at the same time inherit any limitations `they' put in. So in
the case of Windows NT for example (and perhaps AT&T Plan 9) the OS
supports UNICODE and applications can use it to display and print.
BUT: what they consider not to be in UNICODE is not accessible.
So for example when I convert Lucida Bright from Type 1 to TrueType
format using the built in converter, I can get at all the glyphs
except the ones not in their table, such as dotlessj. Ditto for ATM
(now in beta test). So the issue of what *is* in UNICODE *is* important.
Aside from that we don't want a hundred different versions of UNICODE++
(like the mess we have with \special). I have put dotlessj at FB0F,
but who knows where you will put it?
chris
Berthold.
``` | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8616373538970947, "perplexity": 3609.535613151868}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376827963.70/warc/CC-MAIN-20181216165437-20181216191437-00506.warc.gz"} |
http://math.stackexchange.com/questions/245850/lie-product-of-a-two-subalgebras | # Lie product of a two subalgebras
Let V and W be subalgebras of a Lie algebra $\mathcal{L}$. I want to show that $[V,W]$ is not always a subalgebra of $\mathcal{L}$.
I cannot understand what you are asking. – Mariano Suárez-Alvarez Nov 27 '12 at 18:47
Let V and W be subalgebras of $\mathcal{L}$ a Lie algebra. I want to show that $[V,W]$ is not always a subalgebra of $\mathcal{L}$. Is it clearer? Thank you very much – Nre Nov 27 '12 at 18:50
Edit the question and add the explanation there. – Mariano Suárez-Alvarez Nov 27 '12 at 18:52
I made a few changes in your question :D – Mariano Suárez-Alvarez Nov 27 '12 at 18:54
Yes I saw that thanks! – Nre Nov 27 '12 at 18:55
• Consider the Lie algebra $\mathcal L=\{X\in M_4(\mathbb R)\,|\,X^T=-X\}$ with the product $[X,Y]=XY-YX$. A basis of this algebra is given by the matrices $u_{i,j}=e_{i,j}-e_{j,i}$ for $i<j$, where the matrix $e_{i,j}$ has all its coefficients equal to zero but the $(i,j)$-th coefficient, which is one. As an example $$u_{1,2}=\left( \begin{array}{cccc} 0 & 1 & 0 & 0\\ -1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\end{array}\right)\,.$$ Note the formulae, for $i,j,k$ all different, $$[u_{i,j},u_{j,k}]=u_{i,k}$$ and for $i,j,k,l$ all different $$[u_{i,j},u_{k,l}]=0$$ allow us to compute easily with those matrices.
• Define the subalgebras $$V=Span(u_{1,2},u_{1,3},u_{2,3})\quad\text{and}\quad W=Span(u_{2,3},u_{2,4},u_{3,4})\,.$$ Then $$[V,W]=Span(u_{1,2},u_{1,3},u_{1,4},u_{2,4},u_{3,4}).$$ Note that $[u_{1,3},u_{1,2}]=u_{2,3}$ and thus $[V,W]$ is not a subalgebra of $\mathcal{L}$, since $u_{2,3}\notin[V,W]$.
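These bracket computations are easy to check numerically. A small sketch (NumPy; indices shifted to start at 0) confirming that $[V,W]$ is five-dimensional, does not contain $u_{2,3}$, and yet produces $u_{2,3}$ as a bracket of two of its elements:

```python
import numpy as np
from itertools import product

def u(i, j, n=4):
    m = np.zeros((n, n))
    m[i, j], m[j, i] = 1.0, -1.0
    return m

def bracket(x, y):
    return x @ y - y @ x

# V = span(u12, u13, u23), W = span(u23, u24, u34), written with 0-based indices
V = [u(0, 1), u(0, 2), u(1, 2)]
W = [u(1, 2), u(1, 3), u(2, 3)]

# stack all pairwise brackets as flat vectors and measure the dimension of [V, W]
vw = np.array([bracket(v, w).ravel() for v, w in product(V, W)])
print(np.linalg.matrix_rank(vw))                                   # 5

# u_{2,3} (here u(1, 2)) is not in that span ...
print(np.linalg.matrix_rank(np.vstack([vw, u(1, 2).ravel()])))     # 6

# ... yet it is a bracket of two elements of [V, W]
print(np.allclose(bracket(u(0, 2), u(0, 1)), u(1, 2)))             # True
```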
https://math.libretexts.org/Bookshelves/Differential_Equations/Book%3A_Differential_Equations_for_Engineers_(Lebl)/4%3A_Fourier_series_and_PDEs/4.01%3A_Boundary_value_problems | $$\newcommand{\id}{\mathrm{id}}$$ $$\newcommand{\Span}{\mathrm{span}}$$ $$\newcommand{\kernel}{\mathrm{null}\,}$$ $$\newcommand{\range}{\mathrm{range}\,}$$ $$\newcommand{\RealPart}{\mathrm{Re}}$$ $$\newcommand{\ImaginaryPart}{\mathrm{Im}}$$ $$\newcommand{\Argument}{\mathrm{Arg}}$$ $$\newcommand{\norm}[1]{\| #1 \|}$$ $$\newcommand{\inner}[2]{\langle #1, #2 \rangle}$$ $$\newcommand{\Span}{\mathrm{span}}$$
# 4.1: Boundary value problems
### 4.1.1 Boundary value problems
Before we tackle the Fourier series, we need to study the so-called boundary value problems (or endpoint problems). For example, suppose we have
$x'' + \lambda x = 0,~~~ x(a)=0, ~~~ x(b)=0,$
for some constant $$\lambda$$, where $$x(t)$$ is defined for $$t$$ in the interval $$[a,b]$$. Unlike before, when we specified the value of the solution and its derivative at a single point, we now specify the value of the solution at two different points. Note that $$x=0$$ is a solution to this equation, so existence of solutions is not an issue here. Uniqueness of solutions is another issue. The general solution to $$x''+\lambda x = 0$$ has two arbitrary constants present. It is, therefore, natural (but wrong) to believe that requiring two conditions guarantees a unique solution.
Example $$\PageIndex{1}$$:
Take $$\lambda = 1, a=0, b=\pi$$. That is,
$x''+x=0, ~~~x(0)=0, ~~~ x(\pi)=0.$
Then $$x= \sin t$$ is another solution (besides $$x=0$$) satisfying both boundary conditions. There are more. Write down the general solution of the differential equation, which is $$x=A \cos t+B \sin t$$. The condition $$x(0)=0$$ forces $$A=0$$. Letting $$x(\pi)=0$$ does not give us any more information as $$x=B \sin t$$ already satisfies both boundary conditions. Hence, there are infinitely many solutions of the form $$x= B \sin t$$, where $$B$$ is an arbitrary constant.
Example $$\PageIndex{2}$$:
On the other hand, change to $$\lambda =2$$.
$x''+2x=0, ~~~ x(0)=0, ~~~ x(\pi)=0.$
Then the general solution is $$x=A \cos(\sqrt2 t)+B \sin(\sqrt2 t)$$. Letting $$x(0)=0$$ still forces $$A=0$$. We apply the second condition to find $$0=x(\pi)=B \sin(\sqrt2 \pi)$$. As $$\sin(\sqrt2 \pi) \neq 0$$ we obtain $$B=0$$. Therefore $$x=0$$ is the unique solution to this problem.
What is going on? We will be interested in finding which constants $$\lambda$$ allow a nonzero solution, and we will be interested in finding those solutions. This problem is an analogue of finding eigenvalues and eigenvectors of matrices.
### 4.1.2 Eigenvalue problems
For basic Fourier series theory we will need the following three eigenvalue problems. We will consider more general equations, but we will postpone this until chapter 5.
$x''+ \lambda x=0,~~~ x(a)=0, ~~~x(b)=0,$
$x''+ \lambda x=0,~~~ x'(a)=0, ~~~x'(b)=0,$
and
$x''+ \lambda x=0,~~~ x(a)=x(b), ~~~x'(a)=x'(b),$
A number $$\lambda$$ is called an eigenvalue of (4.1.4) (resp. (4.1.5) or (4.1.6)) if and only if there exists a nonzero (not identically zero) solution to (4.1.4) (resp. (4.1.5) or (4.1.6)) given that specific $$\lambda$$. The nonzero solution we found is called the corresponding eigenfunction.
Note the similarity to eigenvalues and eigenvectors of matrices. The similarity is not just coincidental. If we think of the equations as differential operators, then we are doing the same exact thing. For example, let $$L=- \frac{d^2}{dt^2}$$. We are looking for nonzero functions $$f$$ satisfying certain endpoint conditions that solve $$(L- \lambda)f = 0$$. A lot of the formalism from linear algebra can still apply here, though we will not pursue this line of reasoning too far.
Example $$\PageIndex{3}$$:
Let us find the eigenvalues and eigenfunctions of
$x''+ \lambda x=0,~~~ x(0)=0, ~~~x(\pi)=0.$
For reasons that will be clear from the computations, we will have to handle the cases $$\lambda > 0, \lambda=0, \lambda<0$$ separately. First suppose that $$\lambda > 0$$, then the general solution to $$x''+ \lambda x=0$$ is
$x=A \cos(\sqrt{\lambda}t)+B \sin(\sqrt{\lambda}t).$
The condition $$x(0)=0$$ implies immediately $$A=0$$. Next
$0=x(\pi)=B \sin(\sqrt{\lambda} \pi).$
If $$B$$ is zero, then $$x$$ is not a nonzero solution. So to get a nonzero solution we must have that $$\sin( \sqrt{\lambda} \pi)=0$$. Hence, $$\sqrt{\lambda} \pi$$ must be an integer multiple of $$\pi$$. In other words, $$\sqrt{\lambda}=k$$ for a positive integer $$k$$. Hence the positive eigenvalues are $$k^2$$ for all integers $$k \geq 1$$. The corresponding eigenfunctions can be taken as $$x =\sin(kt)$$. Just like for eigenvectors, we get all the multiples of an eigenfunction, so we only need to pick one.
Now suppose that $$\lambda=0$$. In this case the equation is $$x''=0$$ and the general solution is $$x=At+B$$. The condition $$x(0)=0$$ implies that $$B=0$$, and $$x(\pi)=0$$ implies that $$A=0$$. This means that $$\lambda = 0$$ is not an eigenvalue.
Finally, suppose that $$\lambda <0$$. In this case we have the general solution
$x= A \cosh( \sqrt{- \lambda}t) +B \sinh( \sqrt{- \lambda} t).$
Letting $$x(0)=0$$ implies that $$A=0$$ (recall $$\cosh0 =1$$ and $$\sinh0=0$$). So our solution must be $$x=B \sinh(\sqrt{- \lambda} t)$$ and satisfy $$x(\pi)=0$$. This is only possible if $$B$$ is zero. Why? Because $$\sinh \xi$$ is only zero when $$\xi=0$$. You should plot $$\sinh$$ to see this fact. We can also see this from the definition of sinh. We get $$0 = \sinh t = \frac{e^t-e^{-t}}{2}$$. Hence $$e^t=e^{-t}$$, which implies $$t=-t$$ and that is only true if $$t=0$$. So there are no negative eigenvalues.
In summary, the eigenvalues and corresponding eigenfunctions are
$\lambda_k = k^2~~~~ {\rm{~with~ an~ eigenfunction~}} ~~~~x_k=\sin(kt) ~~~~ {\rm{~for~ all~ integers~}} k \geq 1.$
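A quick numerical check of this example: discretizing $$-x''$$ on $$[0,\pi]$$ with zero endpoint values by finite differences gives a matrix whose smallest eigenvalues approach $$k^2$$ (a sketch; the grid size is arbitrary).

```python
import numpy as np

# Finite-difference approximation of -x'' on (0, pi) with x(0) = x(pi) = 0
n = 400                      # number of interior grid points (arbitrary)
h = np.pi / (n + 1)
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

print(np.sort(np.linalg.eigvalsh(A))[:5])   # approximately [1, 4, 9, 16, 25]
```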
Example $$\PageIndex{4}$$:
Let us compute the eigenvalues and eigenfunctions of
$x''+ \lambda x=0,~~~ x'(0)=0,~~~x'(\pi)=0.$
Again we will have to handle the cases $$\lambda > 0, \lambda=0, \lambda<0$$ separately. First suppose that $$\lambda > 0$$. The general solution to $$x'' + \lambda x=0$$ is $$x=A \cos(\sqrt{\lambda}t)+ B \sin(\sqrt{\lambda}t)$$. So
$x'=-A \sqrt{\lambda } \sin(\sqrt{\lambda }t)+B \sqrt{\lambda } \cos(\sqrt{\lambda }t).$
The condition $$x'(0)=0$$ implies immediately $$B=0$$. Next
$0=x'(\pi)=-A \sqrt{\lambda} \sin(\sqrt{\lambda} \pi).$
Again $$A$$ cannot be zero if $$\lambda$$ is to be an eigenvalue, and $$\sin(\sqrt{\lambda} \pi)$$ is only zero if $$\sqrt{\lambda}=k$$ for a positive integer $$k$$. Hence the positive eigenvalues are again $$k^2$$ for all integers $$k \geq 1$$. And the corresponding eigenfunctions can be taken as $$x= \cos(kt)$$.
Now suppose that $$\lambda = 0$$. In this case the equation is $$x''=0$$ and the general solution is $$x=At +B$$ so $$x'=A$$. The condition $$x'(0)=0$$ implies that $$A=0$$. Now $$x'(\pi)=0$$ also simply implies $$A=0$$. This means that $$B$$ could be anything (let us take it to be 1). So $$\lambda = 0$$ is an eigenvalue and $$x=1$$ is a corresponding eigenfunction.
Finally, let $$\lambda < 0$$. In this case we have the general solution $$x=A \cosh(\sqrt{ - \lambda}t)+B \sinh(\sqrt{ - \lambda}t)$$ and hence
$x' = A \sqrt{-\lambda} \sinh(\sqrt{ - \lambda}t) + B \sqrt{-\lambda} \cosh(\sqrt{ - \lambda}t).$
We have already seen (with roles of $$A$$ and $$B$$ switched) that for this to be zero at $$t=0$$ and $$t= \pi$$ it implies that $$A=B=0$$. Hence there are no negative eigenvalues.
In summary, the eigenvalues and corresponding eigenfunctions are
$\lambda_k = k^2~~~~ {\rm{~with~ an~ eigenfunction~}} ~~~~x_k=\cos(kt) ~~~~ {\rm{~for~ all~ integers~}} k \geq 1,$
and there is another eigenvalue
$\lambda_0 = 0~~~~ {\rm{~with~ an~ eigenfunction~}} ~~~~x_0= 1.$
The following problem is the one that leads to the general Fourier series.
Example $$\PageIndex{5}$$:
Let us compute the eigenvalues and eigenfunctions of
$x''+ \lambda x=0,~~~~x(- \pi)=x(\pi),~~~~x'(- \pi)=x'(\pi).$
Notice that we have not specified the values or the derivatives at the endpoints, but rather that they are the same at the beginning and at the end of the interval.
Let us skip $$\lambda < 0$$. The computations are the same as before, and again we find that there are no negative eigenvalues.
For $$\lambda =0$$, the general solution is $$x=At + B$$. The condition $$x(- \pi)=x(\pi)$$ implies that $$A=0$$ $$( A \pi +B=-A \pi+B$$ implies $$A=0)$$. The second condition $$x'(- \pi)=x'( \pi)$$ says nothing about $$B$$ and hence $$\lambda = 0$$ is an eigenvalue with a corresponding eigenfunction $$x=1$$.
For $$\lambda >0$$ we get that $$x = A \cos(\sqrt{ \lambda}t) + B \sin(\sqrt{ \lambda}t)$$. Now
$A \cos( - \sqrt{\lambda} \pi)+ B \sin( - \sqrt{\lambda} \pi) = A \cos( \sqrt{\lambda} \pi)+ B \sin( \sqrt{\lambda} \pi).$
We remember that $$\cos(- \theta)=\cos( \theta)$$ and $$\sin(- \theta)= - \sin( \theta)$$. Therefore,
$A \cos( \sqrt{\lambda} \pi)- B \sin( \sqrt{\lambda} \pi) = A \cos( \sqrt{\lambda} \pi)+ B \sin( \sqrt{\lambda} \pi).$
Hence either $$B=0$$ or $$\sin(\sqrt{\lambda} \pi)=0$$. Similarly (exercise) if we differentiate $$x$$ and plug in the second condition we find that $$A=0$$ or $$\sin(\sqrt{\lambda} \pi)=0$$. Therefore, unless we want $$A$$ and $$B$$ to both be zero (which we do not) we must have $$\sin(\sqrt{\lambda} \pi)=0$$. Hence, $$\sqrt{\lambda}$$ is an integer and the eigenvalues are yet again $$\lambda = k^2$$ for an integer $$k \geq 1$$. In this case, however, $$x=A \cos(kt)+ B \sin(kt)$$ is an eigenfunction for any $$A$$ and any $$B$$. So we have two linearly independent eigenfunctions $$\sin(kt)$$ and $$\cos(kt)$$. Remember that for a matrix we could also have had two eigenvectors corresponding to a single eigenvalue if the eigenvalue was repeated.
In summary, the eigenvalues and corresponding eigenfunctions are
$\lambda_k = k^2~~~~ {\rm{~with~ the~ eigenfunctions~}} ~~~~\cos(kt)~~~~ {\rm{and}}~~~~ \sin(kt) ~~~~ {\rm{~for~ all~ integers~}} k \geq 1, \\ \lambda_0 = 0~~~~ {\rm{~with~ an~ eigenfunction~}} ~~~~x_0=1.$
### 4.1.3 Orthogonality of eigenfunctions
Something that will be very useful in the next section is the orthogonality property of the eigenfunctions. This is an analogue of the following fact about eigenvectors of a matrix. A matrix is called symmetric if $$A=A^T$$. Eigenvectors for two distinct eigenvalues of a symmetric matrix are orthogonal. That symmetry is required. We will not prove this fact here. The differential operators we are dealing with act much like a symmetric matrix. We, therefore, get the following theorem.
Theorem 4.1.1. Suppose that $$x_1(t)$$ and $$x_2(t)$$ are two eigenfunctions of the problem (4.1.4), (4.1.5) or (4.1.6) for two different eigenvalues $$\lambda_1$$ and $$\lambda_2$$. Then they are orthogonal in the sense that
$\int^b_a x_1(t)x_2(t)dt=0.$
Note that the terminology comes from the fact that the integral is a type of inner product. We will expand on this in the next section. The theorem has a very short, elegant, and illuminating proof so let us give it here. First note that we have the following two equations.
$x''_1 +\lambda_1x_1=0~~~~ {\rm{and}}~~~~ x''_2+\lambda_2x_2 = 0.$
Multiply the first by $$x_2$$ and the second by $$x_1$$ and subtract to get
$(\lambda_1- \lambda_2)x_1x_2=x''_2x_1-x_2x''_1.$
Now integrate both sides of the equation.
$(\lambda_1- \lambda_2) \int^b_a x_1x_2dt=\int^b_a x''_2x_1-x_2x''_1dt \\ = \int^b_a \frac{d}{dt} (x'_2x_1-x_2x'_1)dt \\ = [ x'_2x_1-x_2x'_1]^b_{t=a} = 0.$
The last equality holds because of the boundary conditions. For example, if we consider (4.1.4) we have $$x_1(a)=x_1(b)=x_2(a)=x_2(b)=0$$ and so $$x'_2x_1-x_2x'_1$$ is zero at both $$a$$ and $$b$$. As $$\lambda_1 \neq \lambda_2$$, the theorem follows.
Exercise $$\PageIndex{1}$$:
(easy). Finish the theorem (check the last equality in the proof) for the cases (4.1.5) and (4.1.6).
We have seen previously that $$\sin(nt)$$ was an eigenfunction for the problem $$x''+ \lambda x=0, x(0)=0, x(\pi)=0$$. Hence we have the integral
$\int^{\pi}_0 \sin(mt) \sin(nt)dt=0,~~~~{\rm{when}}~ m \neq n.$
Similarly
$\int^{\pi}_0 \cos(mt) \cos(nt)dt=0,~~~~{\rm{when}}~ m \neq n.$
And finally we also get
$\int^{\pi}_{- \pi} \sin(mt) \sin(nt)dt=0,~~~~{\rm{when}}~ m \neq n,$
$\int^{\pi}_{- \pi} \cos(mt) \cos(nt)dt=0,~~~~{\rm{when}}~ m \neq n,$
and
$\int^{\pi}_{- \pi} \cos(mt) \sin(nt)dt=0.$
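These integrals are easy to confirm symbolically, for instance with the following sketch (SymPy, with arbitrary sample values of $$m \neq n$$).

```python
import sympy as sp

t = sp.symbols('t')
m, n = 2, 5   # any two different positive integers

print(sp.integrate(sp.sin(m*t) * sp.sin(n*t), (t, 0, sp.pi)))        # 0
print(sp.integrate(sp.cos(m*t) * sp.cos(n*t), (t, 0, sp.pi)))        # 0
print(sp.integrate(sp.cos(m*t) * sp.sin(n*t), (t, -sp.pi, sp.pi)))   # 0
```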
### 4.1.4 Fredholm alternative
We now touch on a very useful theorem in the theory of differential equations. The theorem holds in a more general setting than we are going to state it, but for our purposes the following statement is sufficient. We will give a slightly more general version in chapter 5.
Theorem 4.1.2 (Fredholm alternative). Exactly one of the following statements holds. Either
$x'' + \lambda x=0,~~~~x(a)=0,~~~~x(b)=0$
has a nonzero solution, or
$x'' + \lambda x=f(t),~~~~x(a)=0,~~~~x(b)=0$
has a unique solution for every function $$f$$ continuous on $$[a,b]$$.
The theorem is also true for the other types of boundary conditions we considered. The theorem means that if $$\lambda$$ is not an eigenvalue, the nonhomogeneous equation (4.1.32) has a unique solution for every right hand side. On the other hand if $$\lambda$$ is an eigenvalue, then (4.1.32) need not have a solution for every $$f$$, and furthermore, even if it happens to have a solution, the solution is not unique.
We also want to reinforce the idea here that linear differential operators have much in common with matrices. So it is no surprise that there is a finite dimensional version of Fredholm alternative for matrices as well. Let $$A$$ be an $$n \times n$$ matrix. The Fredholm alternative then states that either $$(A- \lambda I) \vec{x}= \vec{0}$$ has a nontrivial solution, or $$(A- \lambda I) \vec{x}= \vec{b}$$ has a solution for every $$\vec{b}$$.
A lot of intuition from linear algebra can be applied to linear differential operators, but one must be careful of course. For example, one difference we have already seen is that in general a differential operator will have infinitely many eigenvalues, while a matrix has only finitely many.
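The matrix version is easy to see in a small numerical example (a sketch with an arbitrary symmetric matrix): when $$\lambda$$ is an eigenvalue, $$A - \lambda I$$ is singular and $$(A-\lambda I)\vec{x}=\vec{0}$$ has nontrivial solutions; otherwise $$A-\lambda I$$ is invertible and $$(A-\lambda I)\vec{x}=\vec{b}$$ is solvable for every $$\vec{b}$$.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])        # eigenvalues 1 and 3

for lam in (1.0, 0.5):            # an eigenvalue, and a non-eigenvalue
    M = A - lam * np.eye(2)
    print(lam, np.linalg.matrix_rank(M))   # rank 1 (singular) vs rank 2 (invertible)
```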
### 4.1.5 Application
Let us consider a physical application of an endpoint problem. Suppose we have a tightly stretched quickly spinning elastic string or rope of uniform linear density $$\rho$$. Let us put this problem into the $$xy-$$plane. The $$x$$ axis represents the position on the string. The string rotates at angular velocity $$\omega$$, so we will assume that the whole $$xy-$$plane rotates at angular velocity $$\omega$$. We will assume that the string stays in this $$xy-$$plane and $$y$$ will measure its deflection from the equilibrium position, $$y=0$$, on the $$x$$ axis. Hence, we will find a graph giving the shape of the string. We will idealize the string to have no volume to just be a mathematical curve. If we take a small segment and we look at the tension at the endpoints, we see that this force is tangential and we will assume that the magnitude is the same at both end points. Hence the magnitude is constant everywhere and we will call its magnitude $$T$$. If we assume that the deflection is small, then we can use Newton’s second law to get an equation
$Ty''+ \rho \omega^2y=0.$
Let $$L$$ be the length of the string and the string is fixed at the beginning and end points. Hence, $$y(0)=0$$ and $$y(L)=0$$. See Figure 4.1.
We rewrite the equation as $$y''+ \frac{\rho \omega^2}{T}y=0$$. The setup is similar to Example 4.1.3, except for the interval length being $$L$$ instead of $$\pi$$. We are looking for eigenvalues of $$y''+ \lambda y=0, y(0)=0, y(L)=0$$ where $$\lambda =\frac{\rho \omega^2}{T}$$. As before there are no nonpositive eigenvalues. With $$\lambda>0$$, the general solution to the equation is $$y=A \cos(\sqrt{\lambda}x)+B \sin(\sqrt{\lambda}x)$$. The condition $$y(0)=0$$ implies that $$A=0$$ as before. The condition $$y(L)=0$$ implies that $$\sin(\sqrt{\lambda}L)=0$$ and hence $$\sqrt{\lambda}L=k \pi$$ for some integer $$k>0$$, so
$\frac{\rho \omega^2}{T}= \lambda=\frac{k^2 \pi^2}{L^2}.$
What does this say about the shape of the string? It says that for all parameters $$\rho, \omega, T$$ not satisfying the above equation, the string is in the equilibrium position, $$y=0$$. When $$\frac{\rho \omega^2}{T}= \frac{k^2 \pi^2}{L^2}$$, then the string will “pop out” some distance $$B$$ at the midpoint. We cannot compute $$B$$ with the information we have.
Let us assume that $$\rho$$ and $$T$$ are fixed and we are changing $$\omega$$. For most values of $$\omega$$ the string is in the equilibrium state. When the angular velocity $$\omega$$ hits a value $$\omega= \frac{k \pi \sqrt{T}}{L \sqrt{ \rho}}$$, then the string will pop out and will have the shape of a sin wave crossing the $$x$$ axis $$k$$ times. When $$\omega$$ changes again, the string returns to the equilibrium position. You can see that the higher the angular velocity the more times it crosses the $$x$$ axis when it is popped out. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9846746325492859, "perplexity": 90.89764447324337}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737289.75/warc/CC-MAIN-20200808051116-20200808081116-00228.warc.gz"} |
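For concreteness, the critical angular velocities $$\omega = \frac{k \pi \sqrt{T}}{L \sqrt{\rho}}$$ can be tabulated for sample (made-up) parameter values, as in this sketch.

```python
import numpy as np

T, rho, L = 100.0, 0.05, 2.0          # tension [N], linear density [kg/m], length [m]
k = np.arange(1, 5)
omega = k * np.pi * np.sqrt(T / rho) / L
print(omega)   # only at these angular velocities can the string pop out, with k arcs
```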
http://www.artofproblemsolving.com/wiki/index.php/1997_USAMO_Problems/Problem_5 | # 1997 USAMO Problems/Problem 5
## Problem
Prove that, for all positive real numbers $a, b, c$,
$\frac{1}{a^3+b^3+abc}+\frac{1}{b^3+c^3+abc}+\frac{1}{c^3+a^3+abc}\le\frac{1}{abc}.$
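A quick numerical spot-check of the inequality (a sketch on random positive triples; of course this is not a proof):

```python
import random

def lhs(a, b, c):
    return (1 / (a**3 + b**3 + a*b*c)
            + 1 / (b**3 + c**3 + a*b*c)
            + 1 / (c**3 + a**3 + a*b*c))

random.seed(0)
ratios = []
for _ in range(10000):
    a, b, c = (random.uniform(0.1, 10.0) for _ in range(3))
    ratios.append(lhs(a, b, c) * a * b * c)   # should never exceed 1
print(max(ratios) <= 1.0)   # True; equality would require a = b = c
```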
## Solution 2
Outline:
1. Because the inequality is homogeneous, scale by an arbitrary factor such that $abc = 1$.
2. Replace all $abc$ with 1. Then, multiply both sides by $(a^3+b^3+1)(b^3+c^3+1)(c^3+a^3+1)$ to clear the denominators.
3. Expand each product of trinomials.
5. You are left with . Homogenize the inequality by multiplying each term of the LHS by . Because majorizes , this inequality holds true by bunching. (Alternatively, one sees the required AM-GM is . Sum similar expressions to obtain the desired result.)
## Solution 3 (Isolated fudging)
Because the inequality is homogenous (i.e. can be replaced with without changing the inequality other than by a factor of for some ), without loss of generality, let .
Lemma: Proof: Rearranging gives , which is a simple consequence of and
Thus, by :
https://socratic.org/questions/57df46aa11ef6b4bde2e0cf6 | Algebra
# How do you solve 6t^4-5t^3+200t+12000 = 175000 ?
##### 1 Answer
Jul 11, 2017
The real solutions are approximately:
$t \approx 13.000377$
$t \approx - 12.6846$
#### Explanation:
Given:
$p \left(t\right) = 6 {t}^{4} - 5 {t}^{3} + 200 t + 12000 = 175000$
This is a slightly strange question, in that there is one answer very close to a rational number, but it is not exact. Given that (and the fact that the exact algebraic solutions are horribly complicated), it seems to make sense to use a numerical method to find it...
Let:
$f \left(t\right) = p \left(t\right) - 175000 = 6 {t}^{4} - 5 {t}^{3} + 200 t - 163000$
Then the derivative of $f \left(t\right)$ is:
$f ' \left(t\right) = 24 {t}^{3} - 15 {t}^{2} + 200$
We want to find the zeros of $f \left(t\right)$ using Newton's method:
Given an approximate zero ${a}_{i}$, a better approximation is:
${a}_{i + 1} = {a}_{i} - \frac{f \left({a}_{i}\right)}{f ' \left({a}_{i}\right)}$
Applying this formula repeatedly we will get better and better approximations.
Where should we start?
Ignoring the terms in ${t}^{3}$ and $t$, we need:
$6 {t}^{4} \approx 163000$
So:
${t}^{4} \approx \frac{163000}{6} \approx 27000$
Then (for real valued solutions at least):
${t}^{2} \approx \sqrt{27000} \approx 164$
and
$t \approx \pm \sqrt{164} \approx \pm \sqrt{169} = \pm 13$
Trying ${a}_{0} = 13$
We find:
$f \left({a}_{0}\right) = 6 {\left(\textcolor{b l u e}{13}\right)}^{4} - 5 {\left(\textcolor{b l u e}{13}\right)}^{3} + 200 \left(\textcolor{b l u e}{13}\right) - 163000$
$\textcolor{w h i t e}{f \left({a}_{0}\right)} = 171366 - 10985 + 2600 - 163000$
$\textcolor{w h i t e}{f \left({a}_{0}\right)} = - 19$
$f ' \left({a}_{0}\right) = 24 {\left(\textcolor{b l u e}{13}\right)}^{3} - 15 {\left(\textcolor{b l u e}{13}\right)}^{2} + 200$
$\textcolor{w h i t e}{f ' \left({a}_{0}\right)} = 52728 - 2535 + 200$
$\textcolor{w h i t e}{f ' \left({a}_{0}\right)} = 50393$
So the next approximation would be:
${a}_{1} = {a}_{0} - \frac{f \left({a}_{0}\right)}{f ' \left({a}_{0}\right)}$
$\textcolor{w h i t e}{{a}_{1}} = 13 - \frac{- 19}{50393}$
$\textcolor{w h i t e}{{a}_{1}} = 13 + \frac{19}{50393}$
$\textcolor{w h i t e}{{a}_{1}} \approx 13.000377$
This approximation is correct to $6$ decimal places.
If we try ${a}_{0} = - 13$ instead we find:
$f \left({a}_{0}\right) = 6 {\left(\textcolor{b l u e}{- 13}\right)}^{4} - 5 {\left(\textcolor{b l u e}{- 13}\right)}^{3} + 200 \left(\textcolor{b l u e}{- 13}\right) - 163000$
$\textcolor{w h i t e}{f \left({a}_{0}\right)} = 171366 + 10985 - 2600 - 163000$
$\textcolor{w h i t e}{f \left({a}_{0}\right)} = 16751$
$f ' \left({a}_{0}\right) = 24 {\left(\textcolor{b l u e}{- 13}\right)}^{3} - 15 {\left(\textcolor{b l u e}{- 13}\right)}^{2} + 200$
$\textcolor{w h i t e}{f ' \left({a}_{0}\right)} = - 52728 - 2535 + 200$
$\textcolor{w h i t e}{f ' \left({a}_{0}\right)} = - 55063$
So the next approximation would be:
${a}_{1} = {a}_{0} - \frac{f \left({a}_{0}\right)}{f ' \left({a}_{0}\right)}$
$\textcolor{w h i t e}{{a}_{1}} = - 13 - \frac{16751}{- 55063}$
$\textcolor{w h i t e}{{a}_{1}} = - 13 + \frac{16751}{55063}$
$\textcolor{w h i t e}{{a}_{1}} \approx - 12.695785$
The actual value is closer to $- 12.6846$, so you would need to apply Newton's formula at least one more time for that kind of accuracy.
We can also use Newton's formula to find the two complex zeros, by starting with ${a}_{0} = \pm 13 i$
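Here is a minimal sketch of the same Newton iteration in Python (function names are mine; the starting values follow the reasoning above).

```python
def f(t):
    return 6*t**4 - 5*t**3 + 200*t - 163000

def f_prime(t):
    return 24*t**3 - 15*t**2 + 200

def newton(t, steps=8):
    # a_{i+1} = a_i - f(a_i)/f'(a_i)
    for _ in range(steps):
        t = t - f(t) / f_prime(t)
    return t

print(newton(13.0))    # 13.000377...
print(newton(-13.0))   # -12.6846...
# the two complex zeros can be approached the same way, starting from 13j or -13j
```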
http://arkadiusz-jadczyk.eu/blog/2017/05/03/ | ### Einstein the Stubborn
Before developing his 1915 General Theory of Relativity, Einstein held the “Entwurf” theory. Tullio Levi-Civita from Padua, one of the founders of tensor calculus, objected to a major problematic element in this theory, which reflected its global problem: its field equations were restricted to an adapted coordinate system. Einstein proved that his gravitational tensor was a covariant tensor for adapted coordinate systems. In an exchange of letters and postcards that began in March 1915 and ended in May 1915, Levi-Civita presented his objections to Einstein’s above proof. Einstein tried to find ways to save his proof, and found it hard to give it up. Finally Levi-Civita convinced Einstein about a fault in his arguments. However, only in spring 1916, long after Einstein had abandoned the 1914 theory, did he finally understand the main problem with his 1914 gravitational tensor. In autumn 1915 the Göttingen brilliant mathematician David Hilbert found the central flaw in Einstein’s 1914 derivation. On March 30, 1916, Einstein sent to Hilbert a letter admitting, “The error you found in my paper of 1914 has now become completely clear to me”.
That is what Weinstein writes about Einstein in Einstein the Stubborn: Correspondence between Einstein and Levi-Civita
Finally Einstein learned what he needed to learn and worked out his “General Relativity Theory” – the theory of gravitation based on the mathematics of the (pseudo-)Riemannian metric tensor. From the metric tensor one calculates the “Levi-Civita connection”, encoded in what are called “Christoffel symbols“. From them one calculates the curvature: ‘Matter tells space how to curve, space tells matter how to move.’ That is the essence of Einstein’s theory of gravitation. Einstein was, for a while, happy with this picture. But only for a while. Many physicists are happy with it even today.
But that is not the subject of my post today. My plan is simply to calculate the Christoffel symbols for the SL(2,R) invariant metric that we have discussed in Conformally Euclidean geometry of the upper half-plane
Let me recall the metric:
$g=\frac{1}{y^2}\begin{pmatrix}1&0\\0&1\end{pmatrix},\qquad ds^2=\frac{dx^2+dy^2}{y^2}.\qquad (1)$
So, it is a symmetric matrix that depends on the coordinates $(x,y)$ of a point $z=x+iy$ in the upper half-plane of complex numbers with positive imaginary part $y>0$. We usually write the metric $g$ as a matrix with entries $g_{ij}$, where $i,j=1,2$. We have two coordinates, we may call them $x^1=x$ and $x^2=y$. It is customary in differential geometry to use the coordinate index as the upper index. It is a convention, but a useful convention. Usually it is accompanied with another convention, so called "Einstein summation convention". This Einstein convention is that whenever we see a term that contains one symbol with index down and another symbol with index up – it means that this is an unwritten sum from $1$ to $n$, where $n$ is the number of dimensions. We are dealing with $n=2$, so whenever we see something like $a_ib^i$ or $g_{ij}v^j$ it means: $\sum_{i=1}^{2}a_ib^i$ or $\sum_{j=1}^{2}g_{ij}v^j$. The name of the repeated index does not matter, it can be any letter, just different from other letters present in the given term. We call it a "dummy index".
The metric is written with indices as lower indices. We call them "covariant". Upper indices are usually called contravariant. The metric should always be invertible, otherwise terrible things can happen, the universe may cease to have its ordinary meaning. The inverse metric is usually written as a contravariant tensor $g^{ij}$. In our case the inverse metric exists because we assume that
$y > 0. \qquad (2)$
For $y=0$ it would become zero, and the different parts of the Universe would not know what to do. There would be a confusion, the door to paranormal could get opened. And I am not joking. I wrote about it in an unpublished paper. The content of this paper was too scary for the referees of Physics Letters. The paper is available here: Vanishing Vierbein in Gauge Theories of Gravitation. As I said – it was never published – but it is being cited by others, for instance here: Phys.Rev. D62 (2000) 044004 DOI: 10.1103/PhysRevD.62.044004, or here: Found.Phys.38:7-37,2008 DOI: 10.1007/s10701-007-9190-0
Let us now calculate the Christoffel symbols for our metric. We use the standard formulas from differential geometry, they can be found in Wikipedia at Christoffel symbols of the second kind
$\Gamma^k_{ij}=\frac{1}{2}g^{kl}\left(\partial_i g_{jl}+\partial_j g_{il}-\partial_l g_{ij}\right).\qquad (3)$
The expression for $\Gamma^k_{ij}$ is symmetric in $i,j$ – the theory has no torsion. There are $6$ independent symbols that must be computed. In four dimensions one needs to calculate $40$ of these symbols – a lot of calculation, as each of these 40 symbols is a sum of four terms (sum over $l$). In old times people were not as lazy as they are today. They were calculating. Today I am using Mathematica (or Maple, or whatever). Using Mathematica, for instance, it goes as follows:
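Here is a sketch of that computation in SymPy (used here in place of Mathematica); the metric (1) is assumed, and an overall constant factor in the metric would not change the symbols.

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
coords = (x, y)

g = sp.Matrix([[1/y**2, 0],
               [0, 1/y**2]])        # the conformally Euclidean metric (1)
ginv = g.inv()

def gamma(k, i, j):
    # Gamma^k_{ij} = (1/2) g^{kl} (d_i g_{jl} + d_j g_{il} - d_l g_{ij}), summed over l
    return sp.simplify(sum(
        sp.Rational(1, 2) * ginv[k, l]
        * (sp.diff(g[j, l], coords[i]) + sp.diff(g[i, l], coords[j]) - sp.diff(g[i, j], coords[l]))
        for l in range(2)))

for k in range(2):
    for i in range(2):
        for j in range(2):
            G = gamma(k, i, j)
            if G != 0:
                print(f"Gamma^{coords[k]}_{coords[i]}{coords[j]} =", G)
# nonzero entries: Gamma^x_xy = Gamma^x_yx = -1/y, Gamma^y_xx = 1/y, Gamma^y_yy = -1/y
```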
Only four of the six symbols are different from zero, and they are very simple. Christoffel symbols enter geodesic equations, when they are different from zero – the geodesic lines curve. They may curve because coordinates are curved, or because the space is curved. We will discuss it next in our toy model. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8169575929641724, "perplexity": 510.28563520985097}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039744649.77/warc/CC-MAIN-20181118201101-20181118223101-00089.warc.gz"} |
http://asp.eurasipjournals.com/content/2011/1/138 | Research
Adaptive example-based super-resolution using kernel PCA with a novel classification approach
Takahiro Ogawa* and Miki Haseyama
Author Affiliations
Graduate School of Information Science and Technology, Hokkaido University, Sapporo, Japan
EURASIP Journal on Advances in Signal Processing 2011, 2011:138 doi:10.1186/1687-6180-2011-138
Received: 8 June 2011 Accepted: 22 December 2011 Published: 22 December 2011
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
An adaptive example-based super-resolution (SR) using kernel principal component analysis (PCA) with a novel classification approach is presented in this paper. In order to enable estimation of missing high-frequency components for each kind of texture in target low-resolution (LR) images, the proposed method performs clustering of high-resolution (HR) patches clipped from training HR images in advance. Based on two nonlinear eigenspaces, respectively, generated from HR patches and their corresponding low-frequency components in each cluster, an inverse map, which can estimate missing high-frequency components from only the known low-frequency components, is derived. Furthermore, by monitoring errors caused in the above estimation process, the proposed method enables adaptive selection of the optimal cluster for each target local patch, and this corresponds to the novel classification approach in our method. Then, by combining the above two approaches, the proposed method can adaptively estimate the missing high-frequency components, and successful reconstruction of the HR image is realized.
Keywords:
Super-resolution; resolution enhancement; image enlargement; Kernel PCA; classification
1 Introduction
In the field of image processing, high-resolution images are needed for various fundamental applications such as surveillance, high-definition TV and medical image processing [1]. However, it is often difficult to capture images with sufficient high resolution (HR) from current image sensors. Thus, methodologies for increasing resolution levels are used to bridge the gap between demands of applications and the limitations of hardware; and such methodologies include image scaling, interpolation, zooming and enlargement.
Traditionally, nearest neighbor, bilinear, bicubic [2], and sinc [3] (Lanczos) approaches have been utilized for enhancing spatial resolutions of low-resolution (LR) images. However, since they do not estimate high-frequency components missed from the original HR images, their results suffer from some blurring. In order to overcome this difficulty, many researchers have proposed super-resolution (SR) methods for estimating the missing high-frequency components, and this enhancement technique has recently been one of the most active research areas [1,4-7]. Super-resolution refers to the task which generates an HR image from one or more LR images by estimating the high-frequency components while minimizing the effects of aliasing, blurring, and noise. Generally, SR methods are divided into two categories: reconstruction-based and learning-based (example-based) approaches [7,8]. The reconstruction-based approach tries to recover the HR image from observed multiple LR images. Numerous SR reconstruction methods have been proposed in the literature, and Park et al. provided a good review of them [1]. Most reconstruction-based methods perform registration between LR images based on their motions, followed by restoration for blur and noise removal. On the other hand, in the learning-based approach, the HR image is recovered by utilizing several other images as training data. These motion-free techniques have been adopted by many researchers, and a number of learning-based SR methods have been proposed [9-18]. For example, Freeman et al. proposed example-based SR methods that estimate missing high-frequency components from mid-frequency components of a target image based on Markov networks and provide an HR image [10,11]. In this paper, we focus on the learning-based SR approach. Conventionally, learning-based SR methods using principal component analysis (PCA) have been proposed for face hallucination [19]. Furthermore, by applying kernel methods to the PCA, Chakrabarti et al. improved the performance of the face hallucination [20] based on the Kernel PCA (KPCA; [21,22]). Most of these techniques are based on global approaches in the sense that processing is done on the whole of LR images simultaneously. This imposes the constraint that all of the training images should be globally similar, i.e., they should represent a similar class of objects [7,23,24]. Therefore, the global approach is suitable for images of a particular class such as face images and fingerprint images. However, since the global approach requires the assumption that all of the training images are in the same class, it is difficult to apply it to arbitrary images.
As a solution to the above problem, several methods based on local approaches in which processing is done for each local patch within target images have recently been proposed [13,25,26]. Kim et al. developed a global-based face hallucination method and a local-based SR method of general images by using the KPCA [27]. It should be noted that even if the PCA or KPCA is used in the local approaches, all of the training local patches are not necessarily in the same class, and their eigenspace tends not to be obtained accurately. In addition, Kanemura et al. proposed a framework for expanding a given image based on an interpolator which is trained in advance with training data by using sparse Bayesian estimation [12]. This method is not based on PCA and KPCA, but calculates the Bayes-based interpolator to obtain HR images. In this method, one interpolator is estimated for expanding a target image, and thus, the image should also contain only the same kind of class. Then it is desirable that training local patches are first clustered and the SR is performed for each target local patch using the optimal cluster. Hu et al. adopted the above scheme to realize the reconstruction of HR local patches based on nonlinear eigenspaces obtained from clusters of training local patches by the KPCA [8]. Furthermore, we have also proposed a method for reconstructing missing intensities based on a new classification scheme [28]. This method performs the super-resolution by treating this problem as a missing intensity interpolation problem. Specifically, our previous method introduces two constraints, eigenspaces of HR patches and known intensities, and the iterative projection onto these constraints is performed to estimate HR images based on the interpolation of the missing intensities removed by the subsampling process. Thus, in our previous work, intensities of a target LR image are directly utilized as those of the enlarged result. Thus, if the target LR image is obtained by blurring and subsampling its HR image, the intensities in the estimated HR image contain errors.
In conventional SR methods using the PCA or KPCA, but not including our previous work [28], there have been two issues. First, it is assumed in these methods that the LR patches and their corresponding HR patches that are, respectively, projected onto linear or nonlinear eigenspaces are the same, these eigenspaces being obtained from training HR patches [8,27]. However, these two are generally different, and there is a tendency for this assumption not to be satisfied. Second, to select optimal training HR patches for target LR patches, distances between their corresponding LR patches are only utilized.
Unfortunately, it is well known that the selected HR patches are not necessarily optimal for the target LR patches, and this problem is known as the outlier problem. This problem has also been reported by Datsenko and Elad [29,30].
In this paper, we present an adaptive example-based SR method using KPCA with a novel texture classification approach. The proposed method first performs the clustering of training HR patches and generates two nonlinear eigenspaces of HR patches and their corresponding low-frequency components belonging to each cluster by the KPCA.
Furthermore, to avoid the problems of previously reported methods, we introduce two novel approaches into the estimation of missing high-frequency components for the corresponding patches containing low-frequency components obtained from a target LR image: (i) an inverse map, which estimates the missing high-frequency components, is derived from a degradation model of the LR image and the two nonlinear eigenspaces of each cluster and (ii) classification of the target patches is performed by monitoring errors caused in the estimation process of the missing high-frequency components. The first approach is introduced to solve the problem of the assumptions utilized in the previously reported methods. Then, since the proposed method directly derives the inverse map of the missing process of the high-frequency components, we do not rely on their assumptions. The second approach is introduced to solve the outlier problem. Obviously, it is difficult to perfectly perform classification that can avoid this problem as long as the high-frequency components of the target patches are completely unknown. Thus, the proposed method modifies the conventional classification schemes utilizing distances between LR patches directly. Specifically, the error caused in the estimation process of the missing high-frequency components by each cluster is monitored and utilized as a new criterion for performing the classification. This error corresponds to the minimum distance of the estimation result and the known parts of the target patch, and thus we adopt it as the new criterion. Consequently, by the inverse map determined from the nonlinear eigenspaces of the optimal cluster, the missing high-frequency components of the target patches are adaptively estimated. Therefore, successful performance of the SR can be expected. This paper is organized as follows: first, in Section 2, we briefly explain KPCA used in the proposed method. In Section 3, we discuss the formulation model of LR images. In Section 4, the adaptive KPCA-based SR algorithm is presented. In Section 5, the effectiveness of our method is verified by some results of experiments. Concluding remarks are presented in Section 6.
2 Kernel principal component analysis
In this section, we briefly explain KPCA used in the proposed method. KPCA was first introduced by Schölkopf et al. [21,22], and it is a useful tool for analyzing data which contain nonlinear structures. Given target data xi (i = 1, 2, . . . , N), they are first mapped into a feature space via a nonlinear map ϕ from the input space R^M, where M is the dimension of xi. Then we can obtain the data mapped into the feature space, ϕ(x1), ϕ(x2), . . . , ϕ(xN). For simplifying the following explanation, we assume these data are centered, i.e.,
ϕ(x1) + ϕ(x2) + . . . + ϕ(xN) = 0.     (1)
For performing PCA, the covariance matrix
R = (1/N) [ϕ(x1)ϕ(x1)' + ϕ(x2)ϕ(x2)' + . . . + ϕ(xN)ϕ(xN)']     (2)
is calculated, and we have to find eigenvalues λ and eigenvectors u which satisfy
Ru = λu.     (3)
In this paper, vector/matrix transpose in both input and feature spaces is denoted by the superscript '.
Note that the eigenvectors u lie in the span of ϕ(x1), ϕ(x2), . . . , ϕ(xN), and they can be represented as follows:
u = Ξα,     (4)
where Ξ = [ϕ(x1), ϕ(x2), . . . , ϕ(xN)] and α is an N × 1 vector. Then Equation 3 can be rewritten as follows:
RΞα = λΞα.     (5)
Furthermore, by multiplying Ξ' by both sides, the following equation can be obtained:
Ξ'RΞα = λΞ'Ξα.     (6)
Therefore, from Equation 2, R can be represented by (1/N)ΞΞ', and the above equation is rewritten as
(1/N) K^2 α = λKα,     (7)
where K = Ξ'Ξ. Finally,
Kα = Nλα     (8)
is obtained. By solving the above equation, α can be obtained, and the eigenvectors u can be obtained from Equation 4.
Note that the (i, j)th element of K is obtained by ϕ(xi)'ϕ(xj). In kernel methods, it can be obtained by using the kernel trick [21]. Specifically, it can be obtained by some kernel function κ(xi, xj) using only xi and xj in the input space.
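As a minimal sketch of the computation in Equations 1-8, the following Python snippet builds a kernel matrix (a Gaussian kernel is used here purely as an example; the choice of κ is not fixed by this section), centers it, and solves the eigenproblem of Equation 8.

```python
import numpy as np

def gaussian_kernel(X, gamma=1.0):
    # K[i, j] = kappa(x_i, x_j) = exp(-gamma * ||x_i - x_j||^2)
    sq = np.sum(X**2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))

def kpca(X, n_components=2, gamma=1.0):
    N = X.shape[0]
    K = gaussian_kernel(X, gamma)
    J = np.eye(N) - np.ones((N, N)) / N      # centering, the analogue of Equation 1
    Kc = J @ K @ J
    vals, vecs = np.linalg.eigh(Kc)          # K alpha = (N lambda) alpha, Equation 8
    order = np.argsort(vals)[::-1][:n_components]
    return vals[order] / N, vecs[:, order]   # eigenvalues lambda and coefficients alpha

X = np.random.RandomState(0).randn(100, 8)   # toy input data
lam, alpha = kpca(X)
print(lam.shape, alpha.shape)                # (2,) (100, 2)
```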
3 Formulation model of LR images
This section presents the formulation model of LR images in our method. In the common degradation model, an original HR image F is blurred and decimated, and the target LR image including the additive noise is obtained. Then, this degradation model is represented as follows:
f = DBF + n,     (9)
where f and F are, respectively, vectors whose elements are the raster-scanned intensities in the LR image f and its corresponding HR image F. Therefore, the dimensions of these vectors are, respectively, the number of pixels in f and F. D and B are the decimation and blur matrices, respectively. The vector n is the noise vector, whose dimension is the same as that of f. In this paper, we assume that n is the zero vector in order to make the problem easier. Note that if decimation is performed without any blur, the observed LR image is severely aliased.
Generally, actual LR images captured from commercially available cameras tend to be taken without suffering from aliasing. Thus, we assume that such captured LR images do not contain any aliasing effects. However, it should be noted that for realizing the SR, we can consider several assumptions, and thus, we focus on the following three cases:
Case 1 : LR images are captured based on the low-pass filter followed by the decimation procedure, and any aliasing effects do not occur, where this case corresponds to our assumption. Therefore, we should estimate the missing high-frequency components removed by the low-pass filter.
Case 2 : LR images are captured by only the decimation procedure without using any low-pass filters. In this case, some aliasing effects occur, and interpolation-based methods work better than our method.
Case 3 : LR images are captured based on the low-pass filter followed by the decimation procedure, but some aliasing effects occur. In this case, the problem becomes much more difficult than those of Cases 1 and 2. Furthermore, in our method, it becomes difficult to model this degradation process.
We focus only on Case 1 to realize the SR, but some comparisons between our method and the methods focusing on Case 2 are added in the experiments.
For the following explanation, we clarify the definitions of the following four images:
HR image F whose vector is F in Equation 9 is the original image that we try to estimate.
Blurred HR image whose vector is BF is obtained by applying the low-pass filter to the HR image F. Its size is the same as that of the HR image.
LR image f whose vector is f (= DBF) is obtained by applying the subsampling to the blurred HR image.
High-frequency components whose vector is F - BF are obtained by subtracting BF from F.
Note that the HR image, the blurred HR image, and the high-frequency components have the same size. In order to define the blurred HR image, the LR image, and the high-frequency components, we have to specify which kind of low-pass filter is utilized for defining the matrix B. Generally, it is difficult to know the details of the low-pass filter and provide the knowledge of the blur matrix B. Therefore, we simply assume that the low-pass filter is fixed to the sinc filter with the Hamming window in this paper. In the proposed method, high-frequency components of target images must be estimated from only their low-frequency components and other HR training images. This means that when the high-frequency components are perfectly removed, the problem becomes the most difficult and useful for the performance verification. Since it is well known that the sinc filter is a suitable one to effectively remove the high-frequency components, we adopted this filter. Furthermore, the sinc filter has infinite length coefficients, and thus we also adopted the Hamming window to truncate the filter coefficients. The details of the low-pass filter are shown in Section 5. Since the matrix B is fixed, we discuss the sensitivity of our method to the errors in the matrix B in Section 5.
In the proposed method, we assume that LR images are captured by applying a low-pass filter followed by decimation, and that aliasing effects do not occur. Furthermore, the decimation matrix is simply an operator which subsamples pixel values. Therefore, when the magnification factor is determined for target LR images, the matrices B and D can also be obtained in our method. Specifically, the decimation matrix D can easily be defined when the magnification factor is determined. In addition, the blurring matrix B is defined by the sinc function with the Hamming window in such a way that target LR images do not suffer from aliasing effects. In this way, the matrices B and D can be defined; in our method, however, these matrices are not directly utilized for the reconstruction. The details are shown in the following section.
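As a concrete illustration of this degradation model (Equation 9 with n = 0), a minimal sketch is given below. It assumes a separable windowed-sinc low-pass filter with illustrative values s = 4 and L = 12; the function names are ours and not part of the original formulation.

```python
import numpy as np
from scipy.ndimage import convolve

def windowed_sinc_kernel(s, L=12):
    """1-D low-pass kernel: sinc with cutoff 1/s, truncated by a Hamming window.
    s is the magnification (decimation) factor; the kernel has 2L+1 taps."""
    n = np.arange(-L, L + 1)
    h = np.sinc(n / s) * np.hamming(2 * L + 1)
    return h / h.sum()                      # normalize to unit DC gain

def degrade(F, s=4, L=12):
    """Model of Equation 9 with n = 0: blur (matrix B) followed by decimation (matrix D)."""
    h = windowed_sinc_kernel(s, L)
    blurred = convolve(F, h[None, :], mode='reflect')   # separable 2-D blur
    blurred = convolve(blurred, h[:, None], mode='reflect')
    f = blurred[::s, ::s]                   # decimation: keep every s-th pixel
    high_freq = F - blurred                 # components the method must estimate
    return f, blurred, high_freq
```

Because the low-pass cutoff matches the decimation factor, the resulting f is essentially alias-free, which corresponds to Case 1 above.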
As shown in Figure 1, by upsampling the target LR image f, we can obtain the blurred HR image . However, it is difficult to reconstruct the original HR image F from it since the high-frequency components of F are removed by the blurring. Furthermore, the reconstruction of the HR image becomes more difficult as the amount of blurring increases [7].
Figure 1. Illustration of the formulation model of LR images.
An adaptive SR method based on the KPCA with a novel texture classification approach is presented in this section. Figure 2 shows an outline of our method. First, the proposed method clips local patches from training HR images and performs their clustering based on the KPCA. Then two nonlinear eigenspaces of the HR patches and their corresponding low-frequency components are, respectively, obtained for each cluster. Furthermore, the proposed method clips a local patch from the blurred HR image and estimates its missing high-frequency components using the following novel approaches based on the obtained nonlinear eigenspaces: (i) derivation of an inverse map for estimating the missing high-frequency components of g by the two nonlinear eigenspaces of each cluster, where g is an original HR patch of and (ii) adaptive selection of the optimal cluster for the target local patch based on errors caused in the high-frequency component estimation using the inverse map in (i). As shown in Equation 9, estimation of the HR image is ill posed, and we cannot obtain the inverse map that directly estimates the missing high-frequency components. Therefore, the proposed method models the degradation process in the lower-dimensional nonlinear eigenspaces and enables the derivation of its inverse map. Furthermore, the second approach is necessary to select the optimal nonlinear eigenspaces for the target patch without suffering from the outlier problem. Then, by introducing these two approaches into the estimation of the missing high-frequency components, adaptive reconstruction of HR patches becomes feasible, and successful SR should be achieved.
Figure 2. Outline of the KPCA-based adaptive SR algorithm. The proposed method is composed of two procedures: clustering of training HR patches and estimation of missing high-frequency components of a target image. In the missing high-frequency component estimation algorithm, adaptive selection of the optimal cluster is newly introduced in the proposed method.
In order to realize the adaptive SR algorithm, the training HR patches must first be assigned to several clusters before generating each cluster's nonlinear eigenspaces. Therefore, the clustering method is described in detail in Section 4.1, and the method for estimating the missing high-frequency components of the target local patches is presented in Section 4.2.
4.1 Clustering of training HR patches
In this subsection, clustering of training HR patches into K clusters is described. In the proposed method, we calculate a nonlinear eigenspace for each cluster and enable the modeling of the elements belonging to each cluster by its nonlinear eigenspace. Then, based on these nonlinear eigenspaces, the proposed method can perform the clustering of training HR patches in this subsection and the high-frequency component estimation, which simultaneously realizes the classification of target patches for realizing the adaptive reconstruction, in the following subsection. This subsection focuses on the clustering of training HR patches based on the nonlinear eigenspaces.
From one or several training HR images, the proposed method clips local patches gi (i = 1, 2, . . . , N, where N is the number of clipped local patches), whose size is w × h pixels, at a fixed interval. Next, for each local patch, two images, and , which contain the low-frequency and high-frequency components of gi, respectively, are obtained. This means that , respectively, correspond to local patches clipped from the same position of the (a) HR image, (b) blurred HR image, and (d) high-frequency components shown in the previous section. Then the two vectors li and hi containing the raster-scanned elements of and , respectively, are calculated. Furthermore, li is mapped into the feature space via a nonlinear map [22], whose kernel function is the Gaussian kernel. Specifically, given two vectors a and b (∈ Rwh), the Gaussian kernel function in the proposed method is defined as follows:
(10)
where is a parameter of the Gaussian kernel. Then the following equation is satisfied:
(11)
Then a new vector ϕi = [ϕ(li)', hi']' is defined. Note that an exact pre-image, which is the inverse mapping from the feature space back to the input space, typically does not exist [31]. Therefore, the estimated pre-image includes some errors. Since the final results estimated in the proposed method are the missing high-frequency components, we do not utilize the nonlinear map for hi (i = 1, 2, . . . , N).
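A minimal sketch of forming the patch vectors li and hi and of the Gaussian kernel evaluations used in Equations 10 and 11 is shown below. The kernel is assumed to take the usual form exp(-||a - b||² / σ²); the exact normalization in Equation 10 may differ, and the patch positions and σ² are illustrative.

```python
import numpy as np

def gaussian_kernel(a, b, sigma2):
    """Assumed form of Equation 10: k(a, b) = exp(-||a - b||^2 / sigma2)."""
    d = a - b
    return np.exp(-np.dot(d, d) / sigma2)

def clip_patch_vectors(hr, blurred, y, x, w=8, h=8):
    """Raster-scan a w x h patch: l from the blurred HR image (low frequency),
    h_vec from HR - blurred (the missing high frequency)."""
    g = hr[y:y + h, x:x + w]
    low = blurred[y:y + h, x:x + w]
    return low.ravel(), (g - low).ravel()

def kernel_matrix(L_vectors, sigma2):
    """Gram matrix K[i, j] = k(l_i, l_j), the quantity the kernel trick works with (Equation 11)."""
    N = len(L_vectors)
    K = np.empty((N, N))
    for i in range(N):
        for j in range(N):
            K[i, j] = gaussian_kernel(L_vectors[i], L_vectors[j], sigma2)
    return K
```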
From the obtained results ϕi (i = 1, 2, . . . , N), the proposed method performs clustering that minimizes the following criterion:
(12)
where Nk is the number of elements belonging to cluster k. Generally, a superscript indicates the power of a number; in this paper, however, the superscript k denotes the cluster index rather than a power. The vectors and (j = 1, 2, . . . , Nk), respectively, represent li and hi of gi (i = 1, 2, . . . , N) assigned to cluster k. In Equation 12, the proposed method minimizes C with respect to the cluster to which each local patch gi belongs. Each known local patch belongs to the cluster whose nonlinear eigenspace provides the most accurate approximation of its low- and high-frequency components. Therefore, using Equation 12, we determine the clustering result, i.e., which cluster is optimal for each known local patch gi.
Note that in Equation 12, and in the input space are, respectively, the results projected onto the nonlinear eigenspace of cluster k. Then, in order to calculate them, we must first obtain the projection result onto the nonlinear eigenspace of cluster k for each . Furthermore, when is defined and its projection result onto the nonlinear eigenspace of cluster k is defined as in the feature space, the following equation is satisfied:
(13)
where Uk is an eigenvector matrix of cluster k, and is the mean vector of (j = 1, 2, . . . , Nk) and is obtained by
(14)
In the above equation, ek = [1, 1, . . . , 1]' is an Nk × 1 vector. As described above, is the projection result of onto the nonlinear eigenspace of cluster k, i.e., the approximation result of in the subspace of cluster k. Therefore, Equation 13 represents the projection of j-th element of cluster k onto the nonlinear eigenspace of cluster k. Note that from Equation 13, can be defined as . In detail, corresponds to the projection result of the low-frequency components in the feature space. Furthermore, corresponds to the result of the high-frequency components, and it can be obtained directly. However, in Equation 12 cannot be directly obtained since the projection result is in the feature space. Generally, we have to solve the pre-image estimation problem of from , i.e., , which satisfies , has to be estimated. In this paper, we call this pre-image approximation as [Approximation 1] for the following explanation. Generally, if we perform the pre-image estimation of from , estimation errors occur. In the proposed method, we adopt some useful derivations in the following explanation and enable the calculation of in Equation 12 without directly solving the pre-image problem of .
In the above equation,
(15)
is an eigenvector matrix of ΞkHkHkΞk', where Dk is the dimension of the eigenspace of cluster k, and it is set to the smallest value for which the cumulative proportion of the eigenvalues exceeds Th. The value Th is a threshold that determines the dimension of the nonlinear eigenspaces from the cumulative proportion. Furthermore, and Hk is a centering matrix defined as follows:
(16)
where Ek is the Nk × Nk identity matrix. The matrix Hk plays the centering role, and it is commonly used in PCA- and KPCA-based methods.
In Equation 15, the eigenvectors are infinite-dimensional since they are eigenvectors of infinite-dimensional vectors. This means that the dimension of the eigenvectors must be the same as that of . Then, since is infinite-dimensional, the dimension of is also infinite. It should be noted that since there are Dk eigenvectors , these Dk vectors span the nonlinear eigenspace of cluster k. For this reason, Equation 13 cannot be calculated directly. Thus, we introduce a computational scheme, the kernel trick, into the calculation of Equation 13. The eigenvector matrix Uk satisfies the following singular value decomposition:
(17)
where Λk is the eigenvalue matrix and Vk is the eigenvector matrix of HkΞk'ΞkHk. Therefore, Uk can be obtained as follows:
(18)
As described above, an approximation of the matrix Uk is performed. This is a common scheme in KPCA-based methods, and we hereafter call this approximation [Approximation 2]. Since the columns of the matrix Uk are infinite-dimensional, we cannot directly use this matrix for the projection onto the nonlinear eigenspace. Therefore, to solve this problem, the matrix Uk is approximated by Equation 18 to realize the kernel trick. Note that if Dk equals the rank of Ξk, the approximation in Equation 18 becomes an equality.
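In finite form, the kernel trick of Equations 15–18 reduces everything to the Nk × Nk centered Gram matrix, as in the following sketch (names are illustrative; a small constant is added before the square root purely for numerical safety).

```python
import numpy as np

def kpca_cluster_model(K, Th=0.9):
    """Kernel PCA for one cluster via the kernel trick (cf. Equations 15-18).
    K  : Nk x Nk Gram matrix of the cluster's training vectors (Equation 11).
    Th : cumulative-proportion threshold fixing the eigenspace dimension Dk."""
    Nk = K.shape[0]
    H = np.eye(Nk) - np.ones((Nk, Nk)) / Nk        # centering matrix Hk of Equation 16
    Kc = H @ K @ H                                 # centered Gram matrix Hk K Hk
    lam, V = np.linalg.eigh(Kc)
    lam, V = np.clip(lam[::-1], 0, None), V[:, ::-1]   # sort eigenvalues descending
    Dk = int(np.searchsorted(np.cumsum(lam) / lam.sum(), Th)) + 1
    A = V[:, :Dk] / np.sqrt(lam[:Dk] + 1e-12)      # expansion coefficients of Uk (cf. Equation 18)
    return A, Dk

def project_onto_eigenspace(A, K, k_x):
    """Coordinates of a new sample in the cluster's nonlinear eigenspace.
    k_x : kernel values between the new sample and the Nk training samples."""
    k_centered = k_x - K.mean(axis=1) - k_x.mean() + K.mean()
    return A.T @ k_centered
```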
From Equations 14 and 18, Equation 13 can be rewritten as
(19)
where
(20)
Next, since we utilize the nonlinear map of the Gaussian kernel, in Equation 12 satisfies
(21)
Furthermore, given and , they satisfy . Thus, from Equation 19, in Equation 21 is obtained as follows:
(22)
Then, by using Equations 21 and 22, in Equation 12 can be obtained as follows:
(23)
Furthermore, since is calculated from Equation 19 as
(24)
in Equation 12 is also obtained as follows:
(25)
Then, from Equations 23 and 25, the criterion C in Equation 12 can be calculated. It should be noted that for calculating the criterion C, we, respectively, use Approximations 1 and 2 once through Equations 21-25.
In Equation 13, Uk is utilized for the projection onto the eigenspace spanned by their eigenvectors . Therefore, the criterion C represents the sum of the approximation errors of in their eigenspaces. This means that the squared error in Equation 12 corresponds to the distance from the nonlinear eigenspace of each cluster in the input space. Then, the new criterion C is useful for the clustering of training HR local patches. From the clustering results, we can obtain the eigenvector matrix Uk for belonging to cluster k. Furthermore, we define and also calculate the eigenvector matrix for belonging to cluster k. Finally, we can, respectively, obtain the two nonlinear eigenspaces of HR training patches and their corresponding low-frequency components for each cluster k.
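The clustering of this subsection can thus be summarized as a k-means-like loop in which each training patch is reassigned to the cluster whose two eigenspaces approximate it best. The sketch below shows only the control flow; fit_cluster_model, approx_error_low, and approx_error_high are hypothetical placeholders standing for the KPCA model of the low-frequency vectors (Equations 15–18 and 23) and the eigenspace error of the high-frequency vectors (Equation 25).

```python
import numpy as np

def cluster_training_patches(l_vecs, h_vecs, K_clusters=7, n_iter=20, seed=0):
    """k-means-like clustering driven by the criterion C of Equation 12.
    l_vecs, h_vecs : (N, wh) arrays of low- and high-frequency patch vectors."""
    rng = np.random.default_rng(seed)
    N = l_vecs.shape[0]
    labels = rng.integers(0, K_clusters, size=N)          # random initial assignment
    for _ in range(n_iter):
        models = [fit_cluster_model(l_vecs[labels == k], h_vecs[labels == k])
                  for k in range(K_clusters)]
        errors = np.stack([
            approx_error_low(m, l_vecs) + approx_error_high(m, h_vecs)   # two terms of Eq. 12
            for m in models], axis=1)                                    # shape (N, K_clusters)
        new_labels = errors.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels, models
```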
4.2 Adaptive estimation of missing high-frequency components
In this subsection, we present an adaptive estimation of missing high-frequency components based on the KPCA. We, respectively, define the vectors of g and as ϕ* = [ϕ(l)', h']' and in the same way as ϕi and . From the above definitions, the following equation is satisfied:
(26)
where Ep × q and Op × q are, respectively, the identity matrix and the zero matrix whose sizes are p × q. Furthermore, Dϕ represents the dimension of the feature space, i.e., infinite dimension in our method. The matrix is the identity matrix whose dimension is the same as that of ϕ(l) and Owh × wh represents the zero matrix which removes the high-frequency components. As shown in the previous section, our method assumes that LR images are obtained by removing their high-frequency components, and aliasing effects do not occur. This means our problem is to estimate the perfectly removed high-frequency components from the known low-frequency components. Therefore, the problem shown in this section is equivalent to Equation 9, and the solution that is consistent with Equation 9 can be obtained.
In Equation 26, since the matrix Σ is singular, we cannot directly calculate its inverse matrix to estimate the missing high-frequency components h and obtain the original HR image. Thus, the proposed method, respectively, maps ϕ* and onto the nonlinear eigenspace of HR patches and that of their low-frequency components in cluster k. Furthermore, the projection corresponding to the inverse matrix of Σ is derived in these subspaces. We show its specific algorithm in the rest of this subsection and its overview is shown in Figure 3.
Figure 3. Overview of the algorithm for estimating missing high-frequency components. The direct estimation of HR patches from their LR patches is infeasible since the matrix Σ is singular and its inverse matrix cannot be obtained. Thus, the proposed method projects those two patches onto the low-dimensional subspaces and enables the derivation of the projection corresponding to the inverse matrix of Σ.
First, the vector ϕ* is projected onto the Dk-dimensional nonlinear eigenspace of cluster k by using the eigenvector matrix Uk as follows:
(27)
Furthermore, the vector is also projected onto the Dk-dimensional nonlinear eigenspace of cluster k by using the eigenvector matrix as follows:
(28)
where is defined as
(29)
and . Furthermore, ϕ* is approximately calculated as follows:
(30)
In the above equation, the vector of the original HR patch is approximated in the nonlinear eigenspace of cluster k; we call this approximation [Approximation 3]. The nonlinear eigenspace of cluster k performs the least-squares approximation of its member elements. Therefore, if the target local patch belongs to cluster k, an accurate approximation can be realized. The proposed method therefore introduces, in the following explanation, a classification procedure for determining which cluster contains the target local patch. Next, by substituting Equations 26 and 30 into Equation 28, the following equation is obtained:
(31)
Thus,
(32)
since
(33)
The vector corresponds to the mean vector of the vectors whose high-frequency components are removed from . Then
(34)
is derived, where .
In Equation 32, if the rank of Σ is larger than Dk, the matrix becomes a non-singular matrix, and its inverse matrix can be calculated. In detail, the rank of the matrices and Uk is Dk. Although the rank of Σ is not full and its inverse matrix cannot be directly obtained, the rank of becomes min (Dk, rank(Σ)). Therefore, if rank(Σ) ≥ Dk, can be calculated. Then
(35)
Finally, by substituting Equations 27 and 28 into the above equation, the following equation can be obtained:
(36)
Then we can calculate an approximation result of ϕ* from cluster k's eigenspace as follows:
(37)
Furthermore, in the same way as Equation 19, we can obtain the following equation:
(38)
where Tk is calculated as follows:
(39)
and is an eigenvector matrix of . Note that the quantity to be estimated is the vector h of the unknown high-frequency components. Equation 38 can be rewritten as
(40)
where . Thus, from Equations 14 and 40, the vector hk, which is the estimation result of h by cluster k, is calculated as follows:
(41)
Then, by utilizing the nonlinear eigenspace of cluster k, the proposed method can estimate the missing high-frequency components. In this scheme, we, respectively, use Approximations 2 and 3 once through Equations 31-41.
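To convey the structure of Equations 27–41 without the kernel algebra, the following is a deliberately simplified, plain-PCA (linear) analogue rather than the paper's kernel-based estimator: the HR patch vector [l; h] is modeled by a Dk-dimensional eigenspace, only the low-frequency block is observed, and a small Dk × Dk system is inverted to recover the high-frequency block. The condition Dk ≤ wh mirrors the condition rank(Σ) ≥ Dk above; all names are illustrative.

```python
import numpy as np

def fit_linear_analogue(X_hr, Th=0.9):
    """PCA model of one cluster's HR patch vectors x = [l; h] (rows of X_hr)."""
    mu = X_hr.mean(axis=0)
    Xc = X_hr - mu
    U, s, _ = np.linalg.svd(Xc.T, full_matrices=False)   # columns of U = principal directions
    Dk = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), Th)) + 1
    return mu, U[:, :Dk]

def estimate_high_freq(l, mu, U, wh):
    """Observe only the first wh entries (the low-frequency part l) and recover h.
    Solving the small Dk x Dk system mirrors Equations 32 and 35; reading off the
    high-frequency rows mirrors Equation 41. Requires Dk <= wh (cf. rank(Sigma) >= Dk)."""
    U_low, U_high = U[:wh, :], U[wh:, :]
    mu_low, mu_high = mu[:wh], mu[wh:]
    c = np.linalg.solve(U_low.T @ U_low, U_low.T @ (l - mu_low))
    return mu_high + U_high @ c
```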
The proposed method enables the calculation of the inverse map which can directly reconstruct the high-frequency components. The previously reported methods [8,27] simply project the known frequency components onto the eigenspaces of the HR patches, and their schemes do not correspond to estimation of the missing high-frequency components. Thus, these methods do not always provide optimal solutions. On the other hand, the proposed method can provide the optimal estimation results if the target local patches can be represented correctly in the obtained eigenspaces. This is the biggest difference between our method and the conventional methods.
Furthermore, we analyze our method in detail as follows.
It is well-known that the elements of , which are gi belonging to cluster k, can be correctly approximated in their nonlinear eigenspace in the least-squares sense. Therefore, if we can appropriately classify the target local patch into the optimal cluster from only the known parts , the proposed method successfully estimates the missing high-frequency components h by its nonlinear eigenspace. Unfortunately, if we directly utilize for selecting the optimal cluster, it might be impossible to avoid the outlier problem. Thus, in order to achieve classification of the target local patch without suffering from this problem, the proposed method utilizes the following novel criterion as a substitute for Equation 12:
(42)
where lk is a pre-image of . In the above equation, since we utilize the nonlinear map of the Gaussian kernel, ||l - lk||2 satisfies the following:
(43)
and is calculated from Equations 14 and 40 below.
(44)
Then, from Equations 43 and 44, the criterion in Equation 42 can be rewritten as follows:
(45)
In this derivation, Approximation 1 is used once. The criterion represents the squared error calculated between the low-frequency components lk reconstructed with the high-frequency components hk by cluster k's nonlinear eigenspace and the known original low-frequency components l.
We introduce the new criterion into the classification of the target local patch as shown in Equations 42 and 45. Equations 42 and 45 utilized in the proposed method represent the errors of the low-frequency components reconstructed with the high-frequency components by Equation 40. In the proposed method, if both of the target low-frequency and high-frequency components are perfectly represented by the nonlinear eigenspaces of cluster k, the approximation relationship in Equation 32 becomes an equality. Therefore, if we can ignore the approximation in Equation 38, the original HR patches can be reconstructed perfectly. In such a case, the errors caused in the low-frequency and high-frequency components become zero. However, if we apply the proposed method to general images, the target low-frequency and high-frequency components cannot be perfectly represented by the nonlinear eigenspaces of one cluster, and errors arise in those two components. Specifically, the caused errors are obtained as
(46)
from the estimation results. However, we cannot calculate the above equation since the true high-frequency components h are unknown. The last term ||h - hk||2 always takes some finite value; since h is unknown, this term cannot be computed, and an assumption becomes necessary. We therefore assume that this term is constant, i.e., setting ||h - hk||2 = 0 does not change the result. Thus, we set ||h - hk||2 = 0 and calculate the minimum errors of . This means the proposed method utilizes the minimum errors caused in the HR result estimated by the inverse projection which can optimally provide the original image for the elements of each cluster. The proposed method then utilizes the error in Equation 45 as the criterion for the classification. The previously reported KPCA-based method [8] only applied the simple k-means method to the known low-frequency components for clustering and classification. This approach is thus quite independent of the KPCA-based reconstruction scheme, and there is no guarantee that it provides optimal clustering and classification results. On the other hand, the proposed method derives all of the criteria for the clustering and the classification from the KPCA-based reconstruction scheme. Therefore, it can be expected that this difference between the previously reported method and our method provides a solution to the outlier problem.
From the above explanation, we can see that the criterion in Equation 45 is suitable for classifying the target local patch into the optimal cluster kopt. Then, the proposed method regards estimated by the selected cluster kopt as the output, and becomes the estimated vector of the target HR patch g.
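Continuing the simplified linear analogue above, cluster selection can be sketched as follows: each cluster produces a high-frequency estimate, the low-frequency part implied by the same eigenspace coordinates is compared with the known l, and the cluster with the smallest error is selected (the counterpart of Equation 45).

```python
import numpy as np

def select_cluster_and_estimate(l, cluster_models, wh):
    """cluster_models: list of (mu, U) pairs, one per cluster (see fit_linear_analogue).
    Returns the high-frequency estimate of the best cluster and its selection error."""
    best_err, best_h = np.inf, None
    for mu, U in cluster_models:
        U_low, U_high = U[:wh, :], U[wh:, :]
        mu_low, mu_high = mu[:wh], mu[wh:]
        c = np.linalg.solve(U_low.T @ U_low, U_low.T @ (l - mu_low))
        l_rec = mu_low + U_low @ c              # low-frequency part implied by this cluster
        err = float(np.sum((l - l_rec) ** 2))   # counterpart of the criterion in Equation 45
        if err < best_err:
            best_err, best_h = err, mu_high + U_high @ c
    return best_h, best_err
```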
As described above, it becomes feasible to reconstruct the HR patches from the optimal cluster in the proposed method. Finally, we clip local patches (w × h pixels) at the same interval from the blurred HR image and reconstruct their corresponding HR patches. Note that each pixel has multiple reconstruction results if the clipping interval is smaller than the size of the local patches. In such a case, the proposed method outputs the result minimizing Equation 45 as the final result. Then, the adaptive SR can be realized by the proposed method.
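Reconstruction of the whole image from overlapping patches, with the minimum-error rule used for pixels covered by several patches, might then look like the following sketch (the stride is an illustrative choice; w = h = 8 follows the experimental setting, and select_cluster_and_estimate is the helper from the previous sketch).

```python
import numpy as np

def reconstruct_image(blurred_hr, cluster_models, w=8, h=8, stride=4):
    """Estimate high-frequency components patch by patch and keep, for every pixel,
    the result of the patch with the smallest selection error (Equation 45)."""
    H_img, W_img = blurred_hr.shape
    high = np.zeros((H_img, W_img))
    best = np.full((H_img, W_img), np.inf)
    for y in range(0, H_img - h + 1, stride):
        for x in range(0, W_img - w + 1, stride):
            l = blurred_hr[y:y + h, x:x + w].ravel()
            h_hat, err = select_cluster_and_estimate(l, cluster_models, w * h)
            win = (slice(y, y + h), slice(x, x + w))
            mask = err < best[win]
            high[win][mask] = h_hat.reshape(h, w)[mask]
            best[win][mask] = err
    return blurred_hr + high     # HR estimate = known low frequency + estimated high frequency
```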
5 Experimental results
In this section, we verify the performance of the proposed method. As shown in Figures 4a, 5a, and 6a, we prepared three test images, Lena, Peppers, and Goldhill, which are utilized in many papers. In order to obtain their LR images shown in Figures 4b, 5b, and 6b, we subsampled them to quarter size by using the sinc filter with the Hamming window. Specifically, the filter w(m, n) of size (2L + 1) × (2L + 1) is defined as
Figure 4. Comparison of results (Test image "Lena", 512 × 512 pixels) obtained by using different image enlargement methods. (a) Original HR image. (b) LR image of (a). HR image reconstructed by (c) sinc interpolation, (d) reference [11], (e) reference [27], (f) reference [8], (g) reference [12], (h) reference [28], and (i) the proposed method.
Figure 5. Comparison of results (Test image "Peppers", 512 × 512 pixels) obtained by using different image enlargement methods. (a) Original HR image. (b) LR image of (a). HR image reconstructed by (c) sinc interpolation, (d) reference [11], (e) reference [27], (f) reference [8], (g) reference [12], (h) reference [28], and (i) the proposed method.
Figure 6. Comparison of results (Test image "Goldhill" (512 × 512 pixels) obtained by using different image enlargement methods. (a) Original HR image. (b) LR image of (a). HR image reconstructed by (c) sinc interpolation, (d) reference [11], (e) reference [27], (f) reference [8], (g) reference [12], (h) reference [28], and (i) the proposed method.
(47)
where s corresponds to the magnification factor, and we set L = 12. In these figures, we simply enlarged the LR images to the size of the original images. When we estimate an HR result from its LR image, the other two HR images and Boat, Girl, and Mandrill are utilized as the training data. In the proposed method, we simply set its parameters as follows: w = 8, h = 8, , , Th = 0.9, is 0.075 times the variance of ||li - lj||2 (i, j = 1, 2, . . . , N), and K = 7. Note that the parameters and K seem to affect the performance of the proposed method. Thus, we discuss the determination of these two parameters and their sensitivities in the Appendix. In this experiment, we applied the previously reported methods and the proposed method to Lena, Peppers, and Goldhill and obtained their HR results, where the magnification factor was set to four. For comparison, we adopt the method utilizing the sinc interpolation, which uses the same filter as the downsampling process and is the most traditional approach, and three previously reported methods [8,11,27]. Since the method in [11] is a representative method of example-based super-resolution, we utilized this method in the experiment. Furthermore, the method [27] is also a representative method which utilizes KPCA for performing super-resolution, and its improvement is achieved by utilizing the classification scheme in [8]. Therefore, these two methods are suitable for comparison to verify the proposed KPCA-based method including the novel classification approach. In addition, the methods in [12,28] have been proposed for realizing accurate SR. Since these methods can be regarded as state-of-the-art, we also adopted them for comparison with the proposed method.
First, we focus on test image Lena shown in Figure 4. We, respectively, show the HR images estimated by the sinc interpolation, the previously reported methods [8,11,12,27,28], and the proposed method in Figures 4c-i. In the experiments, the HR images estimated by both the conventional methods and the proposed method were simply high-boost filtered for better comparison, as shown in [27]. From the zoomed portions shown in Figures 7 and 8, we can see that the proposed method preserves the sharpness more successfully than do the previously reported methods. Furthermore, from the other two results shown in Figures 5, 6, and 9, 10, 11, 12, we can see that various kinds of images are successfully reconstructed by our method. As shown in Figures 4, 5, 6, 7, 8, 9, 10, 11, 12, Goldhill contains more high-frequency components than the other two test images Lena and Peppers. Therefore, the difference in performance between the previously reported methods and the proposed method becomes significant.
Figure 7. Zoomed example 1 of test image "Lena". (a)-(i) Zoomed portions of Figure 4a-i, respectively.
Figure 8. Zoomed example 2 of test image "Lena". (a)-(i) Zoomed portions of Figure 4a-i, respectively.
Figure 9. Zoomed example 1 of test image "Peppers". (a)-(i) Zoomed portions of Figure 5a-i, respectively.
Figure 10. Zoomed example 2 of test image "Peppers". (a)-(i) Zoomed portions of Figure 5a-i, respectively.
Figure 11. Zoomed example 1 of test image "Goldhill". (a)-(i) Zoomed portions of Figure 6a-i, respectively.
Figure 12. Zoomed example 2 of test image "Goldhill". (a)-(i) Zoomed portions of Figure 6a-i, respectively.
In the previously reported methods, the obtained HR images tend to be blurred in edge and texture regions. In detail, the proposed method keeps the sharpness in the edge regions of test image Lena, as shown in Figure 7. Furthermore, in the texture regions shown in Figure 8, the difference between the proposed method and the other methods becomes significant. In Figures 9 and 10, the center regions contain more high-frequency components compared with the other regions; the proposed method successfully reconstructs sharp edges and textures there. As described above, since test image Goldhill contains more high-frequency components than the other two test images, the difference between our method and the other methods is quite significant. In particular, in Figure 11, roofs and windows are successfully reconstructed by the proposed method while keeping their sharpness. In addition, in Figure 12, the whole area is also accurately enhanced.
Some previously reported methods such as [12,27] estimate a single model for performing the SR. Then, if various kinds of training images are provided, it becomes difficult to estimate the high-frequency components successfully, and the obtained results tend to be blurred. Thus, we have to perform clustering of training patches in advance and reconstruct the high-frequency components by the optimal cluster. However, if the selection of the optimal cluster is not accurate, the estimation of the high-frequency components also becomes difficult. We surmise that the limitation of the method in [8] arises for this reason. A detailed analysis is given later.
Note that our previously reported method [28] also includes classification procedures, but its SR approach is different from the present one. The method in [28] performs the SR by interpolating new intensities between the intensities of LR images; hence, its degradation model is different from that of this paper, and it suffers from some degradation. On the other hand, the proposed method realizes super-resolution by estimating the missing high-frequency components removed by the blurring in the downsampling process. In detail, the proposed method derives the inverse projection of the blurring process by using the nonlinear eigenspaces. Since the estimation of the inverse projection for the blurring process is an ill-posed problem, the proposed method performs the approximation of the blurring process in the low-dimensional subspaces, i.e., the nonlinear eigenspaces, and enables the derivation of its inverse projection.
Next, in order to quantitatively verify the performance of the proposed method and the previously reported methods in Figures 4, 5, 6, we show the structural similarity (SSIM) index [32] in Table 1. It has been reported that the mean squared error (MSE), the peak signal-to-noise ratio, and their variants may not have a high correlation with visual quality [8,32-34]. Recent advances in full-reference image quality assessment (IQA) have resulted in the emergence of several powerful perceptual distortion measures that outperform the MSE and its variants. The SSIM index is utilized as a representative measure in many fields of image processing, and thus, we adopt the SSIM index in this experiment. As shown in Table 1, the proposed method has the highest values for all test images. Therefore, our method realizes successful example-based super-resolution both subjectively and quantitatively.
Table 1. Image reconstruction performance comparison (SSIM) of the proposed method and the previously reported methods
As described above, the MSE cannot reflect perceptual distortions; its value becomes higher for images altered with distortions such as mean luminance shift, contrast stretch, spatial shift, spatial scaling, and rotation, even though the loss of subjective image quality is negligible. Furthermore, blurring severely deteriorates the image quality, but its MSE becomes lower than those of the above alterations. On the other hand, the SSIM index is defined by separately calculating three similarities in terms of luminance, variance, and structure, which are derived based on the human visual system (HVS) not accounted for by the MSE. Therefore, it is a better quality measure providing a solution to the above problem, and this has also been confirmed by several researchers.
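For reference, the SSIM values of Table 1 can be reproduced with scikit-image's implementation of [32], e.g.:

```python
from skimage.metrics import structural_similarity

def ssim_score(hr_true, hr_estimated):
    """SSIM [32] between the original HR image and a reconstruction (float arrays)."""
    return structural_similarity(hr_true, hr_estimated,
                                 data_range=hr_true.max() - hr_true.min())
```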
We discuss the effectiveness of the proposed method. As explained above, many previously reported methods which utilize PCA or KPCA for the SR assume that the projections of LR patches (middle-frequency components) and of their corresponding HR patches (high-frequency components) onto linear or nonlinear eigenspaces are the same. However, this assumption tends not to be satisfied for general images. On the other hand, the proposed method derives the inverse map, which enables estimation of the missing high-frequency components in the nonlinear eigenspace of each cluster, and solves the conventional problem. Furthermore, the proposed method monitors the error caused in the above high-frequency component estimation process and utilizes it for selecting the optimal cluster. This approach, therefore, solves the outlier problem of the conventional methods. In order to confirm the effectiveness of this novel approach, we show the percentage of target local patches that can be classified into correct clusters. Note that the ground truth can be obtained by using their original HR images. From the obtained results, the previously reported method [8] can correctly classify about 9.29% of the patches and suffers from the outlier problem. On the other hand, the proposed method selects the optimal clusters for all target patches, i.e., we can correctly classify all patches using Equation 45 even though we cannot utilize Equations 12 and 46. Furthermore, we show the results of the classification performed for the three test images in Figures 13, 14, 15. Since the proposed method assigns local images to seven clusters, seven assignment results are shown for each image. In these figures, the white areas represent the areas reconstructed by cluster k (k = 1, 2, . . . , 7). Note that the proposed method performs the estimation of the missing high-frequency components for the overlapped patches, and thus, these figures show the pixels whose high-frequency components are estimated by the cluster k minimizing Equation 45. Then the effectiveness of our new approach is verified. Also, in the previously reported method [11], the performance of the SR severely depends on the provided training images, and it tends to suffer from the outlier problem. Consequently, by introducing the new approaches into the estimation scheme of the high-frequency components, accurate reconstruction of the HR images can be realized by the proposed method.
Figure 13. Classification results of "Lena". (a)-(g), respectively, correspond to clusters 1-7.
Figure 14. Classification results of "Peppers". (a)-(g), respectively, correspond to clusters 1-7.
Figure 15. Classification results of "Goldhill". (a)-(g) respectively, correspond to clusters 1-7.
Next, we discuss the sensitivity of the proposed method and the previously reported methods to errors in the matrix B. Specifically, we calculated the LR images using the Haar and Daubechies filters and reconstructed their HR images using the proposed and conventional methods, as shown in Figures 16, 17, 18. From the obtained results, it is observed that neither the previously reported methods nor the proposed method is particularly sensitive to errors in the matrix B. In the proposed method, the inverse projection for estimating the missing high-frequency components is obtained without directly using the matrix B. The previously reported methods also do not utilize the matrix B directly. Thus, they tend not to suffer from degradation due to errors in the matrix B.
Figure 16. HR image reconstructed by the previously reported methods and the proposed method from the LR images obtained by the Haar and Daubechies filters (Test image "Lena"). HR image reconstructed from the LR image obtained by using the Haar filter by (a) reference [11] (SSIM index: 0.7941), (b) reference [27] (SSIM index: 0.8235), (c) reference [8] (SSIM index: 0.8159), (d) reference [12] (SSIM index: 0.8428), (e) reference [28] (SSIM index: 0.8337), and (f) the proposed method (SSIM index: 0.8542). HR image reconstructed from the LR image obtained by using the Daubechies filter by (g) reference [11] (SSIM index: 0.7950), (h) reference [27] (SSIM index: 0.8455), (i) reference [8] (SSIM index: 0.8148), (j) reference [12] (SSIM index: 0.8458), (k) reference [28] (SSIM index: 0.8320), and (l) the proposed method (SSIM index: 0.8508).
Figure 17. Zoomed example 1 of Figure 16. (a)-(l) Zoomed portions of Figure 16a-l, respectively.
Figure 18. Zoomed example 2 of Figure 16. (a)-(l) Zoomed portions of Figure 16a-l, respectively.
Finally, we show some experimental results obtained by applying the previously reported methods and the proposed method to actual LR images captured with a commercially available camera "Canon IXY DIGITAL 50". We, respectively, show two test images in Figures 19a and 20a and their training images in Figures 19b, c and 20b, c. The upper-left and lower-left areas in Figures 19a and 20a, respectively, correspond to the target images, and they were enlarged by the previously reported methods and the proposed method as shown in Figures 21 and 22, where the magnification factor was set to eight. It should be noted that the experiments were performed under the same conditions as those shown in Figures 4, 5, and 6. From the obtained results, we can see that the proposed method also realizes more successful reconstruction of the HR images than the previously reported methods. As shown in Figures 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, the difference between the proposed method and the previously reported methods becomes more significant as the amount of high-frequency components in the target images becomes larger. In detail, the regions containing sculptures and characters, shown in Figures 21 and 22, respectively, have been successfully reconstructed by the proposed method.
Figure 19. Test image and Training images. (a) Target image (101 × 101 pixels) whose upper-left area (50 × 50 pixels) is enlarged as shown in Figure 21. (b) Training image 1 (1600 × 1200 pixels). (c) Training image 2 (1600 × 1200 pixels).
Figure 20. Test image and training images. (a) Target image (101 × 101 pixels) whose lower-left area (50 × 50 pixels) is enlarged as shown in Figure 22. (b) Training image 1 (1600 × 1200 pixels). (c) Training image 2 (1600 × 1200 pixels).
Figure 21. Results obtained by applying the previously reported methods and the proposed method to the actual LR image shown in Figure 19a: (a) LR image. HR image reconstructed by (b) sinc interpolation, (c) reference [11], (d) reference [27], (e) reference [8], (f) reference [12], (g) reference [28], and (h) the proposed method. The magnification factor is set to eight, and the obtained results are 400 × 400 pixels.
Figure 22. Results obtained by applying the previously reported methods and the proposed method to the actual LR image shown in Figure 20a. (a) LR image. HR image reconstructed by (b) sinc interpolation, (c) reference [11], (d) reference [27], (e) reference [8], (f) reference [12], (g) reference [28], and (h) the proposed method. The magnification factor is set to eight, and the obtained results are 400 × 400 pixels.
6 Conclusions
In this paper, we have presented an adaptive SR method based on KPCA with a novel texture classification approach. In order to obtain accurate HR images, the proposed method first performs clustering of the training HR patches and derives an inverse map for estimating the missing high-frequency components from the two nonlinear eigenspaces of training HR patches and their corresponding low-frequency components in each cluster. Furthermore, the adaptive selection approach of the optimal cluster based on the errors caused in the estimation process of the missing high-frequency components enables each HR patch to be reconstructed successfully. Then, by combining the above two approaches, the proposed method realizes adaptive example-based SR. Finally, the improvement of the proposed method over previously reported methods was confirmed.
In the experiments, the parameters of our method were set to simple values determined from preliminary experiments. These parameters should be adaptively determined from the observed images. Thus, we need to develop such a determination algorithm in future work.
Abbreviations
HR: high-resolution; KPCA: kernel principal component analysis; LR: low-resolution; PCA: principal component analysis; SR: super-resolution.
Appendix: Determination of parameters
The determination of the parameters utilized in the proposed method is shown here. The parameters which seem to affect the performance of the proposed method are and K. Therefore, we vary these parameters and discuss the determination of their optimal values and their sensitivities. Specifically, we set to α times the variance of ||li - lj||2 (i, j = 1, 2, . . . , N), where α was changed as α = 0.05, 0.075, . . . , 0.2. Furthermore, K was set to K = 4, 5, . . . , 10. In the experiments, the magnification factor was set to two for simplicity. Figure 23 shows the relationship between , K, and the SSIM index of the reconstruction results for six test images: Lena, Peppers, Goldhill, Boat, Girl, and Mandrill. Note that for each test image, the other five HR images were utilized as the training images. The determination of the parameters and K and their sensitivities are discussed as follows:
Figure 23. Relationship between , K, and the SSIM index of the reconstruction results. Results of (a) Lena, (b) Peppers, (c) Goldhill, (d) Boat, (e) Girl, and (f) Mandrill.
Parameter of the Gaussian kernel: From Figure 23, we can see that the SSIM index almost monotonically increases with decreasing . When the parameter of the Gaussian kernel is set to a larger value, the expression ability of local patches tends to become worse. On the other hand, if it is set to a smaller value, overfitting tends to occur. Therefore, from this figure, we set the parameter of the Gaussian kernel to 0.075 × the variance of ||li - lj||2, since the performance of the proposed method for the three test images tends to become the highest. Note that this parameter is not particularly sensitive, as shown in the results of Figure 23, i.e., the results do not change much even if we set it to larger or smaller values.
Number of clusters K (= 7): From Figure 23, we can see that the SSIM index of the proposed method becomes the highest when K = 7 for several images, and the performance is not severely sensitive to the value of K. The parameter K is the number of clusters, and it should be set to the number of textures contained in the target image. However, since it is difficult to automatically find the number of textures in the target image, we simply set K = 7 in the experiments. The adaptive determination of the number of clusters will be the subject of subsequent reports.
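The parameter study of this appendix amounts to a small grid search over α and K; a schematic version with hypothetical training and evaluation callables is:

```python
import itertools

def parameter_sweep(train_fn, evaluate_fn, base_var,
                    alphas=(0.05, 0.075, 0.1, 0.125, 0.15, 0.175, 0.2),
                    Ks=range(4, 11)):
    """Grid search over the kernel parameter (sigma^2 = alpha * base_var) and the
    number of clusters K. train_fn and evaluate_fn are hypothetical callables:
    train_fn(sigma2, K) -> model, evaluate_fn(model) -> SSIM of the reconstruction."""
    results = {}
    for alpha, K in itertools.product(alphas, Ks):
        model = train_fn(sigma2=alpha * base_var, K=K)
        results[(alpha, K)] = evaluate_fn(model)
    best_setting = max(results, key=results.get)
    return results, best_setting
```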
Competing interests
The authors declare that they have no competing interests.
Acknowledgements
This work was partly supported by Grant-in-Aid for Scientific Research (B) 21300030, Japan Society for the Promotion of Science (JSPS).
References
1. SC Park, MK Park, MG Kang, Super-resolution image reconstruction: A technical overview. IEEE Signal Proces Mag 20(3), 21–36 (2003). Publisher Full Text
2. R Keys, Cubic convolution interpolation for digital image processing. IEEE Trans Acoust Speech Signal Proces 29(6), 1153–1160 (1981). Publisher Full Text
3. AV Oppenheim, RW Schafer, Discrete-Time Signal Processing, 2nd edn. (Prentice Hall, New Jersey, 1999)
4. S Baker, T Kanade, Limits on super-resolution and how to break them. IEEE Trans Pattern Anal Mach Intell 24(9), 1167–1183 (2002). Publisher Full Text
5. S Farsiu, D Robinson, M Elad, P Milanfar, Advances and challenges in super-resolution. Int J Imaging Syst Technol 14(2), 47–57 (2004). Publisher Full Text
6. JD van Ouwerkerk, Image super-resolution survey. Image Vis Comput 24(10), 1039–1052 (2006). Publisher Full Text
7. CV Jiji, S Chaudhuri, P Chatterjee, Single frame image super-resolution: should we process locally or globally? Multidimens Syst Signal Process 18(2-3), 123–125 (2007). Publisher Full Text
8. Y Hu, T Shen, KM Lam, S Zhao, A novel example-based super-resolution approach based on patch classification and the KPCA prior model. Comput Intell Secur 1, 6–11 (2008)
9. A Hertzmann, CE Jacobs, N Oliver, B Curless, DH Salesin, Image analogies. Comput Graph (Proc Siggraph) 2001, 327–340 (2001)
10. WT Freeman, EC Pasztor, OT Carmichael, Learning low-level vision. Int J Comput Vis 40, 25–47 (2000). Publisher Full Text
11. WT Freeman, TR Jones, EC Pasztor, Example-based super-resolution. IEEE Comput Graph Appl 22(2), 56–65 (2002). Publisher Full Text
12. A Kanemura, S Maeda, S Ishii, Sparse Bayesian learning of filters for efficient image expansion. IEEE Trans Image Process 19(6), 1480–1490 (2010). PubMed Abstract | Publisher Full Text
13. TA Stephenson, T Chen, Adaptive Markov random fields for example-based super-resolution of faces. EURASIP J Appl Signal Process 2006, 225–225 (2006)
14. Q Wang, X Tang, H Shum, Patch based blind image super resolution. Proceedings of ICCV 2005 1, 709–716 (2005)
15. X Li, KM Lam, G Qiu, L Shen, S Wang, An efficient example-based approach for image super-resolution. Proceedings of ICNNSP 2008, 575–580 (2008)
16. J Sun, N Zheng, H Tao, H Shum, Image hallucination with primal sketch priors. Proceedings of IEEE CVPR '03 2, 729–736 (2003)
17. CV Jiji, MV Joshi, S Chaudhuri, Single-frame image super-resolution using learned wavelet coefficients. Int J Imaging Syst Technol 14(3), 105–112 (2004). Publisher Full Text
18. CV Jiji, S Chaudhuri, Single-frame image super-resolution through contourlet learning. EURASIP J Appl Signal Process 2006(10), 1–11 (2006)
19. X Wang, X Tang, Hallucinating face by eigentransformation. IEEE Trans Syst Man Cybern 35(3), 425–434 (2005). Publisher Full Text
20. A Chakrabarti, AN Rajagopalan, R Chellappa, Super-resolution of face images using kernel PCA-based prior. IEEE Trans Multimedia 9(4), 888–892 (2007)
21. B Schölkopf, A Smola, KR Müller, Nonlinear principal component analysis as a kernel eigenvalue problem. Neural Comput 10, 1299–1319 (1998). Publisher Full Text
22. B Schölkopf, S Mika, C Burges, P Knirsch, KR Müller, G Rätsch, A Smola, Input space versus feature space in kernel-based methods. IEEE Trans Neural Netw 10(5), 1000–1017 (1999). PubMed Abstract | Publisher Full Text
23. S Chaudhuri, MV Joshi, Motion-Free Super-Resolution (Springer, New York, 2005)
24. M Turk, A Pentland, Eigenfaces for recognition. J Cogn Neurosci 3, 71–86 (1991). Publisher Full Text
25. C Bishop, A Blake, B Marthi, Super-resolution enhancement of video. Proceedings of 9th International Workshop on Artificial Intelligence and Statistics (AISTATS '03) (Key West, 2003)
26. KI Kim, Y Kwon, Example-based learning for single-image super-resolution. Proceedings of the 30th DAGM Symposium on Pattern Recognition. Lecture Notes in Computer Science, 456–465 (2008)
27. KI Kim, B Schölkopf, Iterative kernel principal component analysis for image modeling. IEEE Trans Pattern Anal Mach Intell 27(9), 1351–1366 (2005). PubMed Abstract | Publisher Full Text
28. T Ogawa, M Haseyama, Missing intensity interpolation using a kernel PCA-based POCS algorithm and its applications. IEEE Trans Image Process 20(2), 417–432 (2011). PubMed Abstract | Publisher Full Text
29. D Datsenko, M Elad, Example-based single document image super-resolution: a global MAP approach with outlier rejection. Multidimens Syst Signal Process 18(2-3), 103–121 (2007). Publisher Full Text
30. M Elad, D Datsenko, Example-based regularization deployed to super-resolution reconstruction of a single image. Comput J 52, 15–30 (2009)
31. JTY Kwok, IWH Tsang, The pre-image problem in kernel methods. IEEE Trans Neural Netw 15(6), 1517–1525 (2004). PubMed Abstract | Publisher Full Text
32. Z Wang, AC Bovik, HR Sheikh, EP Simoncelli, Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process 13(4), 600–612 (2004). PubMed Abstract | Publisher Full Text
33. I Avcbas, B Sankur, K Sayood, Statistical evaluation of image quality measures. J Elec Imaging 11(2), 206–223 (2003)
34. C Staelin, D Greig, M Fischer, R Maurer, Neural network image scaling using spatial errors. Technical Report (HP Laboratories, Israel) (2003) | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8080338835716248, "perplexity": 1006.6152644773808}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802767247.82/warc/CC-MAIN-20141217075247-00099-ip-10-231-17-201.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/relative-roughness-of-pipes.234447/ | # Relative Roughness of pipes
1. May 11, 2008
### xaan
Relative Roughness of pipes!!!
1. The problem statement, all variables and given/known data
Here is the Data:
| Reservoir Number | Water Level [m AHD] | Entry Coeff. | Pipeline Length [m] | Pipeline Diameter [m] | Pipeline Roughness [mm] |
|---|---|---|---|---|---|
| 0 | 20.665 | 0.814 | 432.453 | 1.222 | 0.089 |
| 1 | 17.787 | 0.544 | 111.972 | 1.361 | 0.147 |
| 2 | 11.166 | 0.583 | 201.258 | 1.171 | 0.076 |
2. Relevant equations
I need to solve this three-reservoir problem, but I'm stuck on finding the Reynolds number and friction factor from the relative roughness. Relative roughness is just (pipeline roughness / pipe diameter). On the Moody diagram, the highest relative roughness is 0.05 and I can't see anything higher than that, but the values I get are all higher than 0.05 for relative roughness.
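For what it is worth, a quick numerical check (assuming the roughness column is in mm and the diameter column in m, and taking the kinematic viscosity of water as roughly 1.0e-6 m²/s) shows that the relative roughnesses are well below 0.05 once the units are made consistent:

```python
nu = 1.0e-6   # kinematic viscosity of water [m^2/s] (assumed, ~20 deg C)

pipes = {     # reservoir: (diameter [m], roughness [mm]) -- values from the table above
    0: (1.222, 0.089),
    1: (1.361, 0.147),
    2: (1.171, 0.076),
}

for n, (D, eps_mm) in pipes.items():
    rel_rough = (eps_mm / 1000.0) / D       # convert mm -> m before dividing
    print(n, round(rel_rough, 6))           # ~7e-5 to ~1.1e-4, well inside the Moody chart

# Once a trial velocity V [m/s] is assumed for a pipe, Re = V * D / nu,
# and the friction factor follows from the Moody chart (or Colebrook's equation).
```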
3. The attempt at a solution
Is there a solution to this?
2. May 16, 2008
### coomast
Hello Xaan, your relative roughnesses are smaller than 0.05. Look at the unit of relative roughness and the ones you are given in the table. That should clarify one of your problems. The one for calculating the Reynolds number is something else. Do you know the formula for this? Can you give some more information on what this system looks like? It is a bit unclear for me to understand.
3. May 16, 2008
### xaan
Yes I realised that relative roughnesses have to be smaller than 0.05. I had to go back and double check it with where I got the info from, and they told me the units they used were wrong. Instead of mm, they put m for pipe diameter. lol so I solved it and I got all the discharges and their directions:
Q1 + Q2 = Q3
*cheerz!* | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.824468195438385, "perplexity": 1377.6386688253278}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608877.60/warc/CC-MAIN-20170527075209-20170527095209-00250.warc.gz"} |
https://biodesign.seas.harvard.edu/publications/varying-negative-work-assistance-ankle-soft-exosuit-during-loaded-walking | # Varying negative work assistance at the ankle with a soft exosuit during loaded walking
### Citation:
P. Malcolm, et al., “Varying negative work assistance at the ankle with a soft exosuit during loaded walking,” Journal of NeuroEngineering and Rehabilitation, vol. 14, no. 1, pp. 62, 2017.
PDF 1.79 MB
June 26
### Abstract:
Background
Only very recently, studies have shown that it is possible to reduce the metabolic rate of unloaded and loaded walking using robotic ankle exoskeletons. Some studies obtained this result by means of high positive work assistance while others combined negative and positive work assistance. There is no consensus about the isolated contribution of negative work assistance. Therefore, the aim of the present study is to examine the effect of varying negative work assistance at the ankle joint while maintaining a fixed level of positive work assistance with a multi-articular soft exosuit.
Methods
We tested eight participants during walking at 1.5 m s−1 with a 23-kg backpack. Participants wore a version of the exosuit that assisted plantarflexion via Bowden cables tethered to an off-board actuation platform. In four active conditions we provided different rates of exosuit bilateral ankle negative work assistance ranging from 0.015 to 0.037 W kg−1 and a fixed rate of positive work assistance of 0.19 W kg−1.
Results
All active conditions significantly reduced metabolic rate by 11 to 15% compared to a reference condition, where the participants wore the exosuit but no assistance was provided. We found no significant effect of negative work assistance. However, there was a trend (p = .08) toward greater reduction in metabolic rate with increasing negative work assistance, which could be explained by observed reductions in biological ankle and hip joint power and moment.
Conclusions
The non-significant trend of increasing negative work assistance with increasing reductions in metabolic rate motivates the value in further studies on the relative effects of negative and positive work assistance. There may be benefit in varying negative work over a greater range or in isolation from positive work assistance.
Publisher's Version
Last updated on 07/10/2017 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8437466621398926, "perplexity": 3353.7206181362576}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000353.82/warc/CC-MAIN-20190626134339-20190626160339-00283.warc.gz"} |
http://www.maths.ox.ac.uk/node/9563 | # Mathematics of Phase Transitions From pde' s to many particle systems and back?
2 March 2012
16:30
Stephan Luckhaus
Abstract
What is a phase transition? The first thing that comes to mind is boiling and freezing of water. The material clearly changes its behaviour without any chemical reaction. One way to arrive at a mathematical model is to associate different material behaviour, i.e., constitutive laws, to different phases. This is a continuum physics viewpoint, and when a law for the switching between phases is specified, we arrive at PDE problems. The oldest paper on such a problem, by Clapeyron and Lamé, is nearly 200 years old; it is basically on what has later been called the Stefan problem for the heat equation. The law for switching is given e.g. by the melting temperature. This can be taken to be a phenomenological law or thermodynamically justified as an equilibrium condition. The theory does not explain delayed switching (undercooling) and it does not give insight into structural differences between the phases.

To some extent the first can be explained with the help of a free energy associated with the interface between different phases. This was proposed by Gibbs, is relevant on small space scales, and leads to mean curvature equations for the interface – the so-called Gibbs–Thomson condition. The equations do not by themselves lead to a unique evolution. Indeed, to close the resulting PDEs with a reasonable switching or nucleation law is an open problem. Based on atomistic concepts, making use of surface energy in a purely phenomenological way, Becker and Döring developed a model for nucleation as a kinetic theory for size distributions of nuclei. The internal structure of each phase is still not considered in this ansatz.

An easier problem concerns solid-solid phase transitions. The theory is furthest developed in the context of equilibrium statistical mechanics on lattices, starting with the Ising model for ferromagnets. In this context phases correspond to (extremal) equilibrium Gibbs measures in infinite volume. Interfacial free energy appears as a finite volume correction to free energy. The drawback is that the theory is still basically equilibrium and isothermal. There is no satisfactory theory of metastable states and of local kinetic energy in this framework.
https://brilliant.org/problems/dot-expansion-3/ | # Dot Expansion
Geometry Level 3
There are two identical dots, I and II. Both dots lie on the same horizontal line, called C, at a distance of 240 cm from each other. Dot I expands itself, creating an infinite line, called A, which makes 90 degrees with C. Dot II does the same, and the line created is called A'. Dot I also expands itself to the right, creating another line, called B, that makes a 45-degree angle with C. At the same time, dot II starts to expand itself to the left, creating a line, called B', that also makes a 45-degree angle with C. Dot I extends the line B by 1.2 cm each second, and dot II does the same with the line B' (their speed along the lines B and B' is $$v = 1.2 \text{ cm/s}$$). Let $$\alpha$$ be the time, in seconds, that it takes for the two dots to meet along the lines B and B'. Also, let $$\beta$$ be the distance, in centimeters, that dot II travels along the line B' until it meets dot I on the line B. Consider $$\epsilon = \alpha + \beta$$. What is the value of $$\lfloor\epsilon\rfloor$$?
Notes and Assumptions:
Use a protractor on a notebook page to create the lines B and B'. Try to do it just as in the image. Do not use trigonometric measures for the given angles (really). Also, consider that the initial positions, vertical and horizontal, of both dots are 0 centimeters.
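For a rough cross-check only (this ignores the problem's instruction to avoid trigonometry, so it may not match the intended protractor-based construction), a straightforward symmetric reading gives:

```python
import math

d = 240.0   # separation of dots I and II along C, in cm
v = 1.2     # expansion speed along B and B', in cm/s

# By symmetry the two 45-degree rays meet above the midpoint of C,
# so each ray spans 120 cm horizontally and 120 cm vertically.
length = math.hypot(d / 2, d / 2)   # distance travelled along each ray, in cm
alpha = length / v                  # seconds until the expanding tips meet
beta = length                       # distance dot II covers along B'
epsilon = alpha + beta
print(math.floor(epsilon))          # 311 under this reading
```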
https://labs.tib.eu/arxiv/?author=N.%20Vignaroli | • ### LHCSki 2016 - A First Discussion of 13 TeV Results(1607.01212)
July 8, 2016 hep-ph, hep-ex
These are the proceedings of the LHCSki 2016 workshop "A First Discussion of 13 TeV Results" that has been held at the Obergurgl Universitätszentrum, Tirol, Austria, April 10 - 15, 2016. In this workshop the consequences of the most recent results from the LHC have been discussed, with a focus also on the interplay with dark matter physics, flavor physics, and precision measurements. Contributions from the workshop speakers have been compiled into this document.
• ### Physics at a 100 TeV pp collider: beyond the Standard Model phenomena(1606.00947)
June 3, 2016 hep-ph, hep-ex
This report summarises the physics opportunities in the search and study of physics beyond the Standard Model at a 100 TeV pp collider.
• ### Snowmass 2013 Top quark working group report(1311.2028)
Nov. 8, 2013 hep-ph, hep-ex
This report summarizes the work of the Energy Frontier Top Quark working group of the 2013 Community Summer Study (Snowmass). | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8629099130630493, "perplexity": 3607.720919799384}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141193221.49/warc/CC-MAIN-20201127131802-20201127161802-00018.warc.gz"} |
http://mathoverflow.net/questions/61577/the-pic0-of-an-abelian-variety | # The $Pic^0$ of an abelian variety
Given an abelian variety $A$ defined over an algebraically closed field of characteristic $0$, Mumford defines $Pic^0(A) = \{\, L \in Pic(A) \mid T^*_x L = L \ \text{for all}\ x \in A \,\}$, where $T_x$ is translation by $x$.
I wonder if this coincides with the usual definition: $Pic ^ 0 ( A )$ is the connected component of identity in $Pic (A)$?
-
If I'm not mistaken, you've copied down Mumford's definition incorrectly: it should be the set of all line bundles $L$ such that $T_x^* L \cong L$ for all $x \in A$.
Once you make this correction: yes, this turns out to be the connected component of the identity in $\operatorname{Pic}(A)$. If you read further on in the book, you'll probably find this out. If not, try for instance Milne's notes on abelian varieties.
-
Yes, you're right, it was just the way I wrote it – Flávio Apr 13 '11 at 18:38
You mean the connected component of identity in $Pic (A)$, no? – Flávio Apr 13 '11 at 18:47
@Flavio: yes.... – Pete L. Clark Apr 13 '11 at 21:09
Although probably a bit late, I'd like to point out that you can find a beautiful exposition of the theory of the Picard scheme in the survey article by S. L. Kleiman with the same title, which is part 5 of the volume "Fundamental Algebraic Geometry" edited by Fantechi et al. and pubished by the AMS.
In particular, your question is answered in detail in $\S 9.5$ ("The connected component of the identity").
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8418118953704834, "perplexity": 168.1075265472665}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398459333.27/warc/CC-MAIN-20151124205419-00353-ip-10-71-132-137.ec2.internal.warc.gz"} |
https://forum.bebac.at/forum_entry.php?id=21872&order=time | ## Terry’s homebrew [RSABE / ABEL]
Hi mittyri,
» here ya go
Oh pleeeze, not that one!
Of course, I have it but forgot that it deals with the partial replicate. I didn’t look at it for years cause you need a magnifying glass to read the tables (where the LaTeX screwed up making the formulas unusable).
BTW, it was never published in a peer-reviewed journal… IMHO, proceedings are just one little step above “personal communication”.
Dif-tor heh smusma 🖖
Helmut Schütz
The quality of responses received is directly proportional to the quality of the question asked. 🚮
http://burkeinstitute.caltech.edu/events/86347 | Search
# Sampling-complexity phase diagrams
Tuesday, July 30, 2019
3:00 PM - 4:00 PM
Location: Annenberg 213
Abhinav Deshpande, Graduate Student, University of Maryland
Abstract: In this talk, I argue that the question of whether a physical system can be simulated on a classical computer is important not just from a practical perspective but also a fundamental one. We consider the complexity of approximate sampling from states arising due to time evolution under a Hamiltonian or from equilibrium states of quantum many-body Hamiltonians. I will comment on extensions of these results to other physical systems. I will further sketch out what insight the obtained "complexity phase diagrams" may shed on the underlying Hamiltonians in question, illustrating that the field of quantum computational supremacy has applications in theoretical physics.
Series: Institute for Quantum Information (IQI) Weekly Seminar Series | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9026645421981812, "perplexity": 1031.604394704596}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668416.11/warc/CC-MAIN-20191114104329-20191114132329-00534.warc.gz"} |
http://mathhelpforum.com/trigonometry/125637-pythagorean-identities-print.html | # Pythagorean identities
• Jan 26th 2010, 05:12 PM
purplec16
Pythagorean identities
Use Pythagorean identities to write the expression as an integer.
tan^2 4β - sec^2 4β
• Jan 26th 2010, 05:58 PM
skeeter
Quote:
Originally Posted by purplec16
Use Pythagorean identities to write the expression as an integer.
tan^2 4β - sec^2 4β
hint ...
$1 + \tan^2(4\beta) = \sec^2(4\beta)$
• Jan 26th 2010, 06:16 PM
purplec16
lol...omg i'm still lost...
• Jan 26th 2010, 06:25 PM
purplec16
$1+\tan^2(4\beta)-\sec^2(4\beta)$
$\tan^2(4\beta)-\sec^2(4\beta)=-1$
is this the next step, i dont understand how you get it to equal to an integer...i.e. get rid of the beta
• Jan 26th 2010, 06:27 PM
pickslides
Skeeter has given you the answer. You don't need to get rid of beta.
$1 + \tan^2(4\beta) = \sec^2(4\beta)$
$1 + \tan^2(4\beta) - \sec^2(4\beta)=0$
$\tan^2(4\beta) - \sec^2(4\beta)=-1$
• Jan 26th 2010, 06:30 PM
purplec16
Oh ok...wow i knew how to do it then...
so what if it was something like
$4 tan^2 (\beta)-4 sec^2 (\beta)$
what would happen in that case?
• Jan 26th 2010, 06:33 PM
pickslides
Quote:
Originally Posted by purplec16
Use Pythagorean identities to write the expression as an integer.
$\tan^2(4\beta) - \sec^2(4\beta)=-1$
$-1$ is an integer so you are done.
For the next question
$4 tan^2 (\beta)-4 sec^2 (\beta)$
$4( tan^2 (\beta)- sec^2 (\beta))$
$4( -1)$
• Jan 26th 2010, 06:40 PM
purplec16
Sorry to bother you but would u be able to assist me in solving something like this:
$\frac{cot^2\alpha-4}{cot^2\alpha-cot\alpha-6}$
• Jan 26th 2010, 06:46 PM
pickslides
What are you trying to do? Just simplify?
$\frac{\cot^2\alpha-4}{\cot^2\alpha-\cot\alpha-6}$
make $x =\cot\alpha$
$\frac{x^2-4}{x^2-x-6}$
$\frac{(x-2)(x+2)}{(x-3)(x+2)}$
$\frac{x-2}{x-3}$
$\frac{\cot\alpha - 2}{\cot\alpha - 3}$
• Jan 26th 2010, 06:47 PM
purplec16
yes, simplify the expression
• Jan 26th 2010, 06:51 PM
pickslides
Quote:
Originally Posted by purplec16
yes, simplify the expression
Is done in the previous post.
Can also say
$\frac{\cot\alpha - 2}{\cot\alpha -3} = 1+\frac{1}{\cot\alpha -3}$
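(Not part of the original thread: a quick symbolic check of the simplification above, assuming SymPy is available.)

```python
from sympy import symbols, cot, cancel

a = symbols('alpha')
lhs = (cot(a)**2 - 4) / (cot(a)**2 - cot(a) - 6)
rhs = (cot(a) - 2) / (cot(a) - 3)
print(cancel(lhs - rhs))   # prints 0, so the two expressions agree
```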
• Jan 26th 2010, 07:16 PM
purplec16
Quick Question for this expression:
$5sin^2(\theta/4)+5cos^2(\theta/4)$
how do u get rid of the $(\theta/4)$ to make it equal to 5? i understand that it will be equal to 5
is it that i dont have to worry about the theta and jus simplify it equal to five
i.e. $5(sin^2(\theta/4)+cos^2(\theta/4))$
$5(1)$
• Jan 26th 2010, 07:29 PM
pickslides
$\sin^2\left(\frac{\theta}{4}\right)+\cos^2\left(\frac{\theta}{4}\right) = 1$
Now factor out the 5 and follow what I did in post #8.
• Jan 26th 2010, 07:31 PM
purplec16
Ok, thank you so much, I did that
http://stylescoop.net/sum-of/sum-of-squared-errors-example.html
# Sum Of Squared Errors Example
## Contents
So, the SSE for stage 1 is 6. If all cases within a cluster are identical, the SSE is equal to 0. One-way ANOVA calculations: although computer programs that do ANOVA calculations are now common, for reference purposes this page describes how to calculate the various entries by hand. Carl Friedrich Gauss, who introduced the use of mean squared error, was aware of its arbitrariness and was in agreement with objections to it on these grounds.[1]
Used in Ward's Method of clustering in the first stage of clustering only the first 2 cells clustered together would increase SSEtotal. ISBN0-471-17082-8. For example, if you have a model with three factors, X1, X2, and X3, the sequential sums of squares for X2 shows how much of the remaining variation X2 explains, given Sometimes, the factor is a treatment, and therefore the row heading is instead labeled as Treatment.
## Sum Of Squared Errors Example
Cargando... It can be used as a measure of variation within a cluster. Unlike the corrected sum of squares, the uncorrected sum of squares includes error. Cargando...
This is why equation 3 has to be used. A small RSS indicates a tight fit of the model to the data. Please help improve this article by adding citations to reliable sources. Sum Squared Error Matlab A missing value (e.g.
Estimator The MSE of an estimator θ ^ {\displaystyle {\hat {\theta }}} with respect to an unknown parameter θ {\displaystyle \theta } is defined as MSE ( θ ^ ) It is the unique portion of SS Regression explained by a factor, given all other factors in the model, regardless of the order they were entered into the model. MSE is a risk function, corresponding to the expected value of the squared error loss or quadratic loss. In the learning study, the factor is the learning method. (2) DF means "the degrees of freedom in the source." (3) SS means "the sum of squares due to the source."
By using this site, you agree to the Terms of Use and Privacy Policy. How To Find Sse In Statistics The larger this ratio is, the more the treatments affect the outcome. Note that, although the MSE (as defined in the present article) is not an unbiased estimator of the error variance, it is consistent, given the consistency of the predictor. The 'error' from each point to this center is then determined and added together (equation 1).
## Sum Of Squared Errors Excel
The best I could do is this: when a new cluster is formed, say between clusters i & j, the new distance between this cluster and another cluster (k) can be … Further information: Sample variance. The usual estimator for the variance is the corrected sample variance: $S_{n-1}^{2} = \frac{1}{n-1}\sum_{i=1}^{n}\left(X_i - \bar{X}\right)^{2}$. The sum of the squared errors, SSE, is defined as follows:
where one series is the actual observations and the other is the estimated or forecasted time series. Example 1:
However, one can use other estimators for $\sigma^{2}$ which are proportional to $S_{n-1}^{2}$, and an appropriate choice can always give a lower mean squared error. This will determine the distance of each of cell i's variables (v) from each of the mean vector's variables (xvx) and add it to the same for cell j. If we define $S_{a}^{2} = \frac{n-1}{a}\,S_{n-1}^{2} = \frac{1}{a}\sum_{i=1}^{n}\left(X_i - \bar{X}\right)^{2}$ … Therefore, we'll calculate the P-value, as it appears in the column labeled P, by comparing the F-statistic to an F-distribution with m−1 numerator degrees of freedom and n−m denominator degrees of freedom.
$D_{ij}$ = distance between cell i and cell j; $x_{vi}$ = value of variable v for cell i; etc. This again has to be added, giving a total SSE3 of 1.287305. The calculations appear in the following table. Now there are these clusters at stage 4 (the rest are single cells and don't contribute to the SSE): 1. (2 & 19) from stage 1; SSE = 0.278797 2. (8 & 17) …
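To make the within-cluster SSE idea above concrete (a minimal sketch of my own; the data points are made up and not taken from the page's table):

```python
import numpy as np

def cluster_sse(points):
    """Sum of squared distances of each point to the cluster centroid."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    return float(((pts - centroid) ** 2).sum())

# Two small made-up clusters; identical points give SSE = 0.
print(cluster_sse([[1.0, 2.0], [1.2, 1.8], [0.9, 2.1]]))   # > 0
print(cluster_sse([[5.0, 5.0], [5.0, 5.0]]))               # 0.0
```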
The sequential and adjusted sums of squares will be the same for all terms if the design matrix is orthogonal. The sum of squares represents a measure of variation or deviation from the mean.
This is just for the first stage, because all other SSEs are going to be 0 and the SSE at stage 1 = equation 7.
There are, however, some scenarios where mean squared error can serve as a good approximation to a loss function occurring naturally in an application.[6] Like variance, mean squared error has the disadvantage of weighting outliers heavily. This can also be rearranged to be written as seen in J.H. Ward's method.
These numbers are the quantities that are assembled in the ANOVA table that was shown previously. You square the result in each row, and the sum of these squared values is 1.34. The total $$SS$$ = $$SS(Total)$$ = sum of squares of all observations $$-\ CM$$: $$SS(Total) = \sum_{i=1}^3 \sum_{j=1}^5 y_{ij}^2 - CM.$$ This table lists the results (in hundreds of hours). This also is a known, computed quantity, and it varies by sample and by out-of-sample test space. Y is the forecasted time series data (a one-dimensional array of cells). Cell 3 combines with cells 8 & 17 (which were already joined at stage 3). Equation 5 can't be used in this case because that would be like treating the cluster with cells 8 & 17 in it as a single point with no error (SSE). Step 1: compute $$CM$$, the correction for the mean: $$CM = \frac{\left( \sum_{i=1}^3 \sum_{j=1}^5 y_{ij} \right)^2}{N_{total}} = \frac{(\mbox{Total of all observations})^2}{N_{total}} = \frac{(108.1)^2}{15} = 779.041.$$ To compute the SSE for this example, the first step is to find the mean for each column.
However, instead of determining the distance between 2 cells (i & j), it is between cell i (or j) and the vector mean of cells i & j. Matrix expression for the OLS residual sum of squares: the general regression model has n observations and k explanators, the first of which is a constant unit vector whose coefficient is the regression intercept.
http://www.physicsforums.com/showthread.php?p=3896664 | # Question about MOND and gravity
by mesa
Tags: gravity, mond
P: 6,863
Quote by Jonathan Scott Where did you get that from?
I was misremembering something that you seem to have remembered correctly.....
The really weird thing about MOND is that it actually works for a huge range of different galaxies using the same a0 value, and correctly predicted the results for Low Surface Brightness (LSB) galaxies before any measurements had been made on them.
When people saw that this was like *wow* there might be something here. It's this particular observation that gave MOND quite a bit of credibility for a time.
However, it doesn't work at larger scales (such as galaxy clusters and interacting galaxies) nor at smaller scales (globular clusters within galaxies) without further tweaking.
Yup. The trouble is that the more "tweaking" you have to do to get things to work, the less strong the theory is. Both dark matter and modified gravity require tweaking to get the right fit with observations, but at this point dark matter seems to require a lot less tweaking than modified gravity, but this is one of those things that could change quickly.
One other thing about arguments toward elegance is that different people can weight things differently. If someone looks at the data that modified gravity requires less tweaking than dark matter, it can be hard to argue otherwise because these are somewhat subjective.
P: 6,863 The other problem is that Krupa seems to misunderstand the applicability of LCDM. The idea behind LCDM is that the big bang produced large scale clumps and that these clumps influences where galaxies form. How galaxies actually form is outside of the theory, so LCDM really says nothing about things at small scales. No one has been able to reproduce the cosmological observations with only modified gravity (lots of people have tried). Once you assume that some dark matter is necessary, then it becomes easier to assume (unless you have some reason otherwise) that it's all dark matter.
PF Patron
P: 1,090
Quote by mesa I'm not sure I am getting this, how do the atoms in the star affect the overall velocity of the star? Or are you saying this just a way of looking at the effect of gravity on the scale of the very large vs small and that it seems silly to have different rules for both systems?
MOND has a different gravitational acceleration rule for cases where the gravitational acceleration is of the order of a0 or weaker. If this rule were just treated as additional to Newtonian gravity, it appears that corrections due to MOND would already have been necessary to match solar system experiments (although it's not completely conclusive, because MOND accelerations don't add up in the same way as Newtonian gravity). For this reason, MOND assumes an interpolation function which means that the acceleration of an object in a very weak field obeys the MOND rule but in a stronger field it obeys Newtonian rules (or GR where that level of accuracy is necessary).
A star on the edge of a galaxy is treated by MOND as being very weakly accelerated as a whole by the galaxy, so the MOND rule applies. However, if you consider the component atoms of the star, they are all within the gravitational field of the star itself, so the overall gravitational acceleration on those atoms would be expected to be much greater than a0, which means they would obey Newtonian gravitation and be "immune" to MOND. It is difficult to see how the atoms of a star can accelerate in one way but the star as a whole accelerate in a different way.
Similarly, a system of masses such as a binary star or a star and planets at the edge of the galaxy is also treated by MOND as a single object in the low-acceleration regime, even though the components are clearly subject to higher accelerations from each other.
Note however that the MOND force is quite tricky to work with anyway, in particular because it is not linear in the source mass.
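For reference, the standard MOND phenomenology behind this discussion (stated here as background, not attributed to any poster) is

$$\mu\!\left(\frac{a}{a_0}\right)a = a_N,\qquad \mu(x)\to 1 \ (x\gg 1)\ \Rightarrow\ a\approx a_N,\qquad \mu(x)\to x\ (x\ll 1)\ \Rightarrow\ a\approx\sqrt{a_N\,a_0},$$

where $a_N$ is the Newtonian acceleration and $a_0\approx 1.2\times 10^{-10}\ \mathrm{m/s^2}$. For a point mass the deep-MOND limit gives $v^4 = GMa_0$, i.e. an asymptotically flat rotation curve.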
P: 336
Quote by twofish-quant No one has been able to reproduce the cosmological observations with only modified gravity (lots of people have tried). Once you assume that some dark matter is necessary, then it becomes easier to assume (unless you have some reason otherwise) that it's all dark matter.
How is the formula set up for the dark matter halo? Can we work out an example? Perhaps predict the velocity of a star using basic Newtonian Mechanics vs DMT vs MOND
Quote by twofish-quant Galaxy rotation curves are only one "weird thing", and frankly, if galaxy rotation curves were the only "weird thing" that we see, then MOND would make more sense to me than dark matter.
So there is a great deal more to the predictions of these systems than star velocity alone. Is star velocity the predominating area or are there other aspects of equal or greater importance? I would like to put some chalk to a board on star velocities unless you feel there is a better place to start, can we work out an example?
Quote by Jonathan Scott A star on the edge of a galaxy is treated by MOND as being very weakly accelerated as a whole by the galaxy, so the MOND rule applies. However, if you consider the component atoms of the star, they are all within the gravitational field of the star itself, so the overall gravitational acceleration on those atoms would be expected to be much greater than a0, which means they would obey Newtonian gravitation and be "immune" to MOND. It is difficult to see how the atoms of a star can accelerate in one way but the star as a whole accelerate in a different way.
Okay, I understand what you were saying now.
P: 336 I was looking at the MOND equation; it looks like the adjustment is 'hidden' at smaller scales, allowing Newtonian mechanics to work on our scale, since the function brings its value to 1 there, while adjusting the value of 'a' as the effects of gravity become weaker when the distance 'r' is increased. I cannot figure out how the function μ(a/a0) actually works, except that a0 becomes more significant as 'r' increases, since gravity then reduces 'a' to a value smaller than a0 ≈ 1.2×10^-10 m/s^2, a very tiny value. So the equation has terms in it I am unfamiliar with: ∇ - ??? ρ - this is a function for the spread of mass in a galaxy, is it not? If so, how does it work? I don't see the symbol to the right for gravitational potential as written in the function. Any thoughts?
PF Patron
P: 1,090
Quote by mesa I was looking at the MOND equation, it looks like the adjustment is 'hidden' at smaller scales allowing newtonian mechanics to work on our scale as the function brings it's value to 1 while adjusting to increased values for 'a' as the effects of gravity would become weaker as distance 'r' is increased. I can not figure out how the function μ(a/a0) actually works except that a0 becomes more significant with respect to an increase in the value for 'r' as it reduces 'a' to a lesser value than a0 = 1^-9m/s^2, a very tiny value. So the equation has terms in it I am unfamiliar with: ∇ - ??? ρ - this is a function for the spread of mass in a galaxy is it not? If so how does it work? I don't see the symbol to the right for gravitational potential as written in the function Any thoughts?
The symbol ∇ or "nabla" is used as the mathematical operator called "Del" which is the vector differential operator, used as a short notation for the differential operators grad, div and curl (depending on whether it is applied to a scalar, or to a vector via dot product, or to a vector via cross product). If you don't know about those, it's probably beyond the scope of this forum to explain. Technically, it is equivalent to a sort of vector with the following partial derivative operator components:
$$\left ( \frac{\partial}{\partial x}, \frac{\partial}{\partial y}, \frac{\partial}{\partial z} \right )$$
For example, if you apply the gradient operator to the gravitational potential, you get the vector field describing the gravitational acceleration.
The symbol ρ in the MOND article is simply the local density of mass (in mass per unit volume).
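For instance (a standard textbook example, added here for concreteness), for the Newtonian point-mass potential

$$\Phi(\mathbf r) = -\frac{GM}{|\mathbf r|},\qquad \mathbf g = -\nabla\Phi = -\frac{GM}{|\mathbf r|^{2}}\,\hat{\mathbf r},$$

which is just the familiar inverse-square acceleration pointing towards the mass.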
P: 336
Quote by Jonathan Scott If you don't know about those, it's probably beyond the scope of this forum to explain.
Lets give it a shot. Where would you like to start?
P: 1 Sorry if I'm in the wrong section to ask this question. I'm trying to find out when astronomers discovered that the solar system oscillates through the galactic plane. I just can't imagine the Mayans having the ability to determine that it actually occurs on a 26,000 (or whatever) year period. Thanks for the consideration 123 mark
P: 6,863
Quote by mesa Lets give it a shot. Where would you like to start?
Let me just sketch out the problem.
You have a function X. You have a set of rules to convert that function into another function Y.
The problem here is that learning what those rules are is a one-semester course in calculus. Look at 18.02 on MIT OCW.
P: 336
Quote by twofish-quant Let me just sketch out the problem. You have a function X. You have a set of rules to convert that function into another function Y. The problem is here is that learning what those rules are is a one semester course in calculus. Look at 18.02 on MIT OCW.
So let's start with the basics and go from there: how do they calculate ρ? Do they just take the average for the mass of the entire galaxy, or is it based on what is inside the area swept by a star?
With DMT I was told by an astrophysicist that the dark matter is put into the model and essentially is an adjustment to the mass to have the stars match Newton's gravitation formula. Is that right? Is GM/r^2 modified in any way?
PF Patron
P: 1,090
Quote by mesa So lets start with the basics and go from there, how do they calculate for ρ? Do they just take the average for the mass of the entire galaxy or is it based on what is inside the area swept by a star?
The local density of matter in the form of stars or gas is estimated from the luminosity of that part of the galaxy in various parts of the spectrum.
The Newtonian acceleration is then calculated in the usual way by integration (summing the effect of all the mass). With spherical symmetry, Newtonian gravity would simplify to being equivalent to having all the mass inside a given orbit concentrated at the center, but for galaxies the shape is more complicated. The MOND acceleration can then be calculated in terms of the Newtonian acceleration.
With DMT I was told by an astrophysicist that the dark matter is put into the model and essentially is an adjustment to the mass to have the stars match newtons gravitation formula. Is that right? Is Gm/r^2 modified in any way?
Yes, Dark Matter simply adds additional invisible source mass obeying the standard Newtonian gravitational formula (as an approximation to GR).
A very weird feature of the MOND rule is that for a wide range of galaxies it correctly predicts the velocity distribution based only on the distribution of visible matter. If dark matter is the real explanation, this suggests that the distribution of the dark matter in galaxies must somehow be strongly linked with the distribution of the visible matter in such a way as to reproduce the MOND result, but so far there is no theoretical explanation for this.
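A minimal numerical sketch of the point being made here (my own illustration, not from the thread: it uses a point-mass "galaxy" and the simple interpolation function μ(x) = x/(1+x), so only the qualitative flattening of the MOND curve should be trusted):

```python
import numpy as np

G = 6.674e-11     # m^3 kg^-1 s^-2
M = 1.0e41        # kg, rough stellar mass of a galaxy (assumed value)
a0 = 1.2e-10      # m/s^2, MOND acceleration scale

r = np.logspace(19, 21, 5)            # radii in metres (~0.3 to 30 kpc)
a_newton = G * M / r**2

# Solve mu(a/a0) * a = a_N with mu(x) = x/(1+x)  =>  a^2/(a + a0) = a_N
a_mond = 0.5 * (a_newton + np.sqrt(a_newton**2 + 4.0 * a_newton * a0))

v_newton = np.sqrt(a_newton * r) / 1e3   # circular speed in km/s
v_mond = np.sqrt(a_mond * r) / 1e3

for ri, vn, vm in zip(r, v_newton, v_mond):
    print(f"r = {ri:.2e} m   Newton: {vn:6.1f} km/s   MOND: {vm:6.1f} km/s")
```

The Newtonian speed falls off as $r^{-1/2}$ while the MOND speed levels out, which is the flat-rotation-curve behaviour discussed above.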
P: 336
Sorry it took a few days to get back to you, had finals last couple days.
Quote by Jonathan Scott The Newtonian acceleration is then calculated in the usual way by integration (summing the effect of all the mass). With spherical symmetry, Newtonian gravity would simplify to being equivalent to having all the mass inside a given orbit concentrated at the center, but for galaxies the shape is more complicated. The MOND acceleration can then be calculated in terms of the Newtonian acceleration.
I'm a little surprised that would work. How is the integration set up? Is it a function of the gravity of each sun and its effect on the next, by putting together an artificial layout based on average distances apart, or is it simply the sum of all the masses thrown into the center for the swept area of the galaxy by a particular star?
I was told by an astrophysicist that it has only been recently that papers were published changing the model from a spherical density to a more disc like shape, I found this surprising as well.
Quote by Jonathan Scott A very weird feature of the MOND rule is that for a wide range of galaxies it correctly predicts the velocity distribution based only on the distribution of visible matter. If dark matter is the real explanation, this suggests that the distribution of the dark matter in galaxies must somehow be strongly linked with the distribution of the visible matter in such a way as to reproduce the MOND result, but so far there is no theoretical explanation for this.
That's very interesting, so MOND at least is able to show a possible correlation between matter and dark matter (that is if DMT is correct).
PF Patron
P: 1,090
Quote by mesa I'm a little surprised that would work, how is the integretion setup? Is it a function of the gravity of each sun and it's affect on the next by putting together an artificail layout based on average distances apart or is it simply the sum of all the masses thrown into the center for the swept area of the galaxy by a particular star? I was told by an astrophysicist that it has only been recently that papers were published changing the model from a spherical density to a more disc like shape, I found this surprising as well.
I don't know the practical details. For a simple case, I guess one could assume one could treat the mass as a series of rings of varying density surrounding a spherical nucleus. In a more complex cases one could use numerical methods to sum the effects of mass density over a modelled shape of the galaxy consistent with the observations. There's certainly no need to model the individual stars, because from sufficient distance the gravitational effect is essentially the same as that of a continuous medium with an appropriate average density.
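A toy version of the "sum the effect of all the mass" step described above (my own crude sketch: each ring is discretised into point masses, which is nothing like a production galaxy model):

```python
import numpy as np

G = 6.674e-11  # m^3 kg^-1 s^-2

def radial_acceleration(r_obs, ring_radii, ring_masses, n_points=720):
    """Newtonian radial acceleration at radius r_obs in the disc plane,
    summing point masses spread evenly around each ring.
    Negative values mean a net pull towards the centre."""
    phi = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    obs = np.array([r_obs, 0.0])
    a = 0.0
    for R, M in zip(ring_radii, ring_masses):
        pts = np.stack([R * np.cos(phi), R * np.sin(phi)], axis=1)
        d = pts - obs                          # vectors from observer to mass elements
        dist3 = (d[:, 0]**2 + d[:, 1]**2) ** 1.5
        a += G * (M / n_points) * np.sum(d[:, 0] / dist3)
    return a

# 20 equal-mass rings (made-up numbers); pick r_obs away from the ring radii
rings = np.linspace(1.0e19, 5.0e20, 20)
masses = np.full(20, 5.0e39)
print(radial_acceleration(3.0e20, rings, masses))
```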
P: 336
Quote by Jonathan Scott I don't know the practical details. For a simple case, I guess one could assume one could treat the mass as a series of rings of varying density surrounding a spherical nucleus. In a more complex cases one could use numerical methods to sum the effects of mass density over a modelled shape of the galaxy consistent with the observations. There's certainly no need to model the individual stars, because from sufficient distance the gravitational effect is essentially the same as that of a continuous medium with an appropriate average density.
Where do you think would be a good place to start to find the actual formulas used for these calculations? I looked online and came up with very little. Are there members on the board that would be helpful?
I am going to quiz the professors at school again and see if I can get a more complete answer. I was told by one it was basically the same as you stated originally; the mass is essentially summed and put into the center and then calculated.
That seems overly simplified and frankly I don't see how that could calculate anything properly.
https://web.cs.ucla.edu/~yjchoi/publications/KhosraviNeurIPS19/ | # On Tractable Computation of Expected Predictions
Pasha Khosravi, YooJung Choi, Yitao Liang, Antonio Vergari, and Guy Van den Broeck.
In Advances in Neural Information Processing Systems 32 (NeurIPS), 2019
### TL;DR
We achieve tractable probabilistic reasoning of prediction models by identifying generative-discriminative pairs in the form of circuit representations that enable tractable computation of expectations, as well as higher-order moments.
### Abstract
Computing expected predictions of discriminative models is a fundamental task in machine learning that appears in many interesting applications such as fairness, handling missing values, and data analysis. Unfortunately, computing expectations of a discriminative model with respect to a probability distribution defined by an arbitrary generative model has been proven to be hard in general. In fact, the task is intractable even for simple models such as logistic regression and a naive Bayes distribution. In this paper, we identify a pair of generative and discriminative models that enables tractable computation of expectations, as well as moments of any order, of the latter with respect to the former in case of regression. Specifically, we consider expressive probabilistic circuits with certain structural constraints that support tractable probabilistic inference. Moreover, we exploit the tractable computation of high-order moments to derive an algorithm to approximate the expectations for classification scenarios in which exact computations are intractable. Our framework to compute expected predictions allows for handling of missing data during prediction time in a principled and accurate way and enables reasoning about the behavior of discriminative models. We empirically show our algorithm to consistently outperform standard imputation techniques on a variety of datasets. Finally, we illustrate how our framework can be used for exploratory data analysis.
### Citation
@inproceedings{KhosraviNeurIPS19,
author = {Khosravi, Pasha and Choi, YooJung and Liang, Yitao and Vergari, Antonio and Van den Broeck, Guy},
title = {On Tractable Computation of Expected Predictions},
booktitle = {Advances in Neural Information Processing Systems 32 (NeurIPS)},
month = {dec},
year = {2019},
}
Preliminary version titled “Tractable Computation of the Moments of Predictive Models” appeared in the ICML 2019 Workshop on Tractable Probabilistic Modeling (TPM). | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8010830283164978, "perplexity": 1195.88687968348}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570651.49/warc/CC-MAIN-20220807150925-20220807180925-00521.warc.gz"} |
http://mathoverflow.net/questions/133581/quasicrystals-and-the-riemann-hypothesis?answertab=active | # Quasicrystals and the Riemann Hypothesis
Let $0 < k_1 < k_2 < k_3 < \cdots$ be all the zeros of the Riemann zeta function on the critical line:
$$\zeta(\frac{1}{2} + i k_j) = 0$$
Let $f$ be the Fourier transform of the sum of Dirac deltas supported at these points. In other words:
$$f(x) = \sum_{j = 1}^\infty e^{ik_j x}$$
This is not a function, but it's a tempered distribution. Matt McIrvin made a graph of it (the plot itself is not reproduced here).
McIrvin seems to get an infinite linear combination of Dirac deltas supported at logarithms of powers of prime numbers. But Freeman Dyson seems to claim that this is only known to be true assuming the Riemann Hypothesis. So, my question is:
1) What do people know about $f$ without assuming the Riemann Hypothesis?
2) What do people know about $f$ assuming the Riemann Hypothesis?
3) Can we prove that $f$ is a linear combination of Dirac deltas supported at prime powers, if we assume the Riemann Hypothesis?
4) Is there some property of $f$, like being a linear combination of Dirac deltas supported at prime powers, that's known to imply the Riemann Hypothesis?
Part of why I'm confused is that J. Main, V. A. Mandelshtam, G. Wunner and H. S. Taylor have a paper containing some equations (equations 8 and 9) that seem to imply
$$\sum_{j = 1}^\infty \delta(k - k_j) = - \frac{1}{\pi} \sum_{p} \sum_{m = 1}^\infty \frac{\ln p}{p^{m/2}} e^{i k \ln{p^m}}$$
where the sum over $p$ is a sum over primes. This seems to be just what we need to show $f$ is an infinite linear combination of Dirac deltas supported at logarithms of prime powers!
I may be getting some signs and factors of $2 \pi$ wrong, but I don't think that's the main problem: I think the problem is whether a formula resembling the above one is known to be true, or whether it's only known given the Riemann Hypothesis.
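A small numerical sketch of the kind of picture McIrvin computed (my own illustration; it uses mpmath's `zetazero` for the ordinates $k_j$ and a Gaussian damping factor just to make the truncated sum readable):

```python
import numpy as np
from mpmath import zetazero

# ordinates k_j of the first N nontrivial zeros (taken on the critical line)
N = 100
ks = np.array([float(zetazero(j).imag) for j in range(1, N + 1)])

x = np.linspace(0.1, 3.5, 2000)            # covers log 2, log 3, log 4, log 5, ...
damp = np.exp(-(ks / ks[-1])**2)           # smooth cutoff for the truncated sum
f = np.array([np.sum(damp * np.cos(ks * xi)) for xi in x])

for n in (2, 3, 4, 5, 7, 8, 9):
    print(f"log {n} = {np.log(n):.3f}")    # expect sharp features of f near these x
# plotting f against x (e.g. with matplotlib) shows the spikes at log prime powers
```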
Here's what Freeman Dyson said about this in 2009. Unfortunately he omits some details I'm dying to know:
The proof of the Riemann Hypothesis is a worthy goal, and it is not for us to ask whether we can reach it. I will give you some hints describing how it might be achieved. Here I will be giving voice to the mathematician that I was fifty years ago before I became a physicist. I will talk first about the Riemann Hypothesis and then about quasi-crystals.
There were until recently two supreme unsolved problems in the world of pure mathematics, the proof of Fermat's Last Theorem and the proof of the Riemann Hypothesis. Twelve years ago, my Princeton colleague Andrew Wiles polished off Fermat's Last Theorem, and only the Riemann Hypothesis remains. Wiles' proof of the Fermat Theorem was not just a technical stunt. It required the discovery and exploration of a new field of mathematical ideas, far wider and more consequential than the Fermat Theorem itself. It is likely that any proof of the Riemann Hypothesis will likewise lead to a deeper understanding of many diverse areas of mathematics and perhaps of physics too. Riemann's zeta-function, and other zeta-functions similar to it, appear ubiquitously in number theory, in the theory of dynamical systems, in geometry, in function theory, and in physics. The zeta-function stands at a junction where paths lead in many directions. A proof of the hypothesis will illuminate all the connections. Like every serious student of pure mathematics, when I was young I had dreams of proving the Riemann Hypothesis. I had some vague ideas that I thought might lead to a proof. In recent years, after the discovery of quasi-crystals, my ideas became a little less vague. I offer them here for the consideration of any young mathematician who has ambitions to win a Fields Medal.
Quasi-crystals can exist in spaces of one, two, or three dimensions. From the point of view of physics, the three-dimensional quasi-crystals are the most interesting, since they inhabit our three-dimensional world and can be studied experimentally. From the point of view of a mathematician, one-dimensional quasi-crystals are much more interesting than two-dimensional or three-dimensional quasi-crystals because they exist in far greater variety. The mathematical definition of a quasi-crystal is as follows. A quasi-crystal is a distribution of discrete point masses whose Fourier transform is a distribution of discrete point frequencies. Or to say it more briefly, a quasi-crystal is a pure point distribution that has a pure point spectrum. This definition includes as a special case the ordinary crystals, which are periodic distributions with periodic spectra.
Excluding the ordinary crystals, quasi-crystals in three dimensions come in very limited variety, all of them associated with the icosahedral group. The two-dimensional quasicrystals are more numerous, roughly one distinct type associated with each regular polygon in a plane. The two-dimensional quasi-crystal with pentagonal symmetry is the famous Penrose tiling of the plane. Finally, the one-dimensional quasi-crystals have a far richer structure since they are not tied to any rotational symmetries. So far as I know, no complete enumeration of one-dimensional quasi-crystals exists. It is known that a unique quasi-crystal exists corresponding to every Pisot–Vijayaraghavan number or PV number. A PV number is a real algebraic integer, a root of a polynomial equation with integer coefficients, such that all the other roots have absolute value less than one${}^1$. The set of all PV numbers is infinite and has a remarkable topological structure. The set of all one-dimensional quasi-crystals has a structure at least as rich as the set of all PV numbers and probably much richer. We do not know for sure, but it is likely that a huge universe of one-dimensional quasi-crystals not associated with PV numbers is waiting to be discovered.
Here comes the connection of the one-dimensional quasi-crystals with the Riemann hypothesis. If the Riemann hypothesis is true, then the zeros of the zeta-function form a one-dimensional quasi-crystal according to the definition. They constitute a distribution of point masses on a straight line, and their Fourier transform is likewise a distribution of point masses, one at each of the logarithms of ordinary prime numbers and prime-power numbers. My friend Andrew Odlyzko has published a beautiful computer calculation of the Fourier transform of the zeta-function zeros${}^2$. The calculation shows precisely the expected structure of the Fourier transform, with a sharp discontinuity at every logarithm of a prime or prime-power number and nowhere else.
My suggestion is the following. Let us pretend that we do not know that the Riemann Hypothesis is true. Let us tackle the problem from the other end. Let us try to obtain a complete enumeration and classification of one-dimensional quasicrystals. That is to say, we enumerate and classify all point distributions that have a discrete point spectrum...We shall then find the well-known quasi-crystals associated with PV numbers, and also a whole universe of other quasicrystals, known and unknown. Among the multitude of other quasi-crystals we search for one corresponding to the Riemann zeta-function and one corresponding to each of the other zeta-functions that resemble the Riemann zeta-function. Suppose that we find one of the quasi-crystals in our enumeration with properties that identify it with the zeros of the Riemann zeta-function. Then we have proved the Riemann Hypothesis and we can wait for the telephone call announcing the award of the Fields Medal.
These are of course idle dreams. The problem of classifying one-dimensional quasi-crystals is horrendously difficult, probably at least as difficult as the problems that Andrew Wiles took seven years to explore. But if we take a Baconian point of view, the history of mathematics is a history of horrendously difficult problems being solved by young people too ignorant to know that they were impossible. The classification of quasi-crystals is a worthy goal, and might even turn out to be achievable.
1 M.J. Bertin et al., Pisot and Salem Numbers, Birkhäuser, Boston, 1992.
[2] A.M. Odlyzko, Primes, quantum chaos and computers, in Number Theory: Proceedings of a Symposium, 4 May 1989, Washington, DC, USA (National Research Council, 1990), pp. 35–46.
-
Tangentially, "Nick S" left some critical comments on Dyson's 1D quasicrystal idea at an earlier MO question, "Approaches to Riemann hypothesis using methods outside number theory," comments that I cannot evaluate myself: mathoverflow.net/questions/34699/… – Joseph O'Rourke Jun 13 '13 at 0:50
What does it mean to print the "graph" of a tempered distribution? Does it mean to print the graph of a sufficiently close $\mathcal{C}^\infty$ approximant? – Qfwfq Dec 4 '13 at 15:46
* approximation – Qfwfq Dec 4 '13 at 15:47
Alain Connes (here is the link: http://arxiv.org/abs/math/9811068) proved a similar statement in his Selecta paper in 1999, but his method shows the above presentation is true only for the zeros on the critical line (actually he showed more: it is a sum of powers $x^{it}(\log x)^j$, where $i$ is the square root of $-1$, $j$ is a discrete index, and $t$ is the imaginary part of a zero on the critical line); it still remains to show there is no other zero off the critical line.
This letter of Sarnak to Bombieri might be of some help http://web.math.princeton.edu/sarnak/BombieriLtr2002.pdf
-
I heard often about this, but so far I have yet to find an actual proof that RH implies that the zeroes of the RZF form a 1-dimensional quasicrystal. Actually there is no formal definition of quasicrystal yet, the new definition (1992) of a crystal is intentionally vague and we will probably not have a better formal definition until we understand them better.
Also note that the zeroes of RZF have arbitrarily large gaps, which is not something the quasicrystal community accepts. And there is a reason for that: The diffraction of an infinite quasicrystal is formally defined as the "limit" of diffraction of larger samples of the material. In general this limit might not exist, which happens when large samples of the solid have completely different diffraction, but it can easily be proven that all those finite diffractions are inside some compact spaces, thus we always have cluster points and we can get a limit by simply going to subsequences/subnets (depending if we work in $\mathbb{R}^d$ or arbitrary lcag). This means that some solids have different diffractions depending on how we average.
While this looks clumsy, in reality is not. It only happens if different larger and larger samples of our solid have completely different properties, which means that different large pieces of the solid are basically different materials.... Imagine finding a new material and drilling some samples to diffract. Unknown to us, the rock we find has two halves, one material $A$ and one material $B$. What will our diffraction be? Well the answer is: depends where we drill the sample. We could get the diffraction of $A$, of $B$ or many different diffractions of mixtures of $A$ and $B$. Those are exactly the cluster points we get.
We typically ask that the solid is nice (typically a unique ergodicity assumption), which leads to many nice properties and unique diffraction, but most of the time this is not a needed assumption.
So back to zeroes of RZF. The fact that the zeroes of RZF have arbitrarily large holes imply that $0$ is a diffraction measure of this system (i.e. we could drill by chance all our samples just from those holes). The system definitely has multiple diffractions, which makes the problem harder: deciding if this is a quasicrystal depends on which one we pick.
Since we are in $\mathbb{R}$, there are some choices which are more natural, so lets ignore this first issue... And lets simplify things, lets forget the measure approach and work directly with distributions.
If the tempered distribution $\sum_{j = 1}^\infty e^{ik_j x}$ is a translation bounded discrete measure, then I think Hof/Lagarias proved that the set has pure point diffraction, which is understood to imply quasicrystal. This seems to be the case here.
But there is also to consider the following Theorem by Cordoba([1]), which I only know from a paper of Lagarias (I didn't read the original yet):
Theorem: Let $S \subset \mathbb{R}^d$ be an uniformly discrete set. If the Fourier transform of $\delta_S= \sum_{x \in S} \delta_x$ is a tempered distribution which is a translation bounded discrete measure, then $S$ is a finite union of translates of full rank lattices in $\mathbb{R}^d$.
This shows that the zeroes of RZF cannot fit this description. The only possibility left is that the Fourier transform of $\delta_S= \sum_{x \in S} \delta_x$ is a distribution which is a sum of Dirac deltas, but is not a translation bounded measure. But then, as far as I know, there is nothing done in the quasicrystal community in this direction, and I don't know how this fits within our theory (Hof might have done something in this direction though).
Last but not least, classifying all one dimensional quasicrystals seems like a problem which is impossible to solve, at least in a reasonable way. The issue comes from the fact that at any point in $\chi \in \hat{\mathbb{R^d} }$ we lose in the diffraction the phase information, which is a complex number on the unit circle. There are $c^c$ potential values the phase could take, not all of the potential values would work, but it is easy to show that in general, for a given diffraction, there are at least $c$ values which can work and I suspect that there are more than $c$ good values.
Because of this, each diffraction corresponds to infinitely many solids, some which are related and some which are not. If somehow we manage to get the classification of 1-D quasicrystals, we still have to face two huge issues :
• The classification will be an uncountable list of classes, each with uncountable elements. And given the many cases of models with the same diffraction, the classification is highly likely to be not nice..
• If we have a list of all quasicrystals, how do we check if the zeroes of RZF are or are not in the list, without knowing already the zeroes?
To me, this approach seems similar to trying to classify the zeroes of all analytic functions, and then check which ones correspond to the RZF....
([1]) A. Cordoba, Dirac combs, Letters in Mathematical Physics 17 (1989), 191-196
-
Just to clarify something, couldn't fit this nicely anywhere. Cordoba's result guarantees that for generic point sets $S$ the Fourier transform of $\delta_S$ is not a measure. Intuitively the diffraction measure of $S$ is just $\left| \hat{\delta_S} \right|^2$. This doesn't seem to make too much sense in distribution theory, but can be nicely set up using the theory of Fourier transforms of measures. Any positive definite translation bounded measure is Fourier transformable and its FT is a measure, and the FT in this case coincides with the FT of tempered distributions.... – Nick S Jun 20 '13 at 16:00
The main issue about the phase problem is the fact that $S$ and $\hat{\delta_S}$ uniquely determine each other. So the problem reduces to this: Given $\left| \hat{\delta_S} \right|^2$ determine $\hat{\delta_S}$.. – Nick S Jun 20 '13 at 16:03
Thanks for all that! The replies to my question made me realize that if you take the Fourier transform of the sum of Dirac deltas supported at the nontrivial Riemann zeta zeros (let's assume they're all on the line $\mathrm{Re}(z) = 1/2$), the result is not a discrete measure: there's also a continuous part! So Dyson's remarks seem to require some correction, and I don't know how to correct them. – John Baez Jun 20 '13 at 19:15
The correct formula for the the Fourier transform of the sum of Dirac deltas supported at the nontrivial Riemann zeta zeros is called the Guinand-Weil explicit formula, and I wrote it down near the bottom of this: golem.ph.utexas.edu/category/2013/06/… – John Baez Jun 20 '13 at 19:17
The classical result that comes closest to this is Landau's (1911) formula: for any $x > 1$,
$\frac{1}{T} \sum_{0 < \Im \rho \leq T} x^\rho = -\frac{1}{2\pi} \Lambda(x) + O(\frac{\log T}{T})$
where the sum is taken over all zeta zeros in the critical strip $0 < \Re \rho < 1$ with imaginary part between $0$ and $T$; $\Lambda$ is the von Mangoldt function, which is supported on the prime powers.
Of course that sum is over all zeros and doesn't itself imply anything about the sum over the zeros on the critical line.
-
The identity of distributions resembles the Weil-Guinand explicit formula, see here: http://en.wikipedia.org/wiki/Explicit_formula
To my knowledge, you either have to include the trivial zeros of zeta or the distributions coming from $\Gamma$-factors and poles, so the formula as stated seems wrong from that perspective.
On the history: Guinand was able to prove this formula assuming RH, Weil removed the RH condition. Such a formula actually holds true for every function with a certain vertical growth restriction, an Euler product and a functional equation, e.g., the Selberg zeta function or every automorphic L function.
The issue of whether or not RH holds might be equivalent to the question of which kinds of test functions the formula can be applied to; e.g., if the zeros do not lie on a line, the Fourier transform must extend to a neighborhood in a unique fashion (holomorphic).
What we know about your distribution $f$ is essentially (Fourier) dual to what we know about the explicit formula, which is in some sense (modulo Fourier uncertainty) equivalent to the knowledge about the zeros.
There is a certain positivity condition known to be equivalent to RH, due to Weil(?), which can be shown easily via the explicit formula. I can't recall a good reference for it, but there are plenty of survey articles about RH (either an article of Conrey or Sarnak).
Edit: See the review of Weil's article for precise versions of my vague statements about equivalences for RH
Quote: "The author further proves that a necessary and sufficient condition for the validity of the Riemann hypothesis for L(s) is that the right-hand side of (I) is ≥0 for all functions F(x) of a certain class. He also gives a necessary and sufficient condition for the validity of the Riemann hypothesis for all functions L(s) belonging to k and this in the form that a certain distribution on the group of idèle-classes should be of positive type."
-
See also my exposition here for a derivation: mathoverflow.net/questions/62816/… – plusepsilon.de Jun 13 '13 at 13:39
Thanks, that's very helpful. – John Baez Jun 14 '13 at 21:41
A very clear text on the explicit formula, Weil distributions and Weil's positivity criterion for RH is "THE EXPLICIT FORMULA IN SIMPLE TERMS" by Jean-Francois Burnol at arxiv.org/abs/math/9810169v2; he also treats Dirichlet L-functions in a subsequent paper arxiv.org/abs/math/9902080v1. – Yauhen Radyna Jul 1 '13 at 19:28
Just for your reference, the equation: $$\sum_{j = 1}^\infty \delta(k - k_j) = - \frac{i}{\pi} \sum_{p} \sum_{m = 1}^\infty \frac{\ln p}{p^{m/2}} e^{i k \ln{p^m}}$$ seems to be a re-statement, in a distributional setting, of Riemann's explicit formula. It can be proven unconditionally, with appropriate test functions on each side. See for example Lemma 1 in http://www.math.sjsu.edu/~goldston/article38.pdf
EDIT: If you don't assume the Riemann Hypothesis, then of course some of the $k_j$ will have to be complex numbers.
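Not from the original answer: a rough numerical sketch of the right-hand side. Truncating the prime-power sum at $p^m \le X$ and looking at (minus) its cosine part as a function of $k$ should produce bumps near the ordinates of the first nontrivial zeros; the cutoff $X$, the $k$ grid, and the crude peak picking below are arbitrary illustrative choices of mine.

```python
import numpy as np

def primes_up_to(n):
    sieve = np.ones(n + 1, dtype=bool)
    sieve[:2] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = False
    return np.flatnonzero(sieve)

X = 10_000                              # keep prime powers p^m <= X
k = np.linspace(5, 35, 6000)            # scan this range of k
F = np.zeros_like(k)
for p in primes_up_to(X):
    m = 1
    while p ** m <= X:
        F -= (np.log(p) / p ** (m / 2)) * np.cos(k * np.log(p ** m))
        m += 1

# Pick out the most prominent local maxima of the truncated sum.
loc_max = [i for i in range(1, len(k) - 1) if F[i] > F[i - 1] and F[i] > F[i + 1]]
top = sorted(loc_max, key=lambda i: F[i], reverse=True)[:5]
print(np.round(np.sort(k[top]), 2))
# The largest bumps should land roughly near 14.13, 21.02, 25.01, ...,
# the ordinates of the first nontrivial zeros.
```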
I think that helps a lot. If the Riemann Hypothesis holds, all the $k_j$ are real, so the Fourier transform of the right-hand side will be a linear combination of the Dirac deltas. If the Riemann Hypothesis is false, some of the $k_j$ will be complex, so I see no reason to expect the Fourier transform of the right-hand side to be a linear combination of Dirac deltas. However, it would take me some work to prove it's not. Has someone done that? – John Baez Jun 13 '13 at 1:48
... I need to think more about it but certainly most versions of the explicit formula that I came accross assumes that at least one of $h$ or $\widehat{h}$ is analytic in some strip, since complex analysis is used in the proof. The only exception is arxiv.org/abs/1203.5328 (section 2) where the explicit formula is proven by a rather convoluted method, but it might look a bit more like what you would like to do... – Broadeducation Jun 13 '13 at 4:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8863361477851868, "perplexity": 358.16701126021627}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00498-ip-10-147-4-33.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/how-to-find-prove-this-two-variable-limit.608972/ | # How to find/prove this two variable limit
1. May 25, 2012
### ozone
It has been a while since I have done limits, and I want to make sure I am doing this correctly.
I have the problem listed below
$(x-y)/(x+y)$ as $(x,y) \to (0,0)$
In my mind I can gather that it will asymptote as you approach from either (-1,1) or (1,-1)
I also know that it will asymptote in opposite directions, and as far as I know this is what defines a function to have "no limit".
I am just wondering what I need to state algebraically in order to "prove" this limit.
2. May 25, 2012
### DonAntonio
Choose $\,y=x\,$ and the limit is zero, but now choose $\,y=2x\,$ and the limit then is $\,\displaystyle{-\frac{1}{3}}$
Ergo: the limit doesn't exist.
DonAntonio
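Not part of the original thread, just a quick check of those two path limits with SymPy (the variable names below are my own):

```python
from sympy import symbols, limit

x = symbols('x', positive=True)
f = lambda X, Y: (X - Y) / (X + Y)

print(limit(f(x, x), x, 0))       # along y = x   ->  0
print(limit(f(x, 2 * x), x, 0))   # along y = 2x  ->  -1/3
```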
3. May 26, 2012
### haruspex
It doesn't mean anything to let two variables tend to a limit simultaneously unless you specify the relationship between them. This is different from there being no limit. It has many possible limits and the question is incompletely specified.
4. May 26, 2012
### micromass
Of course you can have limits of functions in two variables without having a relationship between them. This is basic multivariable calculus.
5. May 26, 2012
### haruspex
No. You can define the limit of the function as one variable tends to some value, and the limit of that as the other variable tends to some value. That's taking the limits to be in a particular order. If you swap the order you might or might not get the same result. Each is perfectly well defined in itself, but if you don't specify the order, and don't specify a relationship between them for approaching the limits simultaneously, it is simply not defined.
6. May 26, 2012
### micromass
Did you take multivariable calculus??
For $f:\mathbb{R}^2\rightarrow \mathbb{R}$, we define $\lim_{(x,y)\rightarrow (a,b)} f(x,y)=L$ if
$$\forall \varepsilon >0: \exists \delta > 0: \forall (x,y)\in \mathbb{R}^2:~0<\|(x,y)-(a,b)\|_2<\delta~\Rightarrow~|f(x,y)-L|<\varepsilon$$
where we of course define $\|(x,y)\|_2=\sqrt{x^2+y^2}$.
7. May 26, 2012
### DonAntonio
Either you didn't study several variables calculus or you already forgot: if the limit $\,\displaystyle{\lim_{(x,y)\to (x_0,y_0)}f(x,y)}\,$ exists then by definition
it must exist and be the same no matter how the variables approach the point $\,(x_0,y_0)\,$, and this is why
my first post proves the limit wanted in the OP doesn't exist.
DonAntonio
Last edited: May 26, 2012
8. May 26, 2012
### haruspex
I must have forgotten - sorry. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9533734917640686, "perplexity": 574.2406008218518}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823445.39/warc/CC-MAIN-20181210212544-20181210234044-00197.warc.gz"} |
http://math.stackexchange.com/questions/390887/is-the-kernel-of-a-homomorphism-from-a-boolean-ring-to-mathbbz-2-always-a-m | # Is the kernel of a homomorphism from a Boolean ring to $\mathbb{Z}_2$ always a maximal ideal?
Let $(B, +, \cdot)$ be a ring (not necessarily unital!) with the property that every $x \in B$ satisfies $x \cdot x = x$.
How does one show that the kernel of any non-zero homomorphism of rings $h:B\rightarrow \mathbb{Z}_2$ is a maximal ideal of $(B, +, \cdot)$?
I'm looking for an elementary proof, not requiring anything more than the definitions of a ring, of an ideal, and of $(B, +, \cdot)$, and, if necessary, the easily shown facts that $x + x = 0$ and $x\cdot y = y\cdot x,\,\forall\, x,y \in B$.
-
Since $h:B \to \mathbb{Z}_2$ is a non-zero homomorphism, $\ker h$ is a proper ideal of $B.$ If $xy\in \ker h$ then $h(xy)=h(x)h(y)=0$ in $\mathbb{Z}_2$ so either $x$ or $y$ is in $\ker h,$ hence $\ker h$ is a prime ideal of $B.$
Now suppose $I$ is an ideal of $B$ that strictly contains $\ker h.$ Pick $x\in I\setminus \ker h.$ For any $y\in B$ we have $x(y-xy)=0\in \ker h$ and since $\ker h$ is a prime ideal we have $y-xy \in \ker h.$ Thus $y = (y-xy) +xy \in I$ and we conclude that $\ker h$ is maximal.
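Not part of the answer, but here is a tiny brute-force sanity check (my own sketch) on the smallest interesting example: the Boolean ring of subsets of $\{0,1\}$ under symmetric difference and intersection, with $h(S) = 1$ iff $0 \in S$.

```python
from itertools import combinations

X = {0, 1}
B = [frozenset(s) for r in range(len(X) + 1) for s in combinations(X, r)]

add = lambda a, b: a ^ b              # symmetric difference plays the role of "+"
mul = lambda a, b: a & b              # intersection plays the role of "*"
h = lambda a: int(0 in a)             # candidate homomorphism B -> Z_2

# h is a non-zero ring homomorphism.
assert any(h(a) for a in B)
assert all(h(add(a, b)) == (h(a) + h(b)) % 2 for a in B for b in B)
assert all(h(mul(a, b)) == h(a) * h(b) for a in B for b in B)

kernel = {a for a in B if h(a) == 0}

def is_ideal(I):
    I = set(I)
    return (frozenset() in I
            and all(add(a, b) in I for a in I for b in I)      # additive subgroup
            and all(mul(r, a) in I for r in B for a in I))     # absorbs products

# Every ideal strictly containing the kernel is all of B, i.e. the kernel is maximal.
ideals = [set(s) for r in range(len(B) + 1) for s in combinations(B, r) if is_ideal(s)]
assert all(I == set(B) for I in ideals if kernel < I)
print("ker h =", kernel, "is maximal")
```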
-
Nice proof. Thanks. My only quibble with it (and it's tiny) is that its reference to "prime ideals" was unnecessary, since it already contains the proof of $xy \in \ker h \Rightarrow x \in \ker h$ or $y \in \ker h$, which is all the rest of the proof needs. – kjo May 14 '13 at 10:15
@kjo The reference was a) to sum up both the implication $xy\in \ker h \implies x\in \ker h$ or $y\in \ker h$ and $\ker h \neq B$ (both properties being crucial) and b) to make it easier to see the natural generalization of the next paragraph (that any prime ideal of a Boolean rng is maximal). – Ragib Zaman May 14 '13 at 10:20
Thanks, I had not noticed (b). – kjo May 14 '13 at 11:20
Let $f:B\to\mathbb{Z}_2$ be a non-zero homomorphism, and let $I=\ker(f)$. Suppose $I\subsetneq J\subseteq B$. There exists an element $x\in J\setminus I$ (because $I\subsetneq J$), and we must have $f(x)=1$ (because there are only two elements of $\mathbb{Z}_2$ to go to), so that $$f(x-1_B)=f(x)-f(1_B)=1-1=0,$$ and hence $x-1_B\in \ker(f)=I\subset J$. Thus $$1_B=x-(x-1_B)\in J,$$ and therefore $J=B$. Thus $I$ is maximal.
The standard proof:
Any non-zero homomorphism of rings to $\mathbb{Z}_2$ is surjective; apply the first isomorphism theorem. Then use that an ideal $I$ of a ring $R$ is maximal $\iff$ $R/I$ is a field.
(None of this depended on any properties of $B$.)
-
Aw @Zev, you beat me to it. +1 – Stahl May 14 '13 at 0:22
Thanks, I'm looking for more elementary proofs... (I've clarified my question) – kjo May 14 '13 at 0:23
Thanks again, but I'm embarrassed to admit that there are several steps in your proof I can't follow. For one, I don't know how you know that $B$ has a unit, $1_B$. Second, I don't see why $x - (x - 1_B)$ has to belong to $J$, even if I assume that $J$ is an ideal. – kjo May 14 '13 at 1:27
@kjo I'm not sure about the unity part of your question, although it's usually a safe assumption: most rings we want to consider have unity. As for why $x - (x - 1_B)\in J$: $J$ is by assumption an ideal, and we assumed $x\in J$. However, the calculation Zev did in the proof showed that $x - 1_B\in\ker f$. Since $\ker f\subseteq J$, $x - 1_B\in J$ as well. Now, as ideals are abelian groups, we must have $x - (x - 1_B)\in J$, since $x, x - 1_B\in J$. – Stahl May 14 '13 at 1:42
@Stahl: thanks for the explanation. I did not realize that some definitions of a ring assume that the ring has a unit element. I have modified my question to explicitly deny this assumption. – kjo May 14 '13 at 9:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9658295512199402, "perplexity": 213.8281509566973}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997865523.12/warc/CC-MAIN-20140722025745-00156-ip-10-33-131-23.ec2.internal.warc.gz"} |
https://johncarlosbaez.wordpress.com/2021/08/05/information-geometry-part-18/ | ## Information Geometry (Part 18)
Last time I sketched how two related forms of geometry, symplectic and contact geometry, show up in thermodynamics. Today I want to explain how they show up in probability theory.
For some reason I haven’t seen much discussion of this! But people should have looked into this. After all, statistical mechanics explains thermodynamics in terms of probability theory, so if some mathematical structure shows up in thermodynamics it should appear in statistical mechanics… and thus ultimately in probability theory.
I just figured out how this works for symplectic and contact geometry.
Suppose a system has $n$ possible states. We’ll call these microstates, following the tradition in statistical mechanics. If you don’t know what ‘microstate’ means, don’t worry about it! But the rough idea is that if you have a macroscopic system like a rock, the precise details of what its atoms are doing are described by a microstate, and many different microstates could be indistinguishable unless you look very carefully.
We’ll call the microstates $1, 2, \dots, n.$ So, if you don’t want to think about physics, when I say microstate I’ll just mean an integer from 1 to n.
Next, a probability distribution $q$ assigns a real number $q_i$ to each microstate, and these numbers must sum to 1 and be nonnegative. So, we have $q \in \mathbb{R}^n,$ though not every vector in $\mathbb{R}^n$ is a probability distribution.
I’m sure you’re wondering why I’m using $q$ rather than $p$ to stand for a probability distribution. Am I just trying to confuse you?
No: I’m trying to set up an analogy to physics!
Last time I introduced symplectic geometry using classical mechanics. The most important example of a symplectic manifold is the cotangent bundle $T^\ast Q$ of a manifold $Q.$ A point of $T^\ast Q$ is a pair $(q,p)$ consisting of a point $q \in Q$ and a cotangent vector $p \in T^\ast_q Q.$ In classical mechanics the point $q$ describes the position of some physical system, while $p$ describes its momentum.
So, I’m going to set up an analogy like this:
|     | Classical Mechanics | Probability Theory |
| --- | --- | --- |
| $q$ | position | probability distribution |
| $p$ | momentum | ??? |
But what is to momentum as probability is to position?
A big clue is the appearance of symplectic geometry in thermodynamics, which I also outlined last time. We can use this to get some intuition about the analogue of momentum in probability theory.
In thermodynamics, a system has a manifold $Q$ of states. (These are not the ‘microstates’ I mentioned before: we’ll see the relation later.) There is a function
$f \colon Q \to \mathbb{R}$
describing the entropy of the system as a function of its state. There is a law of thermodynamics saying that
$p = (df)_q$
This equation picks out a submanifold of $T^\ast Q,$ namely
$\Lambda = \{(q,p) \in T^\ast Q : \; p = (df)_q \}$
Moreover this submanifold is Lagrangian: the symplectic structure $\omega$ vanishes when restricted to it:
$\displaystyle{ \omega |_\Lambda = 0 }$
This is very beautiful, but it goes by so fast you might almost miss it! So let’s clutter it up a bit with coordinates. We often use local coordinates on $Q$ and describe a point $q \in Q$ using these coordinates, getting a point
$(q_1, \dots, q_n) \in \mathbb{R}^n$
They give rise to local coordinates $q_1, \dots, q_n, p_1, \dots, p_n$ on the cotangent bundle $T^\ast Q.$ The $q_i$ are called extensive variables, because they are typically things that you can measure only by totalling up something over the whole system, like the energy or volume of a cylinder of gas. The $p_i$ are called intensive variables, because they are typically things that you can measure locally at any point, like temperature or pressure.
In these local coordinates, the symplectic structure on $T^\ast Q$ is the 2-form given by
$\omega = dp_1 \wedge dq_1 + \cdots + dp_n \wedge dq_n$
The equation
$p = (df)_q$
serves as a law of physics that determines the intensive variables given the extensive ones when our system is in thermodynamic equilibrium. Written out using coordinates, this law says
$\displaystyle{ p_i = \frac{\partial f}{\partial q_i} }$
It looks pretty bland here, but in fact it gives formulas for the temperature and pressure of a gas, and many other useful formulas in thermodynamics.
Now we are ready to see how all this plays out in probability theory! We’ll get an analogy like this, which goes hand-in-hand with our earlier one:
|     | Thermodynamics | Probability Theory |
| --- | --- | --- |
| $q$ | extensive variables | probability distribution |
| $p$ | intensive variables | ??? |
This analogy is clearer than the last, because statistical mechanics reveals that the extensive variables in thermodynamics are really just summaries of probability distributions on microstates. Furthermore, both thermodynamics and probability theory have a concept of entropy.
So, let’s take our manifold $Q$ to consist of probability distributions on the set of microstates I was talking about before: the set $\{1, \dots, n\}.$ Actually, let’s use nowhere vanishing probability distributions:
$\displaystyle{ Q = \{ q \in \mathbb{R}^n : \; q_i > 0, \; \sum_{i=1}^n q_i = 1 \} }$
I’m requiring $q_i > 0$ to ensure $Q$ is a manifold, and also to make sure $f$ is differentiable: it ceases to be differentiable when one of the probabilities $q_i$ hits zero.
Since $Q$ is a manifold, its cotangent bundle is a symplectic manifold $T^\ast Q.$ And here’s the good news: we have a god-given entropy function
$f \colon Q \to \mathbb{R}$
namely the Shannon entropy
$\displaystyle{ f(q) = - \sum_{i = 1}^n q_i \ln q_i }$
So, everything I just described about thermodynamics works in the setting of plain old probability theory! Starting from our manifold $Q$ and the entropy function, we get all the rest, leading up to the Lagrangian submanifold
$\Lambda = \{(q,p) \in T^\ast Q : \; p = (df)_q \}$
that describes the relation between extensive and intensive variables.
For computations it helps to pick coordinates on $Q.$ Since the probabilities $q_1, \dots, q_n$ sum to 1, they aren’t independent coordinates on $Q.$ So, we can either pick all but one of them as coordinates, or learn how to deal with non-independent coordinates, which are already completely standard in projective geometry. Let’s do the former, just to keep things simple.
These coordinates on $Q$ give rise in the usual way to coordinates $q_i$ and $p_i$ on the cotangent bundle $T^\ast Q.$ These play the role of extensive and intensive variables, respectively, and it should be very interesting to impose the equation
$\displaystyle{ p_i = \frac{\partial f}{\partial q_i} }$
where $f$ is the Shannon entropy. This picks out a Lagrangian submanifold $\Lambda \subseteq T^\ast Q.$
So, the question becomes: what does this mean? If this formula gives the analogue of momentum for probability theory, what does this analogue of momentum mean?
Here’s a preliminary answer: $p_i$ says how fast entropy increases as we increase the probability $q_i$ that our system is in the ith microstate. So if we think of nature as ‘wanting’ to maximize entropy, the quantity $p_i$ says how eager it is to increase the probability $q_i.$
Indeed, you can think of $p_i$ as a bit like pressure—one of the most famous intensive quantities in thermodynamics. A gas ‘wants’ to expand, and its pressure says precisely how eager it is to expand. Similarly, a probability distribution ‘wants’ to flatten out, to maximize entropy, and $p_i$ says how eager it is to increase the probability $q_i$ in order to do this.
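Here’s a minimal numerical sketch of this relation (the specific distribution below is made up, and the derivative is taken by crude finite differences). It reproduces the closed form $p_i = \ln q_n - \ln q_i,$ which is worked out in the comments below.

```python
import numpy as np

n = 4                                            # number of microstates

def entropy(q_ind):
    """Shannon entropy as a function of the n-1 independent coordinates."""
    q = np.append(q_ind, 1.0 - q_ind.sum())      # rebuild the full distribution
    return -(q * np.log(q)).sum()

q_ind = np.array([0.1, 0.2, 0.3])                # q = (0.1, 0.2, 0.3, 0.4)
eps = 1e-6
p = np.array([(entropy(q_ind + eps * np.eye(n - 1)[i])
               - entropy(q_ind - eps * np.eye(n - 1)[i])) / (2 * eps)
              for i in range(n - 1)])

q_n = 1.0 - q_ind.sum()
print(p)                               # numerical p_i = df/dq_i
print(np.log(q_n) - np.log(q_ind))     # ln q_n - ln q_i, the same numbers
```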
But what can we do with this concept? And what does symplectic geometry do for probability theory?
I will start tackling these questions next time.
One thing I’ll show is that when we reduce thermodynamics to probability theory using the ideas of statistical mechanics, the appearance of symplectic geometry in thermodynamics follows from its appearance in probability theory.
Another thing I want to investigate is how other geometrical structures on the space of probability distributions, like the Fisher information metric, interact with the symplectic structure on its cotangent bundle. This will integrate symplectic geometry and information geometry.
I also want to bring contact geometry into the picture. It’s already easy to see from our work last time how this should go. We treat the entropy $S$ as an independent variable, and replace $T^\ast Q$ with a larger manifold $T^\ast Q \times \mathbb{R}$ having $S$ as an extra coordinate. This is a contact manifold with contact form
$\alpha = -dS + p_1 dq_1 + \cdots + p_n dq_n$
This contact manifold has a submanifold $\Sigma$ where we remember that entropy is a function of the probability distribution $q,$ and define $p$ in terms of $q$ as usual:
$\Sigma = \{(q,p,S) \in T^\ast Q \times \mathbb{R} : \; S = f(q), \; p = (df)_q \}$
And as we saw last time, $\Sigma$ is a Legendrian submanifold, meaning
$\displaystyle{ \alpha|_{\Sigma} = 0 }$
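Again a small numerical sketch (with an arbitrary made-up curve, using the $n-1$ independent coordinates): along any path inside $\Sigma$ the contact form vanishes, i.e. $dS/dt = p_1 \, dq_1/dt + \cdots + p_{n-1} \, dq_{n-1}/dt.$

```python
import numpy as np

def full_q(q_ind):
    return np.append(q_ind, 1.0 - q_ind.sum())

def entropy(q_ind):
    q = full_q(q_ind)
    return -(q * np.log(q)).sum()

def p(q_ind):                                     # p_i = dS/dq_i = ln q_n - ln q_i
    q = full_q(q_ind)
    return np.log(q[-1]) - np.log(q_ind)

# An arbitrary curve t -> q(t) staying inside the probability simplex.
path = lambda t: np.array([0.2 + 0.1 * np.sin(t), 0.3, 0.25 - 0.05 * t])

t, dt = 0.3, 1e-6
dq = (path(t + dt) - path(t - dt)) / (2 * dt)
dS = (entropy(path(t + dt)) - entropy(path(t - dt))) / (2 * dt)
print(dS, p(path(t)) @ dq)    # the two numbers agree, so alpha vanishes along the curve
```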
But again, we want to understand what these ideas from contact geometry really do for probability theory!
For all my old posts on information geometry, go here:
### 2 Responses to Information Geometry (Part 18)
1. Toby Bartels says:
It seems to me that if we want to know what to call $p_i$, then we should calculate it:
$p_i := \partial S/\partial q_i = \partial(-\sum_i q_i \ln q_i)/\partial q_i =$
$-d(q_i \ln q_i)/d q_i = -\ln q_i - 1.$
Now, $-\ln q$ is often called the surprisal; it tells you how surprised you should be if an event of probability $q$ occurs (from no surprise if the event is certain to infinite surprise if the event is impossible). For example, the entropy is the expected surprisal. And so $p_i$ is basically the surprisal of microstate $i$, only we subtract $1$ (the surprisal associated with a probability of $1/e$) for some reason.
But actually, there’s a flaw in my calculation, because I forgot that there are only $n - 1$ independent variables, so I need to add on $\partial (-q_n \ln q_n)/\partial q_i$, where $q_n = 1 - \sum_{i < n} q_i$, so that $\partial q_n/\partial q_i = -1:$
$\partial(-q_n \ln q_n)/\partial q_i = (d(-q_n \ln q_n)/d q_n) (\partial q_n/\partial q_i) =$
$(-\ln q_n - 1) (-1) = \ln q_n + 1.$
Therefore, the correct value of $p_i$ is $\ln q_n - \ln q_i$, the relative surprisal of microstate $i$ relative to microstate $n$ (the state whose probability we arbitrarily chose not to include as an independent variable). At least the mysterious $1$s cancelled.
2. Jeremy Schmitt says:
Great post and a fascinating topic! I did my thesis on symplectic integrators ( the same type that power Hamiltonian Monte Carlo), and the connection with information geometry is really intriguing. My advisor at UCSD had one related paper that attempted to connect symplectic and information geometry in a discrete setting by connecting divergence functions and a generating function.
https://www.mdpi.com/1099-4300/19/10/518
This site uses Akismet to reduce spam. Learn how your comment data is processed. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 100, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9416313767433167, "perplexity": 332.619528766693}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300997.67/warc/CC-MAIN-20220118182855-20220118212855-00089.warc.gz"} |
https://socratic.org/questions/what-is-the-exponent-rule-of-logarithms | Precalculus
Topics
# What is the exponent rule of logarithms?
Sep 16, 2016
${\log}_{a} \left({m}^{n}\right) = n {\log}_{a} \left(m\right)$
#### Explanation:
Consider the logarithmic number ${\log}_{a} \left(m\right) = x$:
${\log}_{a} \left(m\right) = x$
Using the laws of logarithms:
$\implies m = {a}^{x}$
Let's raise both sides of the equation to the $n$th power:
$\implies {m}^{n} = {\left({a}^{x}\right)}^{n}$
Using the laws of exponents:
$\implies {m}^{n} = {a}^{x n}$
Let's separate $x n$ from $a$:
$\implies {\log}_{a} \left({m}^{n}\right) = x n$
Now, we know that ${\log}_{a} \left(m\right) = x$.
Let's substitute this in for $x$:
$\implies {\log}_{a} \left({m}^{n}\right) = n {\log}_{a} \left(m\right)$
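A quick numeric check of the rule, with my own example values $a = 2$, $m = 8$, $n = 3$:

```python
import math

print(math.log(8 ** 3, 2))     # log_2(8^3)  -> 9.0
print(3 * math.log(8, 2))      # 3 log_2(8)  -> 9.0 (up to floating point)
```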
##### Impact of this question
845 views around the world | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 13, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8653278350830078, "perplexity": 2619.366038355644}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703538431.77/warc/CC-MAIN-20210123191721-20210123221721-00040.warc.gz"} |
http://cms.math.ca/cmb/msc/47L25?fromjnl=cmb&jnl=CMB
Search results
Search: MSC category 47L25 ( Operator spaces (= matricially normed spaces) [See also 46L07] )
Results 1 - 4 of 4
1. CMB 2012 (vol 57 pp. 166)
Öztop, Serap; Spronk, Nico
On Minimal and Maximal $p$-operator Space Structures
We show that for $p$-operator spaces, there are natural notions of minimal and maximal structures. These are useful for dealing with tensor products.
Keywords: $p$-operator space, min space, max space
Categories: 46L07, 47L25, 46G10
2. CMB 2011 (vol 54 pp. 654)
Forrest, Brian E.; Runde, Volker
Norm One Idempotent $cb$-Multipliers with Applications to the Fourier Algebra in the $cb$-Multiplier Norm
For a locally compact group $G$, let $A(G)$ be its Fourier algebra, let $M_{cb}A(G)$ denote the completely bounded multipliers of $A(G)$, and let $A_{\mathit{Mcb}}(G)$ stand for the closure of $A(G)$ in $M_{cb}A(G)$. We characterize the norm one idempotents in $M_{cb}A(G)$: the indicator function of a set $E \subset G$ is a norm one idempotent in $M_{cb}A(G)$ if and only if $E$ is a coset of an open subgroup of $G$. As applications, we describe the closed ideals of $A_{\mathit{Mcb}}(G)$ with an approximate identity bounded by $1$, and we characterize those $G$ for which $A_{\mathit{Mcb}}(G)$ is $1$-amenable in the sense of B. E. Johnson. (We can even slightly relax the norm bounds.)
Keywords: amenability, bounded approximate identity, $cb$-multiplier norm, Fourier algebra, norm one idempotent
Categories: 43A22, 20E05, 43A30, 46J10, 46J40, 46L07, 47L25
3. CMB 2005 (vol 48 pp. 97)
Katavolos, Aristides; Paulsen, Vern I.
On the Ranges of Bimodule Projections
We develop a symbol calculus for normal bimodule maps over a masa that is the natural analogue of the Schur product theory. Using this calculus we are easily able to give a complete description of the ranges of contractive normal bimodule idempotents that avoids the theory of J*-algebras. We prove that if $P$ is a normal bimodule idempotent and $\|P\| < 2/\sqrt{3}$ then $P$ is a contraction. We finish with some attempts at extending the symbol calculus to non-normal maps.
Categories: 46L15, 47L25
4. CMB 2003 (vol 46 pp. 632)
Runde, Volker
The Operator Amenability of Uniform Algebras We prove a quantized version of a theorem by M.~V.~She\u{\i}nberg: A uniform algebra equipped with its canonical, {\it i.e.}, minimal, operator space structure is operator amenable if and only if it is a commutative $C^\ast$-algebra. Keywords:uniform algebras, amenable Banach algebras, operator amenability, minimal, operator spaceCategories:46H20, 46H25, 46J10, 46J40, 47L25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9895210862159729, "perplexity": 1137.7032582883621}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510268660.14/warc/CC-MAIN-20140728011748-00261-ip-10-146-231-18.ec2.internal.warc.gz"} |
http://m-studying-english.hatenablog.com/entry/2018/04/26/004158
# How do we pronounce unstressed sounds? (in English)
When we study English pronunciation on our own, we typically focus on studying consonants (子音) and vowels (母音). However, a more important thing to learn seems to be how to pronounce letters which are not stressed.
Let's take a look at an example
PRONUNCIATION
Most of the readers of this blog are very familiar with this word.
Please read this out loud once. I guess you said "プロナンシェ―ション" (puronanshēshon) or something similar to this.
However, the IPA of this word is
prə-ˌnən(t)-sē-ˈā-shən.
The stress of this word is on the "ciAtion" part. Apart from the stress, let's look at the first two syllables.
\prə-ˌnən
Here, you clearly see the schwa ə sound, instead of "o", "ou" or something else.
Therefore, the vowels of the "pronun" part may sound very similar. In practice, they can sound different because the speaker's tongue and lips prepare for the coming consonant. Therefore, there are (probably) some variations among speakers' accents in how it sounds.
We should, however, keep in mind that the IPA of the "pronun" part is the same.
When it comes to words with many syllables, this happens frequently. Here are some examples:
intonation: \ ˌin-tə-ˈnā-shən
characteristics: \ ˌker-ik-tə-ˈris-tik
There are some exceptions. But you see that there are many letters which are reduced to schwa sounds.
This is all for today. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8210744857788086, "perplexity": 4208.060058246859}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221213666.61/warc/CC-MAIN-20180818114957-20180818134957-00417.warc.gz"} |
https://proofwiki.org/wiki/Axiom:Axiom_of_Swelledness | # Axiom:Axiom of Swelledness
## Axiom
Let $V$ be a basic universe.
$V$ is a swelled class.
That is, every subclass of a set which is an element of $V$ is a set in $V$.
Briefly:
Every subclass of a set is a set.
## Also see
• Results about the Axiom of Swelledness can be found here. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8823040127754211, "perplexity": 1031.6841369402287}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949644.27/warc/CC-MAIN-20230331144941-20230331174941-00306.warc.gz"} |
http://mathhelpforum.com/calculus/203244-math-word-problem-rational-functions.html | # Math Help - Math Word Problem: Rational Functions
1. ## Math Word Problem: Rational Functions
Experiments conducted by A.J. Clark suggest that the response R(x) of a frog's heart muscle to the injection of x units of acetylcholine (as a percent of the maximum possible effect of the drug) may be approximated by the rational function
R(x)=100x/b+x (x greater than or equal to 0)
where b is a positive constant that depends on the particular frog.
a. If concentration of 40 units of acetylcholine produces a response of 50% for a certain frog, find the "response function" for this frog.
I am so confused by this problem. For my answer I have 98.77.
Help would be greatly appreciated.
2. ## Re: Math Word Problem: Rational Functions
Originally Posted by AntoninaFinn
Experiments conducted by A.J. Clark suggest that the response R(x) of a frog's heart muscle to the injection of x units of acetylcholine (as a percent of the maximum possible effect of the drug) may be approximated by the rational function R(x)=100x/b+x (x greater than or equal to 0)
where b is a positive constant that depends on the particular frog.
a. If concentration of 40 units of acetylcholine produces a response of 50% for a certain frog, find the "response function" for this frog. I am so confused by this problem. For my answer I have 98.77.
b = 400.
3. ## Re: Math Word Problem: Rational Functions
I don't think that's right. Aren't they asking for a function?
And what is with all of those x's?
It said to find the "response function". But thanks anyway.
4. ## Re: Math Word Problem: Rational Functions
Originally Posted by AntoninaFinn
I don't think that's right. Aren't they asking for a function?
And what is with all of those x's?
It said to find the "response function". But thanks anyway.
R(x)=100x/b+x where b=400
5. ## Re: Math Word Problem: Rational Functions
Sorry, Max but that still isn't helping me.
How did you get the 400?
6. ## Re: Math Word Problem: Rational Functions
How does the 50% play into the whole function? They said to find the response function, so are you saying that the response function is R(x)=100x/b+x ?
And for the 40 units of acetylcholine, do I substitute into the function that they give me, such as R(40)=100(40)/b+40.
7. ## Re: Math Word Problem: Rational Functions
Since part (b) of the problem says that I have to use the model that I found in part (a) to find the response of the frog's heart muscle when 60 units of acetylcholine are administered.
8. ## Re: Math Word Problem: Rational Functions
Yes, just calculate R(60)=?%
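Not part of the thread: a short SymPy sketch (mine) that solves for b from R(40) = 50 and then evaluates R(60), under the two possible readings of the formula exactly as it was typed.

```python
from sympy import symbols, Eq, solve

x, b = symbols('x b', positive=True)

# Reading 1: R(x) = 100x/b + x   (literal left-to-right, the reading behind b = 400)
R1 = 100 * x / b + x
b1 = solve(Eq(R1.subs(x, 40), 50), b)[0]
print(b1, R1.subs({x: 60, b: b1}))     # b = 400, R(60) = 75

# Reading 2: R(x) = 100x/(b + x)  (a common textbook form of this kind of model)
R2 = 100 * x / (b + x)
b2 = solve(Eq(R2.subs(x, 40), 50), b)[0]
print(b2, R2.subs({x: 60, b: b2}))     # b = 40,  R(60) = 60
```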
? | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8520532250404358, "perplexity": 1960.9275685697546}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207927767.46/warc/CC-MAIN-20150521113207-00169-ip-10-180-206-219.ec2.internal.warc.gz"} |
https://advances.sciencemag.org/content/2/8/e1600709
Research Article | MATERIALS SCIENCE
# Visualizing weakly bound surface Fermi arcs and their correspondence to bulk Weyl fermions
Vol. 2, no. 8, e1600709
## Abstract
Fermi arcs are the surface manifestation of the topological nature of Weyl semimetals, enforced by the bulk-boundary correspondence with the bulk Weyl nodes. The surface of tantalum arsenide, similar to that of other members of the Weyl semimetal class, hosts nontopological bands that obscure the exploration of this correspondence. We use the spatial structure of the Fermi arc wave function, probed by scanning tunneling microscopy, as a spectroscopic tool to distinguish and characterize the surface Fermi arc bands. We find that, as opposed to nontopological states, the Fermi arc wave function is weakly affected by the surface potential: it spreads rather uniformly within the unit cell and penetrates deeper into the bulk. Fermi arcs reside predominantly on tantalum sites, from which the topological bulk bands are derived. Furthermore, we identify a correspondence between the Fermi arc dispersion and the energy and momentum of the bulk Weyl nodes that classify this material as topological. We obtain these results by introducing an analysis based on the role the Bloch wave function has in shaping quantum electronic interference patterns. It thus carries broader applicability to the study of other electronic systems and other physical processes.
Keywords
• Weyl semimetals
• fermi arcs
• topological materials
• scanning tunneling microscopy
## INTRODUCTION
Topological states of matter harbor strikingly unique boundary states, such as the chiral edges of the quantum Hall effect (1), the surface states of topological insulators (2, 3), and the Majorana end modes of topological superconductors. The properties of these surface states, such as gapless surface spectrum, relativistic dynamics, and evasion of localization by disorder, are determined by the topological nature of the bulk and are protected by the energy gap in the bulk’s spectrum. These states cannot be realized as stand-alone systems without the coupling to the topological bulk. Surprisingly, these states exist even on two-dimensional surfaces of three-dimensional Weyl semimetals (4–9), despite the absence of a bulk energy gap (10). The defining characteristic of these states is the Fermi arcs, which may not be realized on stand-alone two-dimensional systems. Whereas in two-dimensional systems, lines of constant energy must form closed contours in momentum space, Fermi arcs are open contours that emanate and end in states associated with bulk Dirac cones whose nodes are termed Weyl nodes. Here, we use scanning tunneling microscopy (STM) and spectroscopy to study surface states in the Weyl semimetal tantalum arsenide (TaAs). Bulk and surface band structures of TaAs have been modeled (7–9) and mapped by photoemission spectroscopy (11–13), and its unique electrodynamics (14–18) have been probed in magnetotransport (19, 20). Previous STM studies of this material have identified scattering processes among the nontopological surface bands (21) and between these surface bands and a Fermi arc band, where absence of other scattering processes involving Fermi arc states was attributed to their connectivity with topological bulk bands (22). Here, we report an exhaustive visualization of the scattering processes available among the different surface bands, including intra-arc scatterings. Processes involving Fermi arc states are identified by their distinctive attributes and real-space structure. This provides a comprehensive characterization of the unusual properties of the Fermi arc states and their unique correspondence to topological bulk bands.
## RESULTS
High-quality single crystals of TaAs (see Materials and Methods) were cold-cleaved at 80 K under ultrahigh vacuum, exposing a fresh (001) surface that was measured at 4.2 K in a commercial STM (UNISOKU). Quasiparticle interference (QPI) patterns that elastically scattered electrons embed in the local density of states were measured in differential conductance (dI/dV) mappings. We reveal four distinct attributes of the Fermi arcs by measuring different aspects of their scattering processes: (i) their relatively isotropic QPI profile, revealed by scattering off atomic vacancies (23–26); (ii) the linear energy dispersion and its relation with bulk Weyl nodes, by scattering off a crystallographic step edge; (iii) their localization on the Ta layer, by tracing the spatial origin of their QPI patterns with subatomic resolution; and (iv) the weak coupling to the surface atomic structure, as opposed to the strongly coupled trivial states, deduced from the manifestation of their wave function structure in the QPI pattern.
### Visualizing the Fermi arc contour
An atomically resolved topographic image with several vacancies is shown in Fig. 1A. The QPI patterns seen in the dI/dV map (Fig. 1B) appear around those vacancies. In TaAs, these QPI patterns are superimposed on a spatial density modulation, commensurate with the lattice structure (inset). Fourier decomposition of the QPI pattern at a fixed energy (Fig. 1C) separates surface scattering processes according to their transferred momentum, q, between incoming and scattered electronic waves. We recognize three prominent QPI patterns: ellipse-shaped patterns around Go and G±Y, half of bowtie-shaped patterns at G±X, and portions of rounded square-shaped patterns on the four corners of the central zone (dashed lines). These QPI patterns result from inter- and intraband scattering of nontopological surface states (21). To associate these QPI patterns with particular scattering processes, we plot in Fig. 1D the spin-selective scattering probability (SSP) calculated (see the Supplementary Materials) for the As-terminated Fermi surface of TaAs, based on its previously extracted dispersion (9, 13) and spin texture (Fig. 1E) (13, 27). By comparing the SSP with the central zone in the QPI map, we identify the ellipse QPI with scattering within the ellipse-shaped bands (blue arrow in Fig. 1E), and the square patterns (green in the SSP) with bowtie-to-ellipse scattering (green arrow). The detected splitting of the latter into two concentric squares originates from scatterings among the spin-split copies of the ellipse and bowtie bands and thus directly reflects the strong spin-orbit coupling in TaAs (28). Absence of double ellipse and bowtie QPI patterns manifests the scattering protection provided by these bands’ approximate helical spin textures (27, 29).
Excellent agreement between the measured QPI and SSP as well as between the measured spectrum and the calculated density of states for As-terminated surface (Fig. 1G) confirms our identification of the As surface termination. We note that intraband scattering within the bowtie band (yellow in SSP) around Go is hardly observed in the measured QPI at that energy. Its absence (addressed below) enables first detection of the Fermi arcs around Go (magnified in Fig. 1F, left). We find two leaf-like features that peak beyond the ellipse (see also the Supplementary Materials). Quantitative agreement with the calculated SSP (Fig. 1F, right) identifies these features with the scattering processes between the Fermi arcs that emanate from the W2 Weyl node (defined in Fig. 1E) and the states located at their fine-structured tail (Q1 in Fig. 1E). Their arc-like shape directly reflects the contour of the Fermi arcs.
### Fermi arc dispersion and its correspondence to the bulk Weyl nodes
The energy dispersion of the Fermi arcs is measured by electron-scattering off a crystallographic step edge that is oriented 49° with respect to the crystal axis (Fig. 2A). Accordingly, the interference pattern forms approximately along the Go-GM direction (Fig. 2B). The measured dI/dV line cut (Fig. 2C) displays clear dispersing interference patterns superimposed on commensurate (inset) nondispersing modulations. Fourier transforming this map (Fig. 2D) reveals the energy evolution of the QPI along Go-GM. The dispersing ellipse- and square-shaped QPI (blue and green arrows, respectively) are identified by comparison to SSP (Fig. 2E); between these two, we observe (red arrow) a scattering signature among the two Fermi arcs (Q2 in Fig. 1E). Upon increasing the energy toward W2, the extent of each arc shrinks (7–9, 11–13), resulting in a linear increase of the inter-arc separation, which corresponds to an average velocity of ~10^5 m/s per arc. At the W2 energy (2 meV above Fermi energy), the inter-arc separation becomes the inter–Weyl node separation (see its evolution in the Supplementary Materials) and equals 5.4 ± 0.1 nm^−1, demonstrating a quantitative correspondence between the surface Fermi arc and the bulk Weyl node location in momentum space. Both values are consistent with our band structure modeling (9) and photoemission spectroscopy (11–13).
### Common origin of surface Fermi arc and bulk Weyl bands
In contrast to trivial states that are bound to the surface by the local surface potential, the Fermi arcs’ existence is guaranteed by the bulk topology. We examined the distribution of the two types of bands with respect to the topmost As layer. To this end, we decompose (see the Supplementary Materials) the dI/dV map to submaps measured on As sites (Fig. 2F) and on Ta sites, located one monolayer deeper (Fig. 2G). The Fourier transforms of the two submaps display distinct patterns. QPI on the As layer matches that of the ellipse band (blue arrows in Fig. 2, E and H). In contrast, on the Ta layer, we find two opposite V-shaped curves (red arrows in Fig. 2I) that agree with the SSP of intra–Fermi arc processes (Q3 in Fig. 2J). Intriguingly, because of the shrinking extent of the Fermi arcs in momentum space toward the Weyl node, the upper dispersing branch extrapolates to the energy of the W2 Weyl node, again demonstrating the bulk-surface correspondence in energy and momentum. Furthermore, the distinct QPI patterns assert that the Fermi arc’s wave function is profoundly distinct from the nontopological dangling bond bands. Whereas the latter are confined to the As termination layer, the Fermi arc states, which relate to the bulk Weyl-cone Ta states (Fig. 2, H and I, insets), indeed reside on the Ta sites and extend further into the bulk.
### Structure of the surface wave function and their topological classification
We now show that the Fermi arc bands differ from nontopological bands, also in their structure, parallel to the surface within the unit cell. Figure 3 shows QPI maps at three different energies (right panels), alongside dI/dV maps in a vacancy-free region (left panels). At −300 meV (below Fermi energy), the vacancy-free dI/dV map and the corresponding QPI map are approximately symmetric to 90° rotations, and the QPI patterns are concentrated around Go. In contrast, at 85 meV, 130 meV, and the Fermi energy (Fig. 1B, inset), the vacancy-free dI/dV shows a clear chain structure that changes its crystallographic orientation with energy. The QPI features at corresponding energies are strongly replicated along that modulation direction. Modulation in a vacancy-free region ought to be attributed to the structure of the wave function. The correlation demonstrated in Fig. 3 is established by detailed comparison of the energy evolution of the intensities of the vacancy-free Bragg peaks (Fig. 4A) with that of the QPI scattering peaks (Fig. 4B). The Bragg peak intensity is extracted directly from the Fourier transform of a dI/dV map (Fig. 4A, inset) that is taken in a vacancy-free region, and the QPI intensities are extracted from the average intensity of the various QPI patterns (Fig. 4B, inset, and the Supplementary Materials). We note that small-momentum structure, which often arises from long-wavelength inhomogeneities, is completely absent in the vacancy-free image and hence cannot account for any features detected in the QPI at the zone center. Direct comparison of the two measures reveals that the intensity of vacancy-free dI/dV modulations along Γ-Y is fully correlated with the Go-GY replications of the ellipse’s QPI; the same is observed for the bowtie along the Γ-X direction. The strong correlation between the two seemingly unrelated phenomena extends to all energies and suggests that both are dictated by the structure of the wave function rather than the details of the scatterer. In contrast to the trivial bands, the Fermi arcs’ QPI does not show any detectable replications, implying their relatively uniform distribution within the unit cell.
Aiming at using this distinction to separate the different states, we note that both the dI/dV map in the vacancy-free regions and the QPI near vacancies reflect the coupling of the electrons to the periodic potential on the surface plane. The Bloch theorem constrains a state with a crystal momentum k to be a superposition of momenta k + G, where G is a vector in the two-dimensional reciprocal lattice. Consequently, the local density of states in a vacancy-free region becomes ∑_g A_g(E) e^{ig·r}, where A_g(E) is the amplitude of the Bragg peak that corresponds to g = G − G′, r is the position, E is the energy, and E_k is the energy of the state with momentum k. A state with multiple substantial Bloch coefficients has a fine structure within the unit cell, which translates to multiple Bragg peaks. A vacancy violates the periodicity and adds a potential V(r), whose Fourier transform is V_q. The vacancy may scatter an electron between states Ψ_k(r) and Ψ_k′(r) through any momentum transfer q_g satisfying q_g = k − k′ + g. The amplitude for each of these processes is proportional, within the Born approximation, to V_{q_g} weighted by the corresponding Bloch coefficients of the two states. Hence, the multiple substantial coefficients result in replicas of the QPI around multiple Bragg peaks (30), limited by the ability of the potential to provide the required momentum transfer (see the Supplementary Materials).
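As a schematic illustration of this mechanism (a one-dimensional toy sketch of ours, not the analysis applied to the measured data; the lattice constant, momenta, and Bloch coefficients below are invented), a Bloch state with two sizable plane-wave components scattered by a point defect produces interference fringes not only at the bare momentum transfer but also at that transfer shifted by the reciprocal-lattice vector, i.e., replicas of the QPI peak.

```python
import numpy as np

a = 1.0                                  # lattice constant (arbitrary units)
G = 2 * np.pi / a                        # reciprocal-lattice vector
k_in, k_out = 0.3 * G, -0.3 * G          # incoming / outgoing surface momenta
u = {0: 1.0, 1: 0.6}                     # Bloch coefficients of the g = 0 and g = G components

x = np.arange(4000) * (a / 20)           # real-space grid covering an integer number of cells

def bloch(k):
    """Bloch wave psi_k(x) = sum_g u_g exp(i (k + g G) x)."""
    return sum(c * np.exp(1j * (k + n * G) * x) for n, c in u.items())

# Born-approximation interference term for a point scatterer: Re[psi_in* psi_out].
drho = np.real(np.conj(bloch(k_in)) * bloch(k_out))

q = 2 * np.pi * np.fft.rfftfreq(x.size, d=x[1] - x[0])
amp = np.abs(np.fft.rfft(drho))
peaks = np.sort(q[np.argsort(amp)[-3:]] / G)
print(np.round(peaks, 2))   # -> [0.4 0.6 1.6]: the bare transfer |k_out - k_in| = 0.6 G
                            #    plus its replicas shifted by the Bragg vector G
```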
Accordingly, the replicated QPI observed in Fig. 3 originates from bands whose wave functions include several substantial Bloch components. We attempt to eliminate these states from the QPI map by subtracting their scaled replicated signals from the Go signal. In Fig. 4C, we show the outcome of subtracting the ellipse QPI around G±Y from that around Go at EF (dashed and solid orange squares, respectively, in the inset of Fig. 4B). Whereas the ellipse is eliminated, the Γ-Y Fermi arcs’ QPI signature remains unchanged. This elimination further exposes a signature of the Γ-X Fermi arcs (compare to SSP of Fermi arcs shown in Fig. 4D). This observation indicates that the Fermi arc wave function on the surface differs from that on nontopological bands because it is composed of a single dominant term (g = 0) or a combination of terms whose momentum difference (g = GG) is larger than our resolution.
## DISCUSSION
The topological nature of Weyl semimetals is manifested by the bulk Weyl nodes, their Berry flux, and the essential surface Fermi arcs that accompany them. The correspondence between the bulk and surface states gives rise to various physical phenomena that characterize the topological semimetals and their unique electrodynamics (14–18). However, in real systems, there are also nontopological surface states that overlap in space and energy with the topological Fermi arcs. These states, which may originate, for instance, from dangling bonds, are ubiquitous. Their effects on phenomena that involve the Fermi arcs, such as the cyclotron frequency of cyclotron orbits that connect opposite surfaces, are not determined by topological considerations alone; rather, they are affected by the combined energy-momentum dispersion of both types of states, by the wave functions of both types of states, and by impurity-induced scattering between the two types of states that we visualize.
The measurements we report here provide information on the interplay between the Fermi arc states and the nontopological ones as well as on their correspondence with the bulk Weyl nodes. We visualized scattering processes among the Fermi arc surface bands, processes that scatter Fermi arc states to trivial states, and processes that scatter between trivial states. The two processes that involve only topological states were found to be correlated with the energy-momentum location of the bulk Weyl nodes. The intra-arc scattering channel (Fig. 2, I and J) extrapolates to the momentum separation of a Weyl pair, whereas the momentum transfer of the inter-arc scattering channel (Fig. 2, D and E) entails the momentum separation between Weyl nodes of adjacent pairs. We stress that this quantitative correspondence between the topologically classified bulk dispersion and the momentum extent of the Fermi arcs is unique to semimetallic topology classes. All previously studied topological electronic phases have a gapped bulk spectrum, which is thus spectrally featureless. Bulk-surface correspondence is also evident by the structure of the Fermi arc wave function that resides predominantly on the subjacent Ta sites, from which the bulk Weyl cones are also derived (22).
We further showed that the lateral spatial structure of the Fermi arc wave functions within the unit cell is rather uniform and resembles a plane wave. It stands in stark contrast to the intricate structure of the nontopological surface bands, as captured by their strongly replicated QPI patterns. This observation demonstrates that the topologically derived Fermi arc states are fairly oblivious to the surface potential, which is a property that is not shared by the nontopological ones. The method of analysis that we developed and implemented, in which the replicated structure of QPI patterns is used to separate overlapping features in the pattern, will have further applicability in future studies of Fermi arcs in Weyl semimetals and in other electronic systems. Many topological surface states in different materials did not exhibit any clear replications in their QPI signatures (29, 31, 32), possibly signifying their surface resilience. A counter example that calls for a closer examination is that of topological crystalline insulators whose Dirac surface states’ QPI signatures were found to be replicated (33). Strongly correlated electronic systems may also be probed in a similar fashion. For instance, QPI patterns in high-temperature superconductors (34, 35), in which charge order has been recently reported, also exhibit replications. It would be enticing to apply our method of analysis to characterize the structure of the Bloch wave functions in such systems and to possibly unveil hidden spectroscopic features. On a yet broader scope, our resolution of the detailed structure of the Bloch wave function in local density of states and QPI measurements suggests that it will further affect other physical processes that involve quantum electronic interference. Among these are Friedel oscillations and their signature in transport, and surface state–mediated RKKY interactions. The role of the structure of the Bloch wave function in these processes calls for further theoretical elucidation alongside experimental verification.
## MATERIALS AND METHODS
### Sample synthesis
The single crystals of TaAs were grown using the chemical vapor transport method in a two-zone furnace on the basis of the precursor of polycrystalline samples, which were prepared by mixing high-purity (>99.99%) Ta and As elements. Both the polycrystalline TaAs powder and 0.46 mg cm−3 of iodine were loaded into a 24-mm-diameter quartz tube and then sealed under vacuum. Two ends of the tube were kept at 1150°C (charged part) and 1000°C for 21 days. The synthesized single crystals can be as large as 0.5 to 1 mm in size.
### Spin-selective scattering probability
In the absence of a spin texture, measured QPI patterns are commonly compared to the calculated joint density of states (JDOS). The JDOS is the autocorrelation of the density of states across the Fermi surface ρE(k) and accounts for the summed amplitude of all available scattering processes of wave vectors (q) among the bands, JDOSE(q) = ∫ ρE(k) ρE(k + q) d²k. A spin texture of a band will further attenuate otherwise available scattering processes on the basis of the spin overlap of the initial and final states, yielding the spin-selective scattering probability SSPE(q) = ∫ ρE(k) ρE(k + q) |⟨s(k)|s(k + q)⟩|² d²k. Trigonometric identities can be used to cast this into a form that can be written as an autocorrelation.
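To make the autocorrelation construction concrete, here is a minimal numerical sketch (an addition, not part of the original paper) that computes a JDOS-style map for a model constant-energy contour using the FFT convolution theorem; the ring-shaped contour, grid size, and broadening are arbitrary illustrative choices.

```python
import numpy as np

# Model density of states at fixed energy: a broadened ring ("Fermi contour") in k-space.
n = 256
k = np.linspace(-1.0, 1.0, n)
kx, ky = np.meshgrid(k, k)
kr = np.sqrt(kx**2 + ky**2)
rho = np.exp(-((kr - 0.5) / 0.02) ** 2)      # contour at |k| = 0.5

# JDOS(q) = sum_k rho(k) rho(k+q): an autocorrelation, evaluated via FFT.
F = np.fft.fft2(rho)
jdos = np.fft.fftshift(np.real(np.fft.ifft2(F * np.conj(F))))

# A spin texture would enter as a |<s(k)|s(k+q)>|^2 weight inside the k-sum (the SSP);
# that weight can be expanded with trigonometric identities into a sum of terms,
# each of which is again an FFT-friendly autocorrelation.
print(jdos.shape, jdos.max())
```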
## SUPPLEMENTARY MATERIALS
Extended q-space map
dI/dV maps: Raw data and symmetrization
Fermi arc scattering signature
Agreement between vacancy- and step edge–induced QPI
Fermi arc dispersion
Correlation between scatterer-free dI/dV modulations and replications of QPI patterns
Correspondence between QPI patterns and Bloch wave function
Band structure calculations
Extracting the intensity of QPI features
Splitting the line-cut dI/dV into submaps
fig. S1. Extended q-space map.
fig. S2. dI/dV maps: Raw data and symmetrization.
fig. S3. QPI pattern involving Fermi arc scattering from a different vacancy distribution.
fig. S4. Agreement between vacancy- and step edge–induced QPI.
fig. S5. Calculated Fermi arc dispersion.
fig. S6. Structure of the Bloch wave function and its correspondence to QPI.
fig. S7. Wave function distribution.
fig. S8. Extraction of QPI feature intensities.
Reference (36)
This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial license, which permits use, distribution, and reproduction in any medium, so long as the resultant use is not for commercial advantage and provided the original work is properly cited.
## REFERENCES AND NOTES
Acknowledgments: Funding: H.B. acknowledges support from the European Research Council (ERC) (Starter Grant no. 678702, “TOPO-NW”), the Israel Science Foundation, and the United States–Israel Binational Science Foundation (BSF). C.F. acknowledges support from ERC (Advanced Grant no. 291472, “Idea Heusler”). A.S. acknowledges support from ERC under the European Union’s Seventh Framework Programme (FP7/2007-2013)/ERC Project MUNATOP, the Minerva Foundation, and the United States–Israel BSF. Author contributions: R.B., N.M., and N.A. acquired and analyzed the data; H.B. and N.A. conceived the experiments; H.B., N.A., and A.S. wrote the manuscript, with substantial contributions from all authors; B.Y. and Y.S. modeled the system; and M.S. grew the material in C.F.’s group. Competing interests: The authors declare that they have no competing interests. Data and materials availability: All data needed to evaluate the conclusions in the paper are present in the paper and/or the Supplementary Materials. Additional data related to this paper may be requested from the authors.
View Abstract | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8513107895851135, "perplexity": 2084.9882528187845}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703581888.64/warc/CC-MAIN-20210125123120-20210125153120-00541.warc.gz"} |
https://www.physicsforums.com/threads/would-a-magnetic-charge-have-the-same-strength-as-a-electric-charge.683014/ | # Would a magnetic charge have the same strength as a electric charge?
• #1
## Main Question or Discussion Point
If magnetic charges existed, would the strength of the field be the same as that of an electric charge? Would you be able to plug it into the equation of Coulomb's law? If so, what would the constant be? The same?
• #2
Well if they don't exist, then who's to say that they would have the same strength as the E.M.F.?
• #3
If magnetic charges existed, would the strength of the field be the same as that of an electric charge? Would you be able to plug it into the equation of Coulomb's law? If so, what would the constant be? The same?
You would need to use the permeability of free space rather than the permittivity, but otherwise yes.
• #4
vanhees71
Using quantum theory, Dirac has shown that the existence of a magnetic monopole implies the quantization of electrical charges. This would be great, because there is no explanation for a quantization of charges from any fundamental principle within the standard model of elementary particles yet (despite the fact that the charge pattern is restricted by the demand of an anomaly free chiral gauge group for the electroweak sector). Dirac's analysis shows that the strength of the magnetic monopole would be given by the then quantized electric charge of elementary particles. This rule reads (in Gaussian units)
$$e g_n =\frac{n}{2} \hbar c,$$
where $e$ is the elementary electric charge and $g_n$ are the possible values of the magnetic charge, with $n \in \mathbb{Z}$.
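As a quick numerical illustration (an addition, not part of the original thread): in Gaussian units the fine-structure constant is $\alpha = e^2/\hbar c \approx 1/137$, so the smallest allowed monopole charge from the condition above is
$$g_1 = \frac{\hbar c}{2e} = \frac{e}{2\alpha} \approx 68.5\, e,$$
i.e., a Dirac monopole would couple much more strongly than an elementary electric charge; the corresponding "magnetic fine-structure constant" is $g_1^2/\hbar c = 1/(4\alpha) \approx 34$.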
https://www.physicsforums.com/threads/commutativity-of-tensor-field-multiplication.371729/ | # Commutativity of Tensor Field Multiplication
1. Jan 22, 2010
### Kreizhn
It may seem like a very simple question, but I just want to clarify something:
Is tensor field multiplication non-commutative in general?
For example, if I have two tensors $A_{ij}, B_k^\ell$ then in general, is it true that
$$A_{ij} B_k^\ell \neq B_k^\ell A_{ij}$$
I remember them being non-commutative, but I want to make sure.
Last edited: Jan 22, 2010
2. Jan 22, 2010
### bcrowell
Staff Emeritus
The example you wrote down should have an equals sign. Just write out the sum without the Einstein summation convention; all you're doing is reversing the order of two real numbers being multiplied in each term.
On the other hand, $A_i^j B_{jk} \ne B_i^j A_{jk}$, because, e.g., in a metric with +++ signature, this would just be a way of writing ordinary matrix multiplication, which is noncommutative.
Another example would be that the covariant derivative acts like a tensor, in the sense that you can raise and lower indices on it, but covariant derivatives don't commute with each other -- their commutator is the Riemann tensor.
3. Jan 22, 2010
### bapowell
Yes, in general tensor multiplication is non-commutative. Matrix multiplication is an example.
Last edited: Jan 22, 2010
4. Jan 22, 2010
### Kreizhn
This is what I thought, though I would like to clarify a bit.
So for fixed indices, if the (mathematical) field commutes then the product as I've written it commutes since these are just field elements. However, the moment we introduce a summation we cannot guarantee commutativity? Scalars being the exception.
5. Jan 22, 2010
### bcrowell
Staff Emeritus
This is incorrect. His example is not an example of reversing the order of multiplication of two matrices. See my #2. If all you do is reverse the order of the two factors, written in Einstein summation convention, that isn't the same as reversing the order of multiplication of two matrices; you have to change the arrangement of the indices with respect to the two tensors, or else you're just writing another expression that's equivalent to the original expression.
6. Jan 22, 2010
### Kreizhn
So the noncommutativity really arises from the indices, not the order of the representations.
7. Jan 22, 2010
### bcrowell
Staff Emeritus
The issue isn't that there's a summation. Your example includes a summation (an implied Einstein summation), and is an equality. My example in #2 includes a summation, and is an inequality. The issue is that you didn't rearrange the indices in the way you'd have to in order to represent something like commutation of matrix multiplication.
8. Jan 22, 2010
### bcrowell
Staff Emeritus
Right :-)
9. Jan 22, 2010
### bapowell
Agreed. I didn't look closely enough at his example. Tried to edit my post but, alas, too late! Apologies.
10. Jan 23, 2010
### George Jones
Staff Emeritus
There seems to be some confusion in thread, so I am going to try to contribute further confusion.
Suitably interpreted, the answer to the question "Is tensor multiplication commutative?" is "No." , and this agrees with everything that bcrowell wrote.
I think (but I could be wrong, and apologies if so) that Kreizhn and bapowell mean "tensor product" when they write "tensor multiplication," and the tensor product of two tensors is non-commutative, that is, if $\mathbf{A}$ and $\mathbf{B}$ are two tensors, then it is not generally true that $\mathbf{A} \otimes \mathbf{B} = \mathbf{B} \otimes \mathbf{A}$.
Consider a simpler example. Let $V$ be a finite-dimensional vector space, and let $\mathbf{u}$ and $\mathbf{v}$ both be non-zero vectors in $V$. Form the tensor product space $V \otimes V$. To see when
$$0 = \mathbf{u} \otimes \mathbf{v} - \mathbf{v} \otimes \mathbf{u},$$
introduce a basis $\left\{ \mathbf{e}_i \right\}$ for $V$ so that $\left\{ \mathbf{e}_i \otimes \mathbf{e}_j \right\}$ is a basis for $V \otimes V$. Then,
$$\begin{equation*} \begin{split} 0 &= \mathbf{u} \otimes \mathbf{v} - \mathbf{v} \otimes \mathbf{u} \\ &= \left(u^i v^j - u^j v^i \right) \mathbf{e}_i \otimes \mathbf{e}_j . \end{split} \end{equation*}$$
Because the basis elements are linearly independent,
$$u^i v^j = u^j v^i$$
for all possible $i$ and $j$. WLOG, assume that all the components of $\mathbf{u}$ are non-zero. Consequently,
$$\frac{v^j}{u^j} = \frac{v^i}{u^i}$$
(no sum) for all possible $i$ and $j$, i.e., $\mathbf{u}$ and $\mathbf{v}$ are parallel.
Thus, if non-zero $\mathbf{u}$ and $\mathbf{v}$ are not parallel,
$$\mathbf{u} \otimes \mathbf{v} \ne \mathbf{v} \otimes \mathbf{u}.$$
In component form, this reads
$$u^i v^j \ne u^j v^i$$
for some $i$ and $j$. As bcrowell emphasized, placement of indices is crucial.
In the original post, I think (again, I could be wrong) that Kreizhn was trying to formulate the property of non-commutativity of tensor products in the abstract-index approach advocated by, for example, Penrose and Wald. In this approach, indices do *not* refer to components with respect to a basis (no basis is chosen) and indices do *not* take on numerical values (like 0, 1, 2, 3), indices pick out copies of the vector space $V$. The index $i$ on $v^i$ indicates the copy of $V$ in which $v^i$ resides. Vectors $v^i$ and $v^j$ live in different copies of $V$. Vectors $v^i$ and $u^i$ live in the same copy of $V$.
In the component approach, $v^i u^j = u^j v^i$ because multiplication of real numbers is commutative. In the abstract-index approach, $v^i u^j = u^j v^i$ because on each side $v^i$ lives in the same copy of $V$, and on each side $u^j$ lives in the same (different) copy of $V$.
In the abstract index approach, non-commutativity of tensor products is indicated by, for example, $v^i u^j \ne v^i u^j$.
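To make the distinction concrete, here is a small numerical illustration (an addition, not part of the thread): the tensor (outer) product of two non-parallel vectors is order-dependent, while the individual component products of course commute.

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([0.0, 1.0, -1.0])

# Tensor (outer) product: (u ⊗ v)_{ij} = u_i v_j
uv = np.outer(u, v)
vu = np.outer(v, u)

# u ⊗ v ≠ v ⊗ u for non-parallel u, v ...
print(np.allclose(uv, vu))        # False
# ... but it equals the transpose, since u_i v_j = v_j u_i componentwise:
print(np.allclose(uv, vu.T))      # True

# Matrix multiplication (a contraction, A^i_j B^j_k) is likewise order-dependent:
A = np.random.rand(3, 3)
B = np.random.rand(3, 3)
print(np.allclose(A @ B, B @ A))  # False in general
```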
11. Jan 23, 2010
### bcrowell
Staff Emeritus
I'm not sure that it really matters which way you interpret Kreizhn's original post. Let's say he wrote down the conjecture
$$A_{ij} B_k^\ell \neq B_k^\ell A_{ij}$$
with abstract index notation in mind. Then one way to test the conjecture is like this. We know by the definition of manifolds that the manifold is locally compatible with coordinate systems, so since coordinate systems exist, let's arbitrarily fix one. Rewrite the equation with Greek indices to show that they refer to these coordinates, rather than being abstract indices.
$$A_{\mu \nu} B_\kappa^\lambda \neq B_\kappa^\lambda A_{\mu \nu}$$
By the axioms of the real numbers we can see that this is actually an equality, not an inequality. Since the equality held regardless of any assumption about which particular coordinates we chose, it follows that the original inequality, in abstract index notation, should also be an equality.
In other words, the rules of tensor gymnastics don't change just because you're using abstract index notation.
This seems fine to me, but I would emphasize that, as in the example I gave above, you don't need to forswear the manipulation of symbols according to the ordinary axioms of the real number system just because you're using abstract index notation. All you have to forswear is invocation of any special properties of a particular set of coordinates.
Here you've really lost me. I think this must just be a typo or something, because both sides of the inequality are written using identical symbols, so it must be an equality.
12. Jan 23, 2010
### Altabeh
In another thread, Kreizhn brings up the same question again; there I answered the whole thing fairly clearly:
But now I want to clear up, from my own point of view, some of the confusion that has arisen here.
bcrowell says
and George does confirm this answer by saying
Rest assured that both are correct. I know exactly where the question arises from. I think Kreizhn is active in some area of physics that deals with tensors from a physicist's standpoint. I'm saying this because physicists usually don't clarify what their mathematical approach is based on, and maybe they are afraid of using symbols like $$\otimes$$. I mean that one can picture two scenarios as soon as something like $$A_{ij} B_k^\ell$$ is seen for the first time in a textbook: First, one can assume that this is a tensor product, which can really come to mind even if you are a very professional expert. We use symbols like $$g_{ab}$$ for a second-rank 4-by-4 matrix --basically the metric tensor-- in GR, for instance, while it is really indicative of components of an unseen matrix, so I could expect $$g_{ab}v^a$$ to be taken as a tensor product by someone, where here va is a 4-by-1 tensor (matrix). Speaking of which, this scenario is somewhat logical, and the OP's concern about whether $$A_{ij} B_k^\ell \neq B_k^\ell A_{ij}$$ is true then appears completely rational.
Remember that, to avoid confusion, mathematicians use bold-faced Latin letters to denote a second-rank tensor or matrix, for example, so they are not worried about anything coming out of the componential representation of matrix multiplication $$g_{ab}v^a$$. Yes, this is the second scenario: $$g_{ab}v^a$$ is a componential representation of matrix multiplication or, put in simple words, it is componential multiplication, i.e., the usual multiplication of numbers, and, as I quoted above from my own post, componential multiplication is commutative. But this scenario, besides being logical, is TRUE, and this is what makes it distinct from the first one.
And the last thing to recall is that in the componential approach one CAN consider the multiplication as being non-commutative, but this requires the strong stipulation that all such multiplications are meant in the sense of matrix multiplication.
AB
Last edited: Jan 23, 2010 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9488356113433838, "perplexity": 470.7996587725359}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267865181.83/warc/CC-MAIN-20180623190945-20180623210945-00033.warc.gz"} |
https://brilliant.org/weekly-problems/2018-03-05/advanced/ | # Problems of the Week
You have 2016 sticks of the same length in a box. You pick a stick at random, break it into two equal halves, and put them back in for a total of 2017 sticks. You repeat this process of random picking and breaking indefinitely.
What is the maximum value of $x$ such that, at any time during this process, you are guaranteed to have at least $x$ sticks of the same length?
We have a partition of 2018. If the maximum value of the product of the numbers in the partition can be expressed as $a \times b^c,$ where $a$ and $b$ are primes and $c$ is an integer, then what is $a+b+c?$
The first diagram shows 3 circles inscribed in a semicircle. The orange line shows their centers connected together with the endpoints of the diameter of the semicircle.
If we inscribe infinitely many circles, not just 3, the resulting orange curve is part of what kind of curve?
This problem can be viewed as the 3D analog of Marion's theorem.
Imagine that each edge of a tetrahedron is trisected. Then, through each of these 12 points and its two opposite vertices, a plane is constructed for a total of 12 planes.
Now, let $V$ denote the volume of the tetrahedron, and $V_M$ the volume of the 3D figure carved out by the 12 planes inside the tetrahedron. If $V_M=\frac{A}{B}V,$ where $A$ and $B$ are coprime positive integers, find $A+B.$
The 3D figure in question is shown below:
Let $S_n=\dfrac{1}{1^n}+\dfrac{1+\frac{1}{2}}{2^n}+\dfrac{1+\frac{1}{2}+\frac{1}{3}}{3^n}+\cdots$.
Then, for positive even numbers $m$, there is a beautiful relationship between $S_{m}$ and the Riemann zeta function $\zeta(\cdot)$: $\begin{array} { l r c } S_2 &=& 2\zeta(3) \\ S_4 &=& {3\zeta(5)} \quad {-\zeta(2)\zeta(3)} \\ S_6 &=& {4\zeta(7)}\quad {-\zeta(2)\zeta(5)} \quad {-\zeta(3)\zeta(4)} \\ S_8 &=& {5\zeta(9)}\quad {-\zeta(2)\zeta(7)} \quad {-\zeta(3)\zeta(6)} \quad {-\zeta(4)\zeta(5)} \\ & \vdots & \\ S_{m} &=& \displaystyle \frac{m+2}2 \zeta(m+1) - \sum_{k=2}^{\frac m2} \zeta(k) \zeta(m-k). \end{array}$ However, there is also a relationship between positive odd numbers $m$ and the Riemann zeta function. Find this relationship and submit your answer as $\dfrac{\pi^4}{S_3}.$
Bonus: Prove the pattern shown above.
https://www.tuitionwithjason.sg/2018/12/additional-math-differentiation-equations-with-cot-x-and-cosec-x/ | $\displaystyle y=\cot^{2}x-5+\csc x$
It is difficult to differentiate cot x and cosec x directly. You need to change to a form where you can differentiate cot x = $\displaystyle \frac{1}{{\tan x}}$ and cosec x = $\displaystyle \frac{1}{{\sin x}}$.
$\displaystyle y=\frac{1}{\tan^{2}x}-5+\frac{1}{\sin x}$
$\displaystyle y=\tan^{-2}x-5+(\sin x)^{-1}$
Use the chain rule to differentiate the equation. Don’t forget that tan x differentiates to $\displaystyle \sec^{2}x$ and sin x to $\displaystyle \cos x$
$\displaystyle \frac{dy}{dx}=-2\tan^{-3}x\times \sec^{2}x-(\sin x)^{-2}\times \cos x$
$\displaystyle \frac{dy}{dx}=\frac{-2}{\tan^{3}x}\times \frac{1}{\cos^{2}x}-\frac{1}{\sin^{2}x}\times \cos x$
$\displaystyle \frac{dy}{dx}=\frac{-2}{\tan^{3}x\,\cos^{2}x}-\frac{\cos x}{\sin^{2}x}$
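As a quick check of the result (an addition, not part of the original post), SymPy can confirm that the hand-derived expression equals the derivative of the original function:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.cot(x)**2 - 5 + sp.csc(x)

# Derivative computed by SymPy
dy = sp.diff(y, x)

# The hand-derived answer: -2/(tan^3 x cos^2 x) - cos x / sin^2 x
hand = -2/(sp.tan(x)**3 * sp.cos(x)**2) - sp.cos(x)/sp.sin(x)**2

print(sp.simplify(dy - hand))   # 0, so the two expressions agree
```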
https://dsp.stackexchange.com/questions/16497/transformed-direct-form-ii-filter-z-transform-reduce-multiplications | # Transformed Direct Form II filter: Z transform: reduce multiplications
In the diagram below there are three multiplication operations required to compute each filter output sample. I'd like to draw a block diagram of an equivalent filter but requiring fewer than three multiplies per filter output sample.
I need to transform the filter structure to Transformed Direct Form II, so that it can be seen that the number of multiplications can be reduced to 2. But how do I do that?
UPDATE:
As for the filter, are we talking about something like the picture below? If that's the case, what's up with that unmarked node? What is it, an addition? Just a redirection? I've never seen that before.
I'm trying to extract some generalizable rules about how to construct those neat equations you've made about the output and A & B, but I'm not really able to put it into words because I have only the foggiest notion of what you've done there. Would you help me with that?
It's instructive to look at the actual difference equation implemented by the system. If you define a sequence $w[n]$ as the output of the first (left-hand side) adder you get
$$w[n] = x[n] + Aw[n-1]\\ y[n] = Bw[n] + Bw[n-1]$$
Applying the $\mathcal{Z}$-transform gives
$$W(z) = X(z) + A \ W(z)z^{-1}\\ Y(z) = B \ W(z)\left(1 + z^{-1}\right)$$
which results in
$$W(z) = \frac{X(z)}{1-Az^{-1}}\\ Y(z) = \frac{B(1+z^{-1})}{1-Az^{-1}}X(z)$$
In the time-domain this last equation is equivalent to
$$y[n] = Ay[n-1]+B \left( x[n] + x[n-1] \right) \tag{1}$$
Equation (1) shows that this system can obviously be implemented with only two multiplications. If you want to use the Transposed Direct-Form II structure then just use the structure shown here (bottom figure), set $a_1=-A$, remove the branches with multipliers $a_2$ and $b_2$ and the lower delay element (because you only have a first order filter), remove the multipliers $b_0$ and $b_1$, and multiply the input signal directly with $B$.
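Here is a minimal sketch (an addition, not from the original answer) of the two-multiply implementation of Eq. (1), together with a check against scipy.signal.lfilter using the equivalent transfer function $B(1+z^{-1})/(1-Az^{-1})$; the coefficient values and test signal are arbitrary:

```python
import numpy as np
from scipy.signal import lfilter

A, B = 0.5, 0.25
x = np.random.randn(1000)

# Direct implementation of y[n] = A*y[n-1] + B*(x[n] + x[n-1]): two multiplies per sample.
y = np.zeros_like(x)
xprev, yprev = 0.0, 0.0
for n, xn in enumerate(x):
    y[n] = A * yprev + B * (xn + xprev)
    xprev, yprev = xn, y[n]

# Same system written as a rational transfer function for lfilter.
y_ref = lfilter([B, B], [1.0, -A], x)
print(np.allclose(y, y_ref))   # True
```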
EDIT: (in reaction to your updated question)
There are a few mistakes in your figure. Note that in the figure I linked to, the input is on the right-hand side. This might have caused some confusion. Anyway, in your figure where you multiply by $B$, just multiply by $A$ instead. Then remove the multiplication by $-A$ and add a multiplication by $B$ to the input signal $x[n]$, before the path splits. The node you refer to is simply an addition; they obviously forgot the 'plus' sign.
As for the equations, what I wanted to do is to get a difference equation for the output signal $y[n]$ in terms of $x[n]$ and of delayed versions of $x[n]$ and $y[n]$. This shows what the system actually does and in this way the description of the system becomes independent of the chosen filter structure. In order to do that, I used the $\mathcal{Z}$-transform which transforms difference equations into algebraic equations. So in the $\mathcal{Z}$-domain it is straightforward to eliminate the signal $w[n]$ and write $Y(z)$ directly in terms of $X(z)$. Transforming back to the time-domain yields the desired difference equation, which can then be implemented by any structure you like.
• Hi, I wanted to ask a question with a picture so I posted it as an answer, I know that's in bad taste, but the comments on this site are so maladapted I didn't really have any other choice! or did I? – user8769 May 29 '14 at 1:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8473908305168152, "perplexity": 303.59945816823466}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401632671.79/warc/CC-MAIN-20200929060555-20200929090555-00699.warc.gz"} |
https://dsp.stackexchange.com/questions/38547/what-is-the-easiest-most-straight-forward-way-to-prove-this-about-minimum-phase | # What is the easiest, most straight-forward way to prove this about minimum-phase filters?
Using the "unitary" or "ordinary frequency" or "Hz" convention for the continuous Fourier Transform:
\begin{align} X(f) \triangleq \mathscr{F}\{x(t)\} &= \int\limits_{-\infty}^{\infty} x(t) \, e^{-j 2 \pi f t} \, dt \\ \\ x(t) = \mathscr{F}^{-1}\{X(f)\} &= \int\limits_{-\infty}^{\infty} X(f) \, e^{j 2 \pi f t} \, df \\ \end{align}
So we learn that the Hilbert transform maps a signal or function in the time domain to another in the same domain:
\begin{align} \hat{x}(t) \triangleq \mathscr{H}\{x(t)\} &= \frac{1}{\pi t} \circledast x(t) \\ \\ &= \int\limits_{-\infty}^{\infty} \frac{1}{\pi u} \, x(t-u) \, du \\ \\ &= \int\limits_{-\infty}^{\infty} \frac{1}{\pi (t-u)} \, x(u) \, du \\ \end{align}
and the Hilbert transformer is LTI, so we know that $$\hat{x}(t-\tau) = \mathscr{H}\{x(t-\tau)\}$$ . And, even though LTI, we know that a Hilbert transformer is not causal (but, given enough delay, we can realize an approximation to a Hilbert transformer as well as we want, to within a given non-zero error).
And we know that this LTI Hilbert transformer has frequency response
\begin{align} \hat{X}(f) & \triangleq \mathscr{F}\{\hat{x}(t)\} \\ \\ & = -j \, \operatorname{sgn}(f) X(f) \\ \\ & = \begin{cases} e^{-j \pi/2} \, X(f) \qquad & f>0 \\ 0 \qquad & f=0 \\ e^{+j \pi/2} \, X(f) \qquad & f<0 \\ \end{cases} \\ \end{align}
where, of course, $$X(f) \triangleq \mathscr{F}\{x(t)\}$$ . So all positive frequency components are shifted in phase by -90° and all negative frequency components are phase shifted by +90°. None of the amplitudes are affected except for DC, which is wiped out. That's fundamentally what a Hilbert transformer does.
From this we know about analytic signals:
\begin{align} x_\text{a}(t) & \triangleq x(t) + j\hat{x}(t) \\ \\ X_\text{a}(f) & = X(f) + j\hat{X}(f) \\ & = X(f) + j( -j \, \operatorname{sgn}(f) X(f) ) \\ & = (1 + \operatorname{sgn}(f)) \, X(f) \\ \\ & = \begin{cases} 2 X(f) \qquad & f>0 \\ X(f) \qquad & f=0 \\ 0 \qquad & f<0 \\ \end{cases} \\ \end{align}
So, if we have a complex-valued time-domain signal, $$x_\text{a}(t)$$ in which the real and imaginary parts of this signal form a Hilbert-transform pair, then in the frequency domain, all negative frequency components have zero-amplitude. Because of the symmetrical nature of the Fourier transform, we have duality and can reverse the roles of time $$t$$ and frequency $$f$$. This means, if we have a complex-valued frequency-domain spectrum, $$X(f)$$ in which the real and imaginary parts of this spectrum form a Hilbert-transform pair, then in the time domain, all negative time components have zero-amplitude.
Stated again, but substituting impulse response $$h(t)$$ for $$x(t)$$, and frequency response $$H(f)$$ for $$X(f)$$, we know
$$\Im\{h(t)\} = \mathscr{H}\big\{\Re\{h(t)\}\big\} \iff H(f) = 0 \quad \forall f<0$$
and similarly
$$\Im\{H(f)\} = -\mathscr{H}\big\{\Re\{H(f)\}\big\} \iff h(t) = 0 \quad \forall t<0$$
where $$H(f) \triangleq \mathscr{F}\{h(t)\}$$
An LTI system described by impulse response $$h(t)$$ that is zero for all $$t$$ that is negative, is what we call a "causal system", because the impulse response does not respond to the driving impulse until that driving impulse occurs in time. So for every realizable, real-time LTI system (which must be causal), the real and imaginary parts of the frequency response are a Hilbert pair in the frequency domain. None of this is particularly surprizing or special.
So (as Matt anticipated) there is something more about relating the real and imaginary parts of something regarding LTI systems that is a bit surprizing (or, at least, is not trivial). We have two definitions or descriptions of LTI systems or LTI filters that are in this class called "minimum-phase filters":
1. LTI filters with rational transfer functions (of which the numerator and denominator can be factored resulting in roots that are called zeros and poles, respectively) in which both poles and zeros lie in the left-half plane:
$$H\left( \frac{s}{j 2 \pi} \right) = A \frac{(s-q_1)(s-q_2)...(s-q_M)}{(s-p_1)(s-p_2)...(s-p_N)} \qquad \qquad M \le N$$
Required for stability: $$\Re\{p_n\} < 0$$ for all $$1 \le n \le N$$
Required for minimum phase: $$\Re\{q_m\} < 0$$ for all $$1 \le m \le M$$
These filters are called "minimum phase" because for any zero $$q_m$$ in the left half-plane, an All-pass filter having a pole at precisely the same location will cancel that zero and will reflect it to the right half-plane:
$$H_\text{AP}\left( \frac{s}{j 2 \pi} \right) = \frac{s+q_m}{s-q_m}$$
This all-pass filter has frequency response with magnitude of exactly 0 dB for all frequencies:
$$|H_\text{AP}(f)| = 1 \qquad \qquad \forall f$$
but the phase angle is not zero, this APF adds (negative) phase shift:
$$\arg \{ H_\text{AP}(f) \} = -2 \arctan\left(\frac{2 \pi f - \Im\{q_m\}}{-\Re\{q_m\}}\right)$$
The resulting cascaded filter $$H\left( \frac{s}{j 2 \pi} \right) \cdot H_\text{AP}\left( \frac{s}{j 2 \pi} \right)$$ with the zero $$q_m$$ reflected to the right half-plane has the same magnitude as the original filter (having all zeros in the left half-plane), but has more (negative) phase shift. More phase delay and more group delay. The "minimum-phase" filter is the only filter having exactly the same magnitude response that has less (negative) phase shift than any of the clones with APFs reflecting zeros to the right half-plane.
A "Maximum-phase" filter is one where all of the zeros live in the right half-plane or $$\Re\{ q_m \} \ge 0$$ .
So the second definition of a minimum-phase filter specifies exactly how this minimum-phase response is related to the magnitude response:
1. An LTI system or filter
$$H(f) = |H(f)| e^{j \arg\{H(f)\}} = |H(f)| e^{j \phi(f)}$$
is minimum phase if and only if the natural phase response, in radians, is the negative of the Hilbert transform of the natural logarithm of the magnitude response:
$$\phi(f) \triangleq \arg\{ H(f) \} = -\mathscr{H}\big\{ \ln( |H(f)| ) \big\}$$
since
\begin{align} H(f) & = |H(f)| e^{j \phi(f)} \\ & = e^{\ln(|H(f)|)} e^{j \phi(f)} \\ & = e^{\ln(|H(f)|) + j \phi(f)} \\ & = e^{\ln(H(f))} \\ \end{align}
this is relating the real and imaginary parts of the complex natural $$\log()$$ of the frequency response. Say we can construct a hypothetical LTI filter, $$G(f)$$ with complex frequency response equal to that complex logarithm
\begin{align} G(f) & = \ln(H(f)) \\ & = \ln(|H(f)|) + j \phi(f) \\ & = \Re\{G(f)\} + j \Im\{G(f)\} \\ \end{align}
\begin{align} \Im\{G(f)\} & = \phi(f) = -\mathscr{H}\big\{ \ln( |H(f)| ) \big\} \\ & = -\mathscr{H}\big\{ \Re\{G(f)\} \big\} \\ \end{align}
then the impulse response corresponding to $$G(f)$$ would be causal:
$$\mathscr{F}^{-1}\{ G(f) \} = g(t) = 0 \qquad \qquad \forall t<0$$
The purpose of this question is to resolve the two definitions of a minimum-phase filter. Given the first definition, I don't see any direct reason why the hypothetical $$G(f) = \ln(H(f))$$ should have a causal impulse response $$g(t)$$.
The only way to resolve the two definitions directly is to consider:
$$H(f) = A \frac{(j2\pi f-q_1)(j2\pi f-q_2)...(j2\pi f-q_M)}{(j2\pi f-p_1)(j2\pi f-p_2)...(j2\pi f-p_N)}$$
(assume for the moment that $$A>0$$)
$$\ln(|H(f)|) = \ln(A) + \sum_{m=1}^{M} \ln(|j2\pi f-q_m|) - \sum_{n=1}^{N} \ln(|j2\pi f-p_n|)$$
$$\phi(f) \triangleq \arg\{H(f)\} = \sum_{m=1}^{M} \arg\{j2\pi f-q_m\} - \sum_{n=1}^{N} \arg\{j2\pi f-p_n\}$$
We know that the Hilbert transform of a constant function is zero, so
$$\mathscr{H}\big\{ \ln(A) \big\} = 0$$
then if we can prove that each remaining corresponding terms of the summations in $$\ln(|H(f)|)$$ and $$\arg\{H(f)\}$$ are Hilbert pairs, that is, if we can show
$$\arg\{j2\pi f-q_m\} = -\mathscr{H}\big\{ \ln(|j2\pi f-q_m|) \big\} \qquad 1 \le m \le M$$
and
$$\arg\{j2\pi f-p_n\} = -\mathscr{H}\big\{ \ln(|j2\pi f-p_n|) \big\} \qquad 1 \le n \le N$$
given that $$\Re\{q_m\} < 0$$ and $$\Re\{p_n\} < 0$$,
then we can show that
$$\phi(f) \triangleq \arg\{ H(f) \} = -\mathscr{H}\big\{ \ln( |H(f)| ) \big\}$$
We don't have to worry much about phase wrapping when considering a single first-order term. Since the form is the same for both zeros and poles, considering just a single zero
\begin{align} \arg\{j2\pi f - q_m\} & = \arg\{j2\pi f - (\Re\{q_m\} + j \Im\{q_m\})\} \\ & = \arg\{-\Re\{q_m\} + j(2\pi f - \Im\{q_m\})\} \\ & = \arctan\left(\frac{2\pi f - \Im\{q_m\}}{-\Re\{q_m\}}\right) \\ \end{align}
and
\begin{align} \ln(|j2\pi f-q_m|) & = \ln(|j2\pi f - (\Re\{q_m\} + j \Im\{q_m\})|) \\ & = \ln(|-\Re\{q_m\} + j(2\pi f - \Im\{q_m\})|) \\ & = \ln\left( \sqrt{(-\Re\{q_m\})^2 + (2\pi f - \Im\{q_m\})^2} \ \right) \\ & = \tfrac12 \ln\big( (-\Re\{q_m\})^2 + (2\pi f - \Im\{q_m\})^2 \big) \\ \end{align}
So now it becomes a task to show that
$$\arctan\left(\frac{2\pi f - \Im\{q_m\}}{-\Re\{q_m\}}\right) = -\mathscr{H}\Big\{ \tfrac12 \ln\big((-\Re\{q_m\})^2 + (2\pi f - \Im\{q_m\})^2 \big) \Big\}$$
Now remember that, in the time domain, the Hilbert transformer is LTI, so we know that $$\hat{x}(t-\tau) = \mathscr{H}\{x(t-\tau)\}$$ and it doesn't matter what $$\tau$$ is, it's just an offset to time $$t$$ in both input and output to the Hilbert transformer.
Here, in the frequency domain, the offset to frequency $$f$$ is $$\frac{\Im\{q_m\}}{2 \pi}$$, so without loss of generality, we can eliminate $$\Im\{q_m\}$$ from both sides:
$$\arctan\left(\frac{2\pi f}{-\Re\{q_m\}}\right) = -\mathscr{H}\Big\{ \tfrac12 \ln\big((-\Re\{q_m\})^2 + (2\pi f)^2 \big) \Big\}$$
This breaks the problem down to a single real pole and real zero, both in the left half-plane. Now we can normalize out $$-\Re\{q_m\}$$ and the $$2 \pi$$ with the substitution:
$$\omega \triangleq \frac{2\pi f}{-\Re\{q_m\}}$$
resulting in
\begin{align} \arctan(\omega) &= -\mathscr{H}\Big\{ \tfrac12 \ln\big((-\Re\{q_m\})^2 + (\omega \cdot (-\Re\{q_m\}))^2 \big) \Big\} \\ &= -\mathscr{H}\Big\{ \tfrac12 \ln\big((-\Re\{q_m\})^2 \cdot (1 + \omega^2) \big) \Big\} \\ &= -\mathscr{H}\Big\{ \ln(-\Re\{q_m\}) + \tfrac12 \ln(1 + \omega^2) \Big\} \\ &= -\mathscr{H}\Big\{ \tfrac12 \ln(1 + \omega^2) \Big\} \\ \end{align}
That last term $$\ln(-\Re\{q_m\})$$ is eliminated because the Hilbert transform of a constant is zero.
So now, the bottom line, to prove the equivalence of the two definitions of what a minimum-phase filter is, we "simply" need to prove the identity above (or below).
Can someone, without using Contour Integration or Residue Theory or results from complex variable analysis, prove this fact? :
\begin{align} \arctan(\omega) &= -\tfrac12 \mathscr{H}\big\{ \ln(1 + \omega^2) \big\} \\ \\ &= -\tfrac12 \int\limits_{-\infty}^{\infty} \frac{1}{\pi u} \, \ln(1 + (\omega-u)^2) \, du \\ \\ &= -\tfrac12 \int\limits_{-\infty}^{\infty} \frac{1}{\pi (\omega-u)} \, \ln(1 + u^2) \, du \\ \end{align}
• I guess this is going to be about the Hilbert transform relationship between log magnitude and phase of a minimum phase system ...? – Matt L. Mar 22 '17 at 7:33
• i'm getting there, @MattL. it's going to be about reconciling the two different definitions of a minimum-phase filter. and we hadn't yet gotten to the second definition (that you allude to). – robert bristow-johnson Mar 22 '17 at 23:38
• Wow @robertbristow-johnson! That last line and equation may be good to post in the mathematics site as well (with no need of the background there I don't think, only the definition of $\mathscr{H}$) – Dan Boschen Mar 23 '17 at 3:11
• something like that is my plan, @DanBoschen . just want to throw it out here first. maybe let Olli or MattL. take a whack at it. (i have an approach, and it is showing the derivatives of the two functions make a Hilbert pair.) – robert bristow-johnson Mar 23 '17 at 3:28
• so @DanBoschen, i have done exactly as you suggested. – robert bristow-johnson Mar 23 '17 at 5:51
The Hilbert transform $\mathcal{H}\left\{f(\omega)\right\}$ with
$$f(\omega)=-\frac12\log(1+\omega^2)\tag{1}$$
can be calculated in the following way. First, note that
$$\frac{df(\omega)}{d\omega}=-\frac{\omega}{1+\omega^2}\tag{2}$$
From this table we know that
$$\mathcal{H}\left\{\frac{1}{1+\omega^2}\right\}=\frac{\omega}{1+\omega^2}\tag{3}$$
We also know that
$$\mathcal{H}\{\mathcal{H}\{f\}\}=-f\tag{4}$$
Combining $(3)$ and $(4)$ we get
$$\mathcal{H}\left\{\frac{\omega}{1+\omega^2}\right\}=\mathcal{H}\left\{\mathcal{H}\left\{\frac{1}{1+\omega^2}\right\}\right\}=-\frac{1}{1+\omega^2}\tag{5}$$
So, using $(2)$,
$$\mathcal{H}\left\{\frac{df(\omega)}{d\omega}\right\}=\frac{1}{1+\omega^2}\tag{6}$$
Now we also know that the Hilbert transform operator and the differentiation operator commute:
$$\mathcal{H}\left\{\frac{df(\omega)}{d\omega}\right\}=\frac{d}{d\omega}\mathcal{H}\{f(\omega)\}\tag{7}$$
which yields
$$\frac{d}{d\omega}\mathcal{H}\{f(\omega)\}=\frac{1}{1+\omega^2}\tag{8}$$
Integrating $(8)$ finally gives
$$\mathcal{H}\{f(\omega)\}=\arctan(\omega)\tag{9}$$
Note that this result can also be obtained using Mathematica (which I don't have available). According to this thread, the command
Integrate[-1/2*Log[1 + (\[Tau]*\[Nu])^2]/(\[Nu] - \[Omega]), {\[Nu], -Infinity, Infinity},
PrincipalValue -> True, Assumptions -> \[Tau] >0 && \[Omega] >0, GenerateConditions -> False]/Pi
gives
-ArcTan[\[Tau] \[Omega]]
The negative sign comes from the different definition of the Hilbert transform, as can be seen in the denominator of the integral in the Mathematica command.
I would like to add that the causality of the inverse Fourier transform of $\log H(j\omega)$, i.e., the causality of the complex cepstrum for a minimum-phase system $H(j\omega)$ can also be understood intuitively. Note that any zero of $H(s)$ in the right half-plane causes a singularity in $\log H(s)$ in the right half-plane, and consequently, the corresponding inverse Fourier transform must be two-sided because the region of convergence is a strip including the imaginary axis. Only if there are no zeros in the right half-plane (i.e., the system is minimum-phase) will $\log H(s)$ have all its singularities in the left half-plane, and the inverse transform yields a right-sided causal function.
From $(4)$ we can see another nice property of the Hilbert transform, namely that the inverse transform is simply given by the (forward) transform with a negative sign:
$$\mathcal{H}^{-1}\{f\}=-\mathcal{H}\{f\}\tag{10}$$
That means that for every Hilbert transform pair that we find, we get another one for free:
$$\mathcal{H}\{f\}=g\Longrightarrow\mathcal{H}\{g\}=-f\tag{11}$$
Applying $(11)$ to $(9)$ we find
$$\mathcal{H}\{\arctan(\omega)\}=-f(\omega)=\frac12\log(1+\omega^2)\tag{12}$$
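As a rough numerical cross-check of $(9)$ (an addition, not part of the original answer), the principal-value integral from the question can be evaluated directly with SciPy's Cauchy-weighted quadrature; with the question's convention, the claim is that $\frac{1}{2\pi}\,\mathrm{p.v.}\int \frac{\ln(1+u^2)}{u-\omega}\,du = \arctan(\omega)$. The cutoff L is an arbitrary large value, and since the truncation error only shrinks like $\log(L)/L$, expect agreement to a few decimal places:

```python
import numpy as np
from scipy.integrate import quad

def lhs(omega, L=1e5):
    # p.v. integral of log(1 + u^2) / (u - omega) over [-L, L], scaled by 1/(2*pi).
    pv, _ = quad(lambda u: np.log1p(u * u), -L, L,
                 weight='cauchy', wvar=omega, limit=400)
    return pv / (2.0 * np.pi)

for w in [0.3, 1.0, 2.5, 7.0]:
    print(w, lhs(w), np.arctan(w))   # the two columns should nearly match
```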
• that negative sign (with Mathematica) is still bothersome to me, Matt. just screw the definition of the Hilbert transform, it's an integral. Mathematica does not toss in a spurious sign change with their definition of an indefinite integral with Cauchy p.v., – robert bristow-johnson Mar 23 '17 at 6:13
• oh, it's the reversal of order of $\nu$ and $\omega$. – robert bristow-johnson Mar 23 '17 at 6:14
• @robertbristow-johnson: Yes, just look at the denominator, it's $\nu-\Omega$ and we're integrating over $\nu$. – Matt L. Mar 23 '17 at 6:16
• i am not on-board with the bottom causality argument. just because $H(s)$ is rational, doesn't mean that $\log(H(s))$ is. even so, putting the singularities of $\log(H(s))$ all into the left half-plane doesn't provide for causality, but provides for stability. not really the same thing. – robert bristow-johnson Mar 23 '17 at 7:07
• @robertbristow-johnson: $\log H(s)$ in indeed generally non-rational; I didn't claim that it was rational. Note that stability of $\log H(s)$ is implied by assuming that its inverse Fourier transform (the cepstrum) exists. So with stability implied, the locations of the singularities do determine causality. All singularities in the left half-plane means causal, in the right half-plane means anti-causal, and on both sides means two-sided (non-causal). It's generally not true that singularities in the LHP provide for stability, that only holds for causal systems. – Matt L. Mar 23 '17 at 8:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 70, "wp-katex-eq": 0, "align": 11, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999579191207886, "perplexity": 930.0049570282864}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540543252.46/warc/CC-MAIN-20191212102302-20191212130302-00342.warc.gz"} |
http://stacks.math.columbia.edu/tag/0307 | # The Stacks Project
## Tag 0307
Lemma 10.35.11. Integral closure commutes with localization: If $A \to B$ is a ring map, and $S \subset A$ is a multiplicative subset, then the integral closure of $S^{-1}A$ in $S^{-1}B$ is $S^{-1}B'$, where $B' \subset B$ is the integral closure of $A$ in $B$.
Proof. Since localization is exact we see that $S^{-1}B' \subset S^{-1}B$. Suppose $x \in B'$ and $f \in S$. Then $x^d + \sum_{i = 1, \ldots, d} a_i x^{d - i} = 0$ in $B$ for some $a_i \in A$. Hence also $$(x/f)^d + \sum\nolimits_{i = 1, \ldots, d} a_i/f^i (x/f)^{d - i} = 0$$ in $S^{-1}B$. In this way we see that $S^{-1}B'$ is contained in the integral closure of $S^{-1}A$ in $S^{-1}B$. Conversely, suppose that $x/f \in S^{-1}B$ is integral over $S^{-1}A$. Then we have $$(x/f)^d + \sum\nolimits_{i = 1, \ldots, d} (a_i/f_i) (x/f)^{d - i} = 0$$ in $S^{-1}B$ for some $a_i \in A$ and $f_i \in S$. This means that $$(f'f_1 \ldots f_d x)^d + \sum\nolimits_{i = 1, \ldots, d} f^i(f')^if_1^i \ldots f_i^{i - 1} \ldots f_d^i a_i (f'f_1 \ldots f_dx)^{d - i} = 0$$ for a suitable $f' \in S$. Hence $f'f_1\ldots f_dx \in B'$ and thus $x/f \in S^{-1}B'$ as desired. $\square$
The code snippet corresponding to this tag is a part of the file algebra.tex and is located in lines 7243–7249 (see updates for more information).
\begin{lemma}
\label{lemma-integral-closure-localize}
Integral closure commutes with localization: If $A \to B$ is a ring
map, and $S \subset A$ is a multiplicative subset, then the integral
closure of $S^{-1}A$ in $S^{-1}B$ is $S^{-1}B'$, where $B' \subset B$
is the integral closure of $A$ in $B$.
\end{lemma}
\begin{proof}
Since localization is exact we see that $S^{-1}B' \subset S^{-1}B$.
Suppose $x \in B'$ and $f \in S$. Then
$x^d + \sum_{i = 1, \ldots, d} a_i x^{d - i} = 0$
in $B$ for some $a_i \in A$. Hence also
$$(x/f)^d + \sum\nolimits_{i = 1, \ldots, d} a_i/f^i (x/f)^{d - i} = 0$$
in $S^{-1}B$. In this way we see that $S^{-1}B'$ is contained in
the integral closure of $S^{-1}A$ in $S^{-1}B$. Conversely, suppose
that $x/f \in S^{-1}B$ is integral over $S^{-1}A$. Then we have
$$(x/f)^d + \sum\nolimits_{i = 1, \ldots, d} (a_i/f_i) (x/f)^{d - i} = 0$$
in $S^{-1}B$ for some $a_i \in A$ and $f_i \in S$. This means that
$$(f'f_1 \ldots f_d x)^d + \sum\nolimits_{i = 1, \ldots, d} f^i(f')^if_1^i \ldots f_i^{i - 1} \ldots f_d^i a_i (f'f_1 \ldots f_dx)^{d - i} = 0$$
for a suitable $f' \in S$. Hence $f'f_1\ldots f_dx \in B'$ and thus
$x/f \in S^{-1}B'$ as desired.
\end{proof}
https://ece4uplp.com/snr-in-dm-or-signal-to-quantization-noise-ratio-in-delta-modulation-system/ | # SNR in DM (or) Signal-to-Quantization Noise Ratio in Delta Modulation system
We know that the signal-to-noise ratio is defined as
$SNR=\frac{S}{N}=\frac{\text{Normalized signal power}}{\text{Normalized noise power}}$.
Let $x(t)$ be a given single-tone signal $x(t)=A_{m}\cos 2\pi f_{m}t$, where $\omega_{m}=2\pi f_{m}$.
The maximum value of the RMS signal power is $P_{rms}=\frac{A_{m}^{2}}{2R}$.
Normalized signal power $P=\frac{A_{m}^{2}}{2}$ with $R=1$.
We know that slope overload distortion can be eliminated if and only if $A_{m}\leq \frac{\Delta}{2\pi f_{m}T_{s}}$.
Let $A_{m}=\frac{\Delta}{2\pi f_{m}T_{s}}$.
By substituting this $A_{m}$ value in $P$, the power becomes $P=\frac{\left(\frac{\Delta}{2\pi f_{m}T_{s}}\right)^{2}}{2}$.
$P=\frac{\Delta^{2}}{8\pi^{2} f_{m}^{2}T_{s}^{2}}$ ---- EQN(1)
Quantization Noise Power $N$ (or) $N_{Q}$ :-
If uniform (or) linear quantization is used in a DM system, during the approximation of $x(t)$ by $x_{q}(t)$ (or) $\widehat{x}(t)$ there exists some error between these two signals. This error is called quantization error (or) noise.
$x(t) \approx x_{q}(t)$ (approximation process)
In the discrete-time domain, $e(nT_{s}) = x_{q}(nT_{s})-x(nT_{s})$.
Quantization error = quantized signal − original signal.
Now, to find the quantization noise, assume it is a uniformly distributed random variable over $(-\Delta, +\Delta)$.
The probability density function of this uniformly distributed random variable is $f_{\epsilon}(\epsilon)$:
$f_{\epsilon}(\epsilon) = \left\{\begin{matrix} \frac{1}{2\Delta}, & -\Delta \leq \epsilon \leq \Delta\\ 0, & \text{otherwise} \end{matrix}\right.$
The mean square value of this random variable (with zero mean) is
$E(\epsilon^{2}) = \int_{-\infty}^{\infty} \epsilon^{2}f_{\epsilon}(\epsilon)\,d\epsilon$.
$E(\epsilon^{2}) = \int_{-\Delta}^{\Delta}\epsilon^{2}\, \frac{1}{2\Delta}\,d\epsilon$.
Simplification gives $E(\epsilon^{2}) = \frac{\Delta^{2}}{3}$.
Mean square value = quantization noise power.
$\therefore N_{q} = \frac{\Delta^{2}}{3}$ ---- EQN(2)
The DM signal is passed through a reconstruction low-pass filter at the output of the DM receiver. The bandwidth of this filter is $f_{M}$, chosen such that $f_{M}\geq f_{m}$ and $f_{M}\ll f_{s}$,
where $f_{s}$ is the sampling frequency of the signal.
Now assume that the quantization noise power $\frac{\Delta^{2}}{3}$ is distributed uniformly over a frequency band up to $f_{s}$.
Then the noise power $N_{q}^{'}$ falling inside $f_{M}$ will be
$N_{q}^{'} = \frac{f_{M}}{f_{s}}\frac{\Delta^{2}}{3}$ ---- EQN(3)
$\therefore SQR \text{ in a DM system}=\frac{\text{Signal power}}{\text{Noise power}}$.
$\frac{S}{N} = \frac{S}{N_q^{'}} = SQR=\frac{\frac{\Delta^{2}}{8\pi^{2} f_{m}^{2}T_{s}^{2}}}{\frac{f_{M}}{f_{s}}\frac{\Delta^{2}}{3}}$.
$SQR_{DM} = \frac{3}{8\pi^{2}f_{m}^{2}T_{s}^{3}f_{M}}$, where $T_{s}=\frac{1}{f_{s}}$.
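As a quick numerical illustration (an addition, using arbitrary textbook-style values f_m = 1 kHz, f_s = 64 kHz, f_M = 4 kHz), the final formula can be evaluated directly:

```python
import numpy as np

f_m = 1e3     # tone frequency (Hz)
f_s = 64e3    # sampling frequency (Hz)
f_M = 4e3     # reconstruction filter bandwidth (Hz)
T_s = 1.0 / f_s

# SQR_DM = 3 / (8 * pi^2 * f_m^2 * T_s^3 * f_M)
sqr = 3.0 / (8.0 * np.pi**2 * f_m**2 * T_s**3 * f_M)
print(sqr, 10.0 * np.log10(sqr))   # ≈ 2.49e3, i.e. about 34 dB
```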
(1 votes, average: 5.00 out of 5) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 66, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9604388475418091, "perplexity": 1639.094004875954}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989115.2/warc/CC-MAIN-20210510064318-20210510094318-00272.warc.gz"} |
https://stats.hohoweiya.xyz/2017/09/08/Chain-Structured-Models/ | # Chain-Structured Models
##### Posted on Sep 08, 2017 (Update: Jan 30, 2019) 0 Comments
There is an important probability distribution used in many applications, the chain-structured model.
It has the following form:
$$\pi(\mathbf x) = \frac{1}{Z}\exp\Big\{-\sum_{t=1}^{d} h_t(x_{t-1}, x_t)\Big\},$$ where $\mathbf x=(x_0, x_1,\ldots, x_d)$. This type of model can be seen as having a “Markovian structure” because the conditional distribution $\pi(x_i\mid \mathbf x_{[-i]})$, where $\mathbf x_{[-i]}=(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_d)$, depends only on the two neighboring variables $x_{i-1}$ and $x_{i+1}$, that is, $\pi(x_i\mid \mathbf x_{[-i]})=\pi(x_i\mid x_{i-1},x_{i+1})$.
## Dynamic Programming
Suppose each $x_i$ in $\mathbf x$ only takes values in set $\cal S=\{s_1,\ldots, s_k\}$. Maximizing the distribution $\pi(\mathbf x)$ is equivalent to minimizing its exponent $H(\mathbf x)=\sum_{t=1}^{d} h_t(x_{t-1}, x_t)$.
We can carry out the following recursive procedure:
• Define $m_1(x) = \underset{s_i\in \cal S}{\min}h_1(s_i, x),\; \mathrm{for}\; x=s_1,\ldots, s_k$.
• Recursively compute the function $m_t(x) = \underset{s_i\in\cal S}{\min}\{m_{t-1}(s_i)+h_t(s_i,x)\},\;for\; x=s_1,\ldots,s_k$
The optimal value of $H(\mathbf x)$ is obtained by $\underset{s\in\cal S}{\min}\;m_d(s)$
Then we can find out which $\mathbf x$ gives rise to the global minimum of $H(\mathbf x)$ by tracing backward as follows:
• Let $\hat x_d$ be the minimizer of $m_d(x)$; that is, $\hat x_d = \arg\min_{s\in\cal S} m_d(s)$.
• For $t=d-1,d-2,\ldots, 1$, we let $\hat x_t = \arg\min_{s\in\cal S}\{m_t(s)+h_{t+1}(s,\hat x_{t+1})\}$.
Configuration $\hat{\mathbf x}=(\hat x_1,\ldots,\hat x_d)$ obtained by this method is the minimizer of $H(\mathbf x)$.
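Here is a compact sketch of this recursion (my own illustration; the cost matrices are random and purely for demonstration):

```python
import numpy as np

def minimize_chain(h):
    """Minimize H(x) = sum_t h_t(x_{t-1}, x_t); h[t-1][s, x] holds h_t(s, x)."""
    m = h[0].min(axis=0)                      # m_1(x) = min_s h_1(s, x)
    back = []
    for ht in h[1:]:                          # t = 2, ..., d
        tot = m[:, None] + ht                 # m_{t-1}(s) + h_t(s, x)
        back.append(tot.argmin(axis=0))
        m = tot.min(axis=0)                   # m_t(x)
    x = [int(m.argmin())]                     # hat x_d
    for b in reversed(back):                  # trace back hat x_{d-1}, ..., hat x_1
        x.append(int(b[x[-1]]))
    return list(reversed(x)), float(m.min())

rng = np.random.default_rng(1)
h = [rng.normal(size=(4, 4)) for _ in range(6)]   # k = 4 states, d = 6 terms
print(minimize_chain(h))
```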
## Exact Simulation
In order to simulate, we can decompose the overall summation as $Z=\sum_{\mathbf x}\exp\{-H(\mathbf x)\}=\sum_{x_d\in\cal S}\cdots\sum_{x_1\in\cal S}\sum_{x_0\in\cal S}\prod_{t=1}^{d}\exp\{-h_t(x_{t-1},x_t)\}$, which can be evaluated by summing out $x_0, x_1,\ldots$ one at a time.
Then we can adopt the following recursions.
• Define $V_1(x)=\sum_{x_0\in \cal S}\exp(-h_1(x_0, x))$
• Compute recursively for $t=2,\ldots,d$: $V_t(x)=\sum_{x_{t-1}\in\cal S} V_{t-1}(x_{t-1})\exp(-h_t(x_{t-1},x))$
Then, the partition function is $Z=\sum_{x\in\cal S}V_d(x)$ and the marginal distribution of $x_d$ is $\pi(x_d)=V_d(x_d)/Z$.
To simulate $\mathbf x$ from $\pi$, we can do the following:
1. Draw $x_d$ from $\cal S$ with probability $V_d(x_d)/Z$
2. For $t=d-1,d-2,\ldots, 1$, we draw $x_t$ from the distribution proportional to $V_t(x_t)\exp(-h_{t+1}(x_t,x_{t+1}))$
The random sample $\mathbf x=(x_1,\ldots,x_d)$ obtained in this way follows the distribution $\pi(\mathbf x)$.
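A minimal sketch of these two steps (an illustration with random $h_t$ matrices, independent of the linked code):

```python
import numpy as np

def sample_chain(h, rng):
    """Draw (x_1,...,x_d) from pi(x) proportional to exp(-sum_t h_t(x_{t-1}, x_t)); x_0 is summed out."""
    k = h[0].shape[0]
    V = [np.exp(-h[0]).sum(axis=0)]                # V_1(x) = sum_{x_0} exp(-h_1(x_0, x))
    for ht in h[1:]:
        V.append(V[-1] @ np.exp(-ht))              # V_t(x) = sum_s V_{t-1}(s) exp(-h_t(s, x))
    Z = V[-1].sum()                                # partition function
    x = [rng.choice(k, p=V[-1] / Z)]               # step 1: draw x_d with probability V_d(x_d)/Z
    for t in range(len(h) - 1, 0, -1):             # step 2: x_{d-1}, ..., x_1
        w = V[t - 1] * np.exp(-h[t][:, x[-1]])     # weight: V_t(x_t) exp(-h_{t+1}(x_t, x_{t+1}))
        x.append(rng.choice(k, p=w / w.sum()))
    return list(reversed(x))

rng = np.random.default_rng(0)
h = [rng.normal(size=(3, 3)) for _ in range(5)]    # k = 3 states, d = 5 terms
print(sample_chain(h, rng))
```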
The simulation code and reproduced results can be found here.
## References
Liu, Jun S. Monte Carlo strategies in scientific computing. Springer Science & Business Media, 2008.
Published in categories Note | {"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9917782545089722, "perplexity": 501.42700300994704}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257767.49/warc/CC-MAIN-20190524204559-20190524230559-00531.warc.gz"} |
https://www.gradesaver.com/textbooks/math/algebra/algebra-1-common-core-15th-edition/chapter-1-foundations-for-algebra-1-2-order-of-operations-and-evaluating-expressions-practice-and-problem-solving-exercises-page-14/47 | ## Algebra 1: Common Core (15th Edition)
We first plug 2 in for m and 6 in for n. The exponent tells us the number of times we multiply the number by itself. According to the order of operations, we simplify inside of parentheses, then we simplify powers, then we multiply and divide, and finally, we add and subtract. When we do this, we find: $3 \times 2^{2} - 6 = 3 \times 4 -6 = 12-6 =6$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9783966541290283, "perplexity": 412.68676367384194}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738950.61/warc/CC-MAIN-20200813014639-20200813044639-00286.warc.gz"} |
http://mathhelpforum.com/calculus/137351-finding-average-velocity.html | # Math Help - Finding the average velocity
1. ## Finding the average velocity
Describe the difference in finding the average velocity over an interval given the position function, s(t), and the velocity function, v(t).
I'm having trouble understanding this problem.
2. Originally Posted by iyppxstahh
Describe the difference in finding the average velocity over an interval given the position function, s(t), and the velocity function, v(t).
I'm having trouble understanding this problem.
I think it's like this:
Let a = initial time, b = final time
average velocity is V = (1/(b-a))*Integral(v(t)dt)
where the limits of the integral are a to b.
When you're given s(t), your job is made easier because the above equation turns into
V = (1/(b-a))*Integral((ds/dt)dt) = (1/(b-a))(s(b)-s(a))
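Just to sanity-check the equivalence symbolically (a quick sketch; the position function here is an arbitrary example, not from the problem):

```python
import sympy as sp

t, a, b = sp.symbols('t a b')
s = t**3 - 2*t                                         # example position function (illustrative)
v = sp.diff(s, t)                                      # velocity function

avg_from_v = sp.integrate(v, (t, a, b)) / (b - a)      # average velocity from v(t)
avg_from_s = (s.subs(t, b) - s.subs(t, a)) / (b - a)   # average velocity from s(t) directly
print(sp.simplify(avg_from_v - avg_from_s))            # 0, so the two agree
```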
I hope everything is clear.. I've been finding LaTeX a little cumbersome lately so I sometimes use it, sometimes not.
Someone please correct me if this is wrong as I have not worked with these for a while.
3. Hmm....I'll have to look closer into it. Thanks again! | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9783852696418762, "perplexity": 1240.8119981058276}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037662910.19/warc/CC-MAIN-20140930004102-00234-ip-10-234-18-248.ec2.internal.warc.gz"} |
http://www.math.iisc.ac.in/seminars/2019/2019-01-30-niranjan-balachandran-2.html | #### Algebra & Combinatorics Seminar
##### Venue: LH-1, Mathematics Department
Given a hypergraph $\mathcal{H}$ with vertex set $[n]:=\{1,\ldots,n\}$, a bisecting family is a family $\mathcal{A}\subseteq\mathcal{P}([n])$ such that for every $B\in\mathcal{H}$, there exists $A\in\mathcal{A}$ with the property $|A\cap B|-|A\cap\overline{B}|\in\{-1,0,1\}$. Similarly, for a family of bicolorings $\mathcal{B}\subseteq \{-1,1\}^{[n]}$ of $[n]$, a family $\mathcal{A}\subseteq\mathcal{P}([n])$ is called a System of Unbiased Representatives for $\mathcal{B}$ if for every $b\in\mathcal{B}$ there exists $A\in\mathcal{A}$ such that $\sum_{x\in A} b(x) =0$.
The problem of optimal families of bisections and bicolorings for hypergraphs originates from what is referred to as the problem of Balancing Sets of vectors, and has been the source for a few interesting extremal problems in combinatorial set theory for about 3 decades now. We shall consider certain natural extremal functions that arise from the study of bisections and bicolorings and bounds for these extremal functions. Many of the proofs involve the use of polynomial methods (not the Polynomial Method, though!).
(Joint work with Rogers Mathew, Tapas Mishra, and Sudebkumar Prasant Pal.)
Last updated: 22 Feb 2019 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8425575494766235, "perplexity": 624.1673371388854}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550249468313.97/warc/CC-MAIN-20190223041554-20190223063554-00554.warc.gz"} |
http://aas.org/archives/BAAS/v30n4/aas193/487.htm | AAS Meeting #193 - Austin, Texas, January 1999
Session 92. Clusters of Galaxies
Oral, Friday, January 8, 1999, 2:00-3:30pm, Room 9 (A and B)
## [92.04] Gravitationally Lensed Radio Sources Towards Galaxy Clusters
A.R. Cooray, J.E. Carlstrom (Univ. of Chicago)
Similar to luminous optical arcs, galaxy clusters are expected to gravitationally lens background sources at radio wavelengths. However, due to the smaller surface density of radio sources, compared to optical galaxies, such lensed events are rare. We present the expected number of strongly lensed radio sources due to foreground cluster potentials. At the flux density limit of the FIRST survey (~ 1 mJy), only a few sources are strongly lensed. At the \muJy level, however, the surface density of radio sources increases. For a flat cosmology with \Omegam=0.3 and \Omega\Lambda =0.7, we predict ~ 1500 lensed radio sources with flux densities greater than 10 \muJy at 1.4 GHz, and with amplifications due to lensing greater than 2. We discuss the possibility of detecting lensed \muJy radio sources towards clusters, and will present initial results from deep VLA observations of a sample of well known lensing clusters. Aided by amplification due to gravitational lensing, searches for lensed \muJy radio sources towards clusters are likely to recover star-forming galaxies at redshifts of 1 to 3. Since radio emission is unaffected by dust, a systematic catalog of such sources can be used to evaluate the star formation history of the universe. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9608522057533264, "perplexity": 3759.137172007177}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657139314.4/warc/CC-MAIN-20140914011219-00049-ip-10-234-18-248.ec2.internal.warc.gz"} |
https://zbmath.org/?q=an:0596.32016 | ×
# zbMATH — the first resource for mathematics
Cohomology of twisted holomorphic forms on Grassmann manifolds and quadric hypersurfaces. (English) Zbl 0596.32016
The author computes the cohomology of holomorphic q-forms $$\Omega^ q$$ twisted by powers of a very ample line bundle L, $$\Omega^ q(k):=\Omega^ q\cdot L^ k$$, on Grassmann manifolds and non-singular quadric hypersurfaces, generalizing certain theorems of R. Bott [Ann. Math. 66, 203-248 (1957; Zbl 0094.357)] and J. Le Potier [Math. Ann. 226, 257-270 (1977; Zbl 0356.32018)]. A complete description is given when X is a quadric hypersurface: All of the groups $$H^ p(X,\Omega^ q(k))$$ (with one obvious exception) are irreducible G-modules. If $$1\leq k\leq q$$, then $$H^ p(X,\Omega^ q(k))\neq 0$$ iff $$k=2q-n$$ and $$p=n-q,$$ in which case it is one dimensional. If $$k>q$$ then $$\Omega^ q(k)$$ is spanned and has a highest weight of the form $$(k-q- 1)\mu_ 1+\mu_ i.$$ When X is a Grassmann manifold, an explicit algorithm is given for computing which of the groups $$H^ p(X,\Omega^ q(k))$$ vanish. Several general statements are possible:
Let X be the Grassmann manifold of s-planes in $${\mathbb{C}}^{m+1}$$, $$s\leq m-s+1$$, and let $$n=\dim X$$. Then: $$H^ p(X,\Omega^ q(1))=0$$ if $$p+q>0$$; $$H^ p(X,\Omega^ q(2))=0$$ except for $$(p,q)=(a(a- 1)/2),a(a+1)/2),a\leq s;$$ $$H^ p(X,\Omega^ q(k))=0$$ if sp$$\geq (s-1)q$$ or $$p>n-q$$ or $$q>n-s$$; $$\Omega^ q(k)$$ is spanned if $$k>q$$ or $$k>m.$$
Similar statements for the other compact Hermitian symmetric spaces will be presented in a sequel to this paper.
##### MSC:
32L20 Vanishing theorems 32M15 Hermitian symmetric spaces, bounded symmetric domains, Jordan algebras (complex-analytic aspects) 22E46 Semisimple Lie groups and their representations
##### Citations:
Zbl 0094.357; Zbl 0356.32018
##### References:
[1] Bott, R.: Homogeneous vector bundles. Ann Math.66, 203-248 (1957) · Zbl 0094.35701
[2] Demazure, M.: A very simple proof of Bott’s theorem. Invent math.33, 271-272 (1976) · Zbl 0383.14017
[3] Helgason, S.: Differential geometry and symmetric spaces. New York, San Francisco, London: Academic Press 1978 · Zbl 0451.53038
[4] Humphreys, J.: Introduction to lie algebras and representation theory. Berlin, Heidelberg, New York: Springer 1972 · Zbl 0254.17004
[5] Humphreys, J.: Linear algebraic groups. Berlin, Heidelberg, New York: Springer 1975 · Zbl 0325.20039
[6] Kostant, B.: Lie algebra cohomology and the generalized Borel Weil Theorem. Ann. Math.74, 329-387 (1961) · Zbl 0134.03501
[7] Le Potier, J.: Annulation de la cohomologie à valeurs dans un fibré vectoriel holomorphe positif de rang quelconque. Math. Ann.218, 35-53 (1975) · Zbl 0313.32037
[8] Le Potier, J.: Cohomologie de la Grassmannienne à valeurs dans les puissances extérieures et symmetriques du fibré universel. Math. Ann.266, 257-270 (1977) · Zbl 0356.32018
[9] Serre, J.P.: Représentations linéaires et espaces homogènes Kählerians des groupes de Lie compacts (d’après Borel et Weil). Sem. Bourbaki, exposé 100, May 1954. New York: Benjamin 1966
[10] Shiffman, B., Sommese, A.: Vanishing theorems on complex manifolds. Boston, Basel, Stuttgart: Birkhäuser 1985 · Zbl 0578.32055
[11] Snow, D.: On the ampleness of homogeneous vector bundles. Trans. Am. Math. Soc.294, 585-594 (1986) · Zbl 0588.32038
[12] Snow, D.: Vanishing theorems on compact hermitian symmetric spaces. To appear · Zbl 0631.32025
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8175606727600098, "perplexity": 2881.0657573164617}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363510.40/warc/CC-MAIN-20211208114112-20211208144112-00101.warc.gz"} |
https://www.math24.net/alternating-series | # Alternating Series
A series in which successive terms have opposite signs is called an alternating series.
### The Alternating Series Test (Leibniz’s Theorem)
This test is the sufficient convergence test. It’s also known as the Leibniz’s Theorem for alternating series.
Let $$\left\{ {{a_n}} \right\}$$ be a sequence of positive numbers such that
1. $${a_{n + 1}} \lt {a_n}$$ for all $$n$$;
2. $$\lim\limits_{n \to \infty } {a_n} = 0.$$
Then the alternating series $$\sum\limits_{n = 1}^\infty {{{\left( { – 1} \right)}^n}{a_n}}$$ and $$\sum\limits_{n = 1}^\infty {{{\left( { – 1} \right)}^{n – 1}}{a_n}}$$ both converge.
### Absolute and Conditional Convergence
A series $$\sum\limits_{n = 1}^\infty {{a_n}}$$ is absolutely convergent, if the series $$\sum\limits_{n = 1}^\infty {\left| {{a_n}} \right|}$$ is convergent.
If the series $$\sum\limits_{n = 1}^\infty {{a_n}}$$ is absolutely convergent then it is (just) convergent. The converse of this statement is false.
A series $$\sum\limits_{n = 1}^\infty {{a_n}}$$ is called conditionally convergent, if the series is convergent but is not absolutely convergent.
## Solved Problems
### Example 1
Use the alternating series test to determine the convergence of the series $$\sum\limits_{n = 1}^\infty {{{\left( { – 1} \right)}^n}\large\frac{{{{\sin }^2}n}}{n}\normalsize}.$$
### Example 2
Determine whether the series $$\sum\limits_{n = 1}^\infty {{{\left( { – 1} \right)}^n}\large\frac{{2n + 1}}{{3n + 2}}\normalsize}$$ is absolutely convergent, conditionally convergent, or divergent.
### Example 3
Determine whether $$\sum\limits_{n = 1}^\infty {\large\frac{{{{\left( { – 1} \right)}^{n + 1}}}}{{n!}}\normalsize}$$ is absolutely convergent, conditionally convergent, or divergent.
### Example 4
Determine whether the alternating series $$\sum\limits_{n = 2}^\infty {\large\frac{{{{\left( { – 1} \right)}^{n + 1}}\sqrt n }}{{\ln n}}\normalsize}$$ is absolutely convergent, conditionally convergent, or divergent.
### Example 5
Determine the $$n$$th term and test for convergence the series
$\frac{2}{3!} - \frac{2^2}{5!} + \frac{2^3}{7!} - \frac{2^4}{9!} + \ldots$
### Example 6
Investigate whether the series $$\sum\limits_{n = 1}^\infty {\large\frac{{{{\left( { – 1} \right)}^{n + 1}}}}{{5n – 1}}\normalsize}$$ is absolutely convergent, conditionally convergent, or divergent.
### Example 7
Determine whether the alternating series $$\sum\limits_{n = 1}^\infty {\large\frac{{{{\left( { – 1} \right)}^n}}}{{\sqrt {n\left( {n + 1} \right)} }}\normalsize}$$ is absolutely convergent, conditionally convergent, or divergent.
### Example 1.
Use the alternating series test to determine the convergence of the series $$\sum\limits_{n = 1}^\infty {{{\left( { – 1} \right)}^n}\large\frac{{{{\sin }^2}n}}{n}\normalsize}.$$
Solution.
By the alternating series test we find that
$\lim\limits_{n \to \infty } \left| {a_n} \right| = \lim\limits_{n \to \infty } \left| {\left( { - 1} \right)^n \frac{\sin^2 n}{n}} \right| = \lim\limits_{n \to \infty } \frac{\sin^2 n}{n} = 0,$
since $${\sin ^2}n \le 1.$$ Hence, the given series converges.
### Example 2.
Determine whether the series $$\sum\limits_{n = 1}^\infty {{{\left( { – 1} \right)}^n}\large\frac{{2n + 1}}{{3n + 2}}\normalsize}$$ is absolutely convergent, conditionally convergent, or divergent.
Solution.
We try to apply the alternating series test here:
${\lim\limits_{n \to \infty } \left| {{a_n}} \right| } = {\lim\limits_{n \to \infty } \frac{{2n + 1}}{{3n + 2}} } = {\lim\limits_{n \to \infty } \frac{{\frac{{2n + 1}}{n}}}{{\frac{{3n + 2}}{n}}} } = {\lim\limits_{n \to \infty } \frac{{2 + \frac{1}{n}}}{{3 + \frac{2}{n}}} }={ \frac{2}{3} \ne 0.}$
Since the $$n$$th term does not approach $$0$$ as $$n \to \infty,$$ the given series diverges.
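A quick numerical illustration of the contrast (added for reference, not part of the original solution):

```python
import numpy as np

n = np.arange(1, 100001)
terms = (-1.0)**n * (2*n + 1) / (3*n + 2)     # Example 2: |a_n| -> 2/3, not 0
print(np.cumsum(terms)[-4:])                  # partial sums keep jumping by about 2/3

alt_harmonic = (-1.0)**(n + 1) / n            # a convergent alternating series, for contrast
print(np.cumsum(alt_harmonic)[-1], np.log(2)) # this partial sum is close to ln 2
```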
Page 1
Problems 1-2
Page 2
Problems 3-7 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9697224497795105, "perplexity": 369.4638337445309}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039603582.93/warc/CC-MAIN-20210422100106-20210422130106-00188.warc.gz"} |
https://dsp.stackexchange.com/questions/57955/equation-of-wave-in-linear-frequency-modulation?noredirect=1 | # Equation of wave in Linear frequency modulation?
We know that we have the following equation for a wave.
$$g(t)=A\cos(\omega t+\theta_0)=A\cos(2\pi ft+\theta_0)$$ The equation of frequency with respect to time will be:
$$f(t)=\frac{Bt}{\tau}+f_c-\frac{B}{2}$$ Then:
\begin{align} g(t)&=A\cos(2\pi (\frac{Bt}{\tau}+f_c-\frac{B}{2})t+\theta_0)\\ &=A\cos(2\pi(f_c-B/2)t+\mathbf{2\pi}(B/\tau)t^2+\theta_0) \end{align}
Then why is that bold 2 omitted in this answer? $$f(t)=A\cos(\theta(t))=A\cos(2\pi(f_c-B/2)t+\pi(B/\tau)t^2+\theta_0)$$
If you have a signal
$$g(t)=\cos(2\pi \hat{f}(t)t)\tag{1}$$
then the function $$\hat{f}(t)$$ is not the instantaneous frequency of $$g(t)$$ (unless $$\hat{f}(t)$$ is constant).
If you want an instantaneous frequency $$f(t)$$, then the equation
$$\frac{\phi'(t)}{2\pi}=f(t)\tag{2}$$
must be satisfied, where $$\phi(t)$$ is the phase of the signal $$g(t)$$. So in order to obtain the phase $$\phi(t)$$, you have to integrate the desired instantaneous frequency $$f(t)$$. For
$$f(t)=\frac{Bt}{\tau}+f_c-\frac{B}{2}\tag{3}$$
you get
$$\frac{\phi(t)}{2\pi}=\frac{B}{2\tau}t^2+(f_c-\frac{B}{2})t+\frac{\theta_0}{2\pi}\tag{4}$$
So the signal with the desired instantaneous frequency is
$$g(t)=\cos(\phi(t))=\cos\left[\pi(Bt^2/\tau+(2f_c-B)t)+\theta_0\right]\tag{5}$$
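A quick numerical check of $(5)$ (the parameter values below are arbitrary and only for illustration):

```python
import numpy as np

fc, B, tau = 1000.0, 400.0, 0.5                   # illustrative chirp parameters
t = np.linspace(0, tau, 100001)

phi = 2*np.pi*((B/(2*tau))*t**2 + (fc - B/2)*t)   # phase from Eq. (4), theta_0 = 0
g = np.cos(phi)                                   # the chirp signal itself

f_inst = np.gradient(phi, t) / (2*np.pi)          # phi'(t) / (2*pi), computed numerically
f_target = B*t/tau + fc - B/2                     # desired instantaneous frequency, Eq. (3)
print(np.max(np.abs(f_inst - f_target)))          # ~0: linear sweep from fc-B/2 to fc+B/2
```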
Also take a look at this related question and its answer. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 17, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9851698875427246, "perplexity": 431.2556983162344}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304872.21/warc/CC-MAIN-20220125190255-20220125220255-00412.warc.gz"} |
https://www.physicsforums.com/threads/falling-plank-problem.143560/ | # Falling plank problem
1. Nov 13, 2006
### ViolinIsLife
Could someone help me with this problem? I appreciate it!
A uniform plank of mass M and length 2L is resting on a frictionless floor and leaning against a frictionless vertical wall. It is held steady by a massless string connecting the lower end of the plank to the base of the wall. The angle between the floor and the plank is theta. Calculate the acceleration of the upper end of the plank immediately after the string is cut, and the angle theta at which the upper end of the plank first separates from the wall.
My prof. said that the plank would separate from the wall at about 2/3 of the original height where the plank was leaning against the wall.
Thank you very much in advance!!!
2. Nov 13, 2006
### OlderDan
Please post your attempt at solving this problem. Draw a free body diagram with the forces acting and apply Newton's second law to the motion of the center of mass and its rotational analog to the angular motion.
3. Nov 13, 2006
### ViolinIsLife
I have attached a diagram for this problem. Sorry, don't know why the jpg file doesn't show the forces that I have drawn with MS Paint on the diagram.
#### Attached Files: clip_image002.jpg
Last edited: Nov 13, 2006
4. Nov 13, 2006
### OlderDan
We cannot yet see the diagram, and probably do not need it to see if you are on the right track. Do you have the equations related to the diagram?
Similar Discussions: Falling plank problem | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8767200708389282, "perplexity": 579.2529382823348}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218188914.50/warc/CC-MAIN-20170322212948-00015-ip-10-233-31-227.ec2.internal.warc.gz"} |
https://www.openaircraft.com/ccx-doc/cgx/node82.html | ## cut
'cut' <pnt|nod> [<pnt|nod> <pnt|nod>]
This keyword is used to define a cutting plane through elements to visualize internal results. The plane is either defined by three nodes or points, or, in case a dataset-entity of a vector was already selected, by just one node or point. The cutting plane is then determined by the direction of the vector (displacements, worstPS). The menu option ”Show Elements With Light” or the commands ”ucut”, ”view surf” or”view volu” will display the whole model again and will delete the plane. This command is intended for batch-mode. See ”qcut” for the cursor controlled command. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8063597083091736, "perplexity": 3166.927509566016}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663011588.83/warc/CC-MAIN-20220528000300-20220528030300-00577.warc.gz"} |
https://en.wikibooks.org/wiki/Econometric_Theory/Probability_Density_Function_(PDF) | Econometric Theory/Probability Density Function (PDF)
Probability Mass Function of a Discrete Random Variable
A probability mass function f(x) (PMF) of X is a function that determines the probability in terms of the input variable x, which is a discrete random variable (rv).
A pmf has to satisfy the following properties:
• ${\displaystyle f(x)={\begin{cases}P(X=x_{i})&{\mbox{for }}i=1,2,\cdots ,n\\0&{\mbox{for }}x\neq x_{i}\end{cases}}}$
• The sum of PMF over all values of x is one:
${\displaystyle \sum _{i}f(x_{i})=1.}$
Probability Density Function of a Continuous Random Variable
The continuous PDF requires that the input variable x is now a continuous rv. The following conditions must be satisfied:
• All values are greater than zero.
${\displaystyle f(x)\geq 0}$
• The total area under the PDF is one
${\displaystyle \int _{-\infty }^{\infty }f(x)\,dx=1}$
• The area under the interval [a, b] is the total probability within this range
${\displaystyle \int _{a}^{b}f(x)\,dx=P(a\leq x\leq b)}$
Joint Probability Density Functions
Joint pdfs are ones that are functions of two or more random variables. The function
{\displaystyle {\begin{aligned}f(X\in A,Y\in B)&=\int _{A}\,\int _{B}f(x,y)\,dx\,dy\\&=0,{\mbox{if }}x\notin A{\mbox{ and }}y\notin B\\\end{aligned}}}
is the continuous joint probability density function. It gives the joint probability for x and y.
The function
{\displaystyle {\begin{aligned}p(X\in A,Y\in B)&=\sum _{x\in A}\sum _{y\in B}p(x,y)\\&=0,{\mbox{if }}x\notin A{\mbox{ and }}y\notin B\\\end{aligned}}}
is similarly the discrete joint probability density function
Marginal Probability Density Function
The marginal PDFs are derived from the joint PDFs. If the joint pdf is integrated over the distribution of the X variable, then one obtains the marginal PDF of y, ${\displaystyle f(y)}$. The continuous marginal probability distribution functions are:
${\displaystyle f(x)=\int _{y\in B}f(x,y)\,dy}$
${\displaystyle f(y)=\int _{x\in A}f(x,y)\,dx}$
and the discrete marginal probability distribution functions are
${\displaystyle p(x)=\sum _{y\in B}p(x,y)}$
${\displaystyle p(y)=\sum _{x\in A}p(x,y)}$
Conditional Probability Density Function
${\displaystyle f(x\mid y)=P(X=x,Y=y)={\frac {f(x,y)}{f(y)}}}$
${\displaystyle f(y\mid x)=P(Y=y,X=x)={\frac {f(x,y)}{f(x)}}}$
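A small numerical sketch of these definitions (the joint table below is made up purely for illustration):

```python
import numpy as np

# Hypothetical joint pmf p(x, y) on a 2 x 3 grid
p_xy = np.array([[0.10, 0.20, 0.10],
                 [0.25, 0.15, 0.20]])
assert np.isclose(p_xy.sum(), 1.0)        # total probability is one

p_x = p_xy.sum(axis=1)                    # marginal p(x): sum over y
p_y = p_xy.sum(axis=0)                    # marginal p(y): sum over x
p_x_given_y0 = p_xy[:, 0] / p_y[0]        # conditional p(x | y = y_0) = p(x, y_0) / p(y_0)
print(p_x, p_y, p_x_given_y0)
```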
Statistical Independence
Two random variables X and Y are statistically independent if and only if the joint density factors into the product of the marginals, ${\displaystyle f(x,y)=f(x)\,f(y)}$ for all x and y (and likewise ${\displaystyle p(x,y)=p(x)\,p(y)}$ in the discrete case).
• Gujarati, D.N. (2003). Basic Econometrics, International Edition - 4th ed.. McGraw-Hill Higher Education. pp. 870-877. ISBN 0-07-112342-3. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 14, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9847394824028015, "perplexity": 1817.8830575122165}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171646.15/warc/CC-MAIN-20170219104611-00195-ip-10-171-10-108.ec2.internal.warc.gz"} |
http://siliconintelligence.com/people/binu/perception/node26.html | SiliconIntelligence
# 4.5 Architectural Implications
A basic understanding of the acoustic and language models is necessary to understand the architectural implications and scaling characteristics of speech recognition. The lexical tree is a complex data structure that results in considerable pointer chasing at run time. The nodes that will be accessed depend very much on the sentences being spoken. The size of the tree depends on the vocabulary size. However there is scope for architectural optimization. The opportunity stems from the fact that acoustic vectors are evaluated successively and on evaluating an HMM for the current vector, if the HMM generates a probability above a certain threshold, the successors of the HMM will be evaluated in the next time step. Thus there is always a list of currently active HMMs/lextree nodes and a list of nodes that will be active next. Evaluating each HMM takes a deterministic number of operations and thus a fixed number of clock cycles. This information can be used to prefetch nodes ahead of when they are evaluated.
Given the fact that the number of triphones and words in a language are relatively stable, it might appear that the workload will never expand. In reality this is not the case, due to the way the probability density function is computed. In the past, speech recognizers used subvector quantized models, which are easy to compute. These methods use a code book to store reference acoustic vectors. Acoustic vectors obtained from the front end are compared against the code book to find the index of the closest match. The probability density function then reduces to a table lookup indexed by that code-book entry. While this is computationally efficient, the discretization of observation probability leads to excessive quantization error and thereby poor recognition accuracy.
To obtain better accuracy, modern systems use a continuous probability density function, and the common choice is a multivariate mixture Gaussian, in which case the computation may be represented as:
b(x) = Σ(log, i=1..N) [ K_i − (1/2) Σ(j=1..M) (x_j − μ_{ij})² / σ²_{ij} ]        (4.4)
Here, μ_{ij} is the mean and σ²_{ij} the variance of the j-th component of the i-th Gaussian in the mixture, K_i is a constant folding in the mixture weight and the Gaussian normalization, N is the number of mixture components, and M is the dimension of the acoustic vector x. The Hub-4 speech database used for this research was obtained from CMU, and they chose N and M to be 8 and 39 respectively. Note that the outer Σ denotes an addition in the logarithmic domain. Normally the inner term involves exponentiation to compute a weighted Mahalanobis-like distance, but it is reduced to simple arithmetic operators by keeping all the parameters in the logarithmic domain [91,111]. Therefore the outer summation needs to be done in the logarithmic domain. This may be implemented using table lookup based extrapolation. This strategy is troublesome if the processor's L1 D-cache is not large enough to contain the lookup table.
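The log-domain evaluation described above can be sketched as follows (an illustration of the arithmetic only, not code from Sphinx; the parameters are random):

```python
import numpy as np

def log_gmm_score(x, log_w, mu, log_var):
    """Log-likelihood of one acoustic vector under a diagonal-covariance mixture Gaussian,
    with the outer mixture sum done as an addition in the log domain."""
    var = np.exp(log_var)
    K = log_w - 0.5 * (mu.shape[1] * np.log(2*np.pi) + log_var.sum(axis=1))
    log_dens = K - 0.5 * (((x - mu)**2) / var).sum(axis=1)   # Mahalanobis-like distance term
    return np.logaddexp.reduce(log_dens)                     # log-domain addition over mixtures

rng = np.random.default_rng(0)
N, M = 8, 39                                # mixtures per density, acoustic vector dimension
log_w = np.log(np.full(N, 1.0 / N))
mu, log_var = rng.normal(size=(N, M)), np.zeros((N, M))
print(log_gmm_score(rng.normal(size=M), log_w, mu, log_var))
```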
If each HMM state uses a separate probability density function, then the system is said to be fully continuous. Thus the peak workload for an English speech recognizer would correspond to the evaluation of about 60,000 probability density functions and HMMs, as well as an associated lextree traversal that is proportional to the number of words in the vocabulary. Fully continuous models are not popular for two reasons:
1. Their computational complexity makes them orders of magnitude slower than real time on current processors.
2. Their parameter estimation problem and sparse training sets lead to low recognition accuracy.
The parameter estimation problem is particularly difficult. For N = 8 and M = 39, Equation 4.4 needs 632 parameters per density function for the values of μ, σ and w. With three state densities per triphone, a total of 60,000 triphones adds up to 113.7 million parameters. The training data is often insufficient to estimate that many parameters, so the use of continuous models leads to increased word error rate. The usual solution is to cluster together HMM states and share a probability density function among several states. Such clustering methods are an area of active research. A speech recognition system that uses clustered probability density functions is called a semicontinuous or tied-mixture system. Almost all advanced large vocabulary speech recognizers currently fall in this category. The Hub-4 speech model used to evaluate Sphinx 3.2 contains approximately 6000 probability density functions representing an average of 30 HMM states sharing a single function. This ratio could change when a model is trained on a larger data set leading to proportionately increased compute complexity. Another possibility is an increase in N, the number of mixtures per function, which will again proportionately increase the compute cycles. A third possibility is increasing the size of the context from triphones to quinphones (five phones, one current phone and two left and two right neighbors). The use of quinphones will lead to an increase in the number of probability density functions that need to be evaluated. This will be further multiplied by the number of quinphones in the language vs the number of triphones.
Though traditional speech recognizers couple the evaluation of HMMs and Gaussians tightly, in the interest of extracting greater levels of thread parallelism, it is possible to decouple HMM and Gaussian evaluation, an approach that will be further investigated in Chapter 5.
Binu Mathew | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8237206339836121, "perplexity": 623.5418783695046}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578530040.33/warc/CC-MAIN-20190420200802-20190420222802-00377.warc.gz"} |
http://www.tdt.edu.vn/index.php/cac-bai-bao-cong-b-qu-c-t/257-cac-bai-bao-cong-b-nam-2014/1322-nguyen-diep-phan-van-tri-existence-and-uniqueness-of-solutions-to-set-valued-control-integro-differential-equations-institute-of-advanced-scientific-research-vol-6-issue-2-2014-pp-58-81-journal-paper | Các bài báo công bố quốc tế
Nguyen Diep, Phan Van Tri; Existence and Uniqueness of Solutions to Set-valued Control Integro-differential Equations; Institute of Advanced Scientific Research, Vol. 6, Issue 2 (2014), pp. 58-81 (Journal Paper).
Abstract
In the paper, we present existence, uniqueness and comparison of solutions of set control integro-differential equations (SCIDEs) of the form $D_H X\left( t \right) = F\left( t,X\left( t \right),U(t),\int\limits_{t_0 }^t G\left( t,s,X\left( s \right),U(s) \right)ds \right)$ under some suitable conditions, where $U(t) \in K_{CC}(\mathbb R^d)$ ranges over different controls, including admissible controls, feedback controls and contraction controls.
Keywords. Set-valued differential equations; Set-valued integro-differential equations. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9990050196647644, "perplexity": 4590.517013553671}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424623.68/warc/CC-MAIN-20170723222438-20170724002438-00520.warc.gz"} |
http://www.zora.uzh.ch/36174/ | Measuring the time stability of prospect theory preferences
Zeisberger, Stefan; Vrecko, Dennis; Langer, Thomas (2012). Measuring the time stability of prospect theory preferences. Theory and Decision, 72(3):359-386.
Abstract
Prospect Theory is widely regarded as the most promising descriptive model for decision making under uncertainty. Various tests have corroborated the validity of the characteristic fourfold pattern of risk attitudes implied by the combination of probability weighting and value transformation. But is it also safe to assume stable Prospect Theory preferences at the individual level? This is not only an empirical but also a conceptual question. Measuring the stability of preferences in a multi-parameter decision model such as Prospect Theory is far more complex than evaluating single-parameter models such as Expected Utility Theory under the assumption of constant relative risk aversion. There exist considerable interdependencies among parameters such that allegedly diverging parameter combinations could in fact produce very similar preference structures. In this paper, we provide a theoretic framework for measuring the (temporal) stability of Prospect Theory parameters. To illustrate our methodology, we further apply our approach to 86 subjects for whom we elicit Prospect Theory parameters twice, with a time lag of one month. While documenting remarkable stability of parameter estimates at the aggregate level, we find that a third of the subjects show significant instability across sessions.
Item Type: Journal Article, refereed, further contribution 03 Faculty of Economics > Department of Banking and Finance 330 Economics English 2012 15 Nov 2010 17:06 05 Apr 2016 14:16 Springer 0040-5833 The original publication is available at www.springerlink.com https://doi.org/10.1007/s11238-010-9234-3
Permanent URL: https://doi.org/10.5167/uzh-36174 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8749439716339111, "perplexity": 1537.0557727419869}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698540915.89/warc/CC-MAIN-20161202170900-00193-ip-10-31-129-80.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/quantum-mechanics-a-non-normalizable-state.245679/ | # Quantum Mechanics: a non-normalizable state
1. Jul 18, 2008
### jalobo
1. The problem statement, all variables and given/known data
At a given moment, the wave function of a particle is in a non-normalizable state $$\Psi$$(x) = 1 + sin²(kx). By measuring its kinetic energy, what values are possible and with what probability?.
2. Relevant equations
3. The attempt at a solution
I can't calculate the probability because the integral that define it is unbounded.
($$\Psi$$, $$\widehat{K}\Psi$$) = $$\infty$$
2. Jul 18, 2008
### Dick
You can express that state as a sum of states with definite momentum and hence definite energy. What are their relative amplitudes? Hint,
sin(kx)=(exp(ikx)-exp(-ikx))/2i.
3. Jul 20, 2008
### jalobo
$$\Psi$$(x) = exp{i(2k)x} - $$\frac{1}{4}$$exp{i(-2k)x} + (1+i)exp{i(0)x},
that is, I have expressed the wave function as a linear combination of eigenstates of the linear momentum.
So, we have:
$$\left\|\Psi\right\|$$² = 1 + $$\frac{1}{16}$$+2 $$\Rightarrow\left\|\Psi\right\|$$ = $$\frac{7}{4}$$
and, using the same notation, the normalized wave function is
$$\Psi$$(x) = $$\frac{4}{7}$$ exp{i(2k)x} - $$\frac{1}{7}$$ exp{i(-2k)x} + $$\left(\frac{4}{7} + \frac{4}{7} i \right)$$ exp{i(0)x}.
Hence,
Pr($$p_{x}$$ = 2k) = $$\frac{16}{49}$$,
Pr($$p_{x}$$ = -2k) = $$\frac{1}{49}$$,
Pr($$p_{x}$$ = 0) = $$\frac{32}{49}$$,
and therefore, denoting the kinetic energy by T, we finally have:
Pr(T = 2k²/2m) = $$\frac{17}{49}$$,
Pr(T = 0) = $$\frac{32}{49}$$,
as the kinetic energy is related to linear momentum through the expression
T = $$\frac{p^{2}_{x}}{2m}$$.
Is it right?.
4. Jul 20, 2008
### Dick
Something went wrong with the very first line. I don't see how e^(2ikx) and e^(-2ikx) can have different coefficients. Did you miss a bracket in
sin(kx)=[exp(ikx)-exp(-ikx)]/(2i)? Your general approach is right, though.
5. Jul 20, 2008
### jalobo
Oh, yes, I missed a bracket in sin(kx)=[exp(ikx)-exp(-ikx)]/(2i). Thank you very much.
Let's see if now is entirely correct.
Taking into account the identity sin(kx) = $$\frac{exp(ikx)-exp(-ikx)}{2i}$$,
the non-normalizable wave function $$\Psi (x)$$ = 1 + sin²(kx) can be expressed as a linear combination of eigenstates of the linear momentum,
$$\Psi (x)$$ = $$\frac{3}{2}$$ exp{i(0)x} - $$\frac{1}{4}$$ exp{i(2k)x} - $$\frac{1}{4}$$ exp{i(-2k)x},
So, we have:
$$\left\|\Psi\right\|$$² = $$\frac{19}{8}\Rightarrow\left\|\Psi\right\|$$ = $$\frac{\sqrt{38}}{4}$$
and the normalized wave function, using the same notation, is
$$\Psi$$(x) = $$\frac{6}{\sqrt{38}}$$ exp{i(0k)x} - $$\frac{1}{\sqrt{38}}$$ exp{i(2k)x} - $$\frac{1}{\sqrt{38}}$$ exp{i(-2k)x}
Thus, by measuring the observable linear momentum, the only possible values are those relating to these three eigenstates of the linear momentum, because in the process of measuring, the wave function is projected onto one of them (collapse or reduction of the state), and the probability for each value of the linear momentum is given by the square of the amplitude of the corresponding eigenfunction.
Hence,
Pr($$p_{x} = 0)$$ = $$\frac{36}{38}$$,
Pr($$p_{x} = 2k)$$ = $$\frac{1}{38}$$,
Pr($$p_{x} = -2k)$$ = $$\frac{1}{38}$$,
and therefore, denoting the kinetic energy by T, we finally have:
Pr(T = 0) = $$\frac{18}{19}$$,
Pr(T = 2k²/m) = $$\frac{1}{19}$$,
as the kinetic energy is related to linear momentum through the expression
T = $$\frac{p^{2}_{x}}{2m}$$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9641197323799133, "perplexity": 1407.2458546197668}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00358-ip-10-171-10-70.ec2.internal.warc.gz"} |
http://math.stackexchange.com/questions/317116/manipulating-log-function/317130 | Manipulating log function
I'm trying to find the value of a constant for $\ y(0) = 0$ in the following differential equation.
$$\ 2\ln(2x+3y-1) - {2x+3y \over 2} = 2x+3y + k$$
Of course when plugging in the values, I get $\ 2\ln(-1) = k$ which errors. When entering this into Wolfram Alpha, they suggest rearranging the equation from this format,
$$\ 2\ln(x-1) - {x \over 2} + constant$$
to
$$\ - {x \over 2} + 2\ln(1-x) + {1 \over 2} + constant$$
"Which is the equivalent for restricted x values", which indeed I have. This would leave me with a positive $\ln(1)$, which would solve my problem... but...
My question, how does this manipulation work? I've never seen this before. I don't understand how they've made that leap (or if it's even accurate).
Can anyone educate me?
-
Note that for any constants $a,b,c\in\mathbb{R}$, $a\ln(b(x+c))=a\ln(b)+a\ln(x+c)=\ln(b^{a})+\ln((x+c)^{a})$
Therefore, in this case we have:
$$2\ln(x-1)-\frac{x}{2}+c=2\ln(-1(1-x))-\frac{x}{2}+c=\ln((-1)^{2}(1-x)^{2})-\frac{x}{2}+c$$
And using the fact that $(-1)^{2}=1$, and $\ln(1)=0$ we have:
$$2\ln(x-1)-\frac{x}{2}+c=2\ln(1-x)-\frac{x}{2}+c$$
And then as amWhy points out, you can have $c=\frac{1}{2}+k$ for restricted values of $x$.
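For what it's worth, a quick symbolic check (my addition, not part of the original answer) that the two antiderivative forms have the same derivative, so they can only differ by a constant:

```python
import sympy as sp

x = sp.symbols('x')
d1 = sp.diff(2*sp.log(x - 1) - x/2, x)   # derivative of the first form
d2 = sp.diff(2*sp.log(1 - x) - x/2, x)   # derivative of the rearranged form
print(sp.simplify(d1 - d2))              # 0
```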
-
Brilliant! That's what I was looking for. Thank you. – AdrianR Feb 28 '13 at 20:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9099830985069275, "perplexity": 157.54162642142953}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507450252.23/warc/CC-MAIN-20141017005730-00196-ip-10-16-133-185.ec2.internal.warc.gz"} |
http://mathhelpforum.com/advanced-statistics/224881-p-value-t.html | # Math Help - P-Value and T
1. ## P-Value and T
so again with another p-value problem. but this time, gotta solve for t as well.
The sample mean and standard deviation from a random sample of 10 observations from a normal population were computed as x-bar = 23 and s = 9.
Calculate the value of the test statistic (3 decimals) and the p-value (4 decimals) of the test required to determine whether there is enough evidence to infer at the 5% significance level that the population mean is greater than 20.
t =
p-value =
so i get 1.054 for t which is correct:
(23 - 20) / (9 / sqrt 10) = 1.054
now the p-value is where i have the problem...
not even sure where to start.
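For reference, one way to get this p-value programmatically, sketched in Python/scipy (R's pt(), mentioned in the reply below, gives the same number); this assumes a one-sided test with df = n - 1 = 9:

```python
import numpy as np
from scipy import stats

xbar, mu0, s, n = 23, 20, 9, 10
t_stat = (xbar - mu0) / (s / np.sqrt(n))
p_value = stats.t.sf(t_stat, df=n - 1)       # upper-tail (one-sided) p-value
print(round(t_stat, 3), round(p_value, 4))   # 1.054 and roughly 0.16
```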
2. ## Re: P-Value and T
Hey arcticreaver.
You need to either use a computer program or a set of statistical tables to get the p-value. In R the command should be pt(). Have you used t-tables before? | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.951260507106781, "perplexity": 838.636853026505}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375095494.6/warc/CC-MAIN-20150627031815-00192-ip-10-179-60-89.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/mass-term-in-wave-equation.181383/ | # Mass term in wave equation
1. Aug 23, 2007
### jostpuur
I know how to write down solutions of wave equation
$$\partial^2_t u(t,x) = \partial^2_x u(t,x)$$
for given initial $u(0,x)$ and $\partial_t u(0,x)$ like this
$$u(t,x) = \frac{1}{2}\Big( u(0,x+t) + u(0,x-t) + \int\limits^{x+t}_{x-t} \partial_t u(0,y) dy\Big),$$
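(As a quick numerical sanity check of this formula, added for illustration: take $u(0,x)=\sin x$ with zero initial velocity, so the integral term drops out and the formula should reproduce the separated solution $\sin x\cos t$.)

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 201)
t = 0.7
u0 = np.sin                                    # u(0, x); initial velocity taken as zero

u_dalembert = 0.5 * (u0(x + t) + u0(x - t))    # the formula above with the integral term = 0
u_exact = np.sin(x) * np.cos(t)                # known solution of u_tt = u_xx for this data
print(np.max(np.abs(u_dalembert - u_exact)))   # ~1e-16
```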
But what about the equation
$$\partial^2_t u(t,x) = \partial^2_x u(t,x) - mu(t,x)$$
where m is some constant? Is there similar formula for this?
Last edited: Aug 23, 2007
2. Aug 23, 2007
### genneth
The equation is the Klein-Gordon equation. It has (relativistic) plane wave solutions, and is linear.
3. Aug 23, 2007
### jostpuur
I'm aware of this, and that is where the motivation comes from. But, for example, for m=0 the plane wave solutions don't tell the same things as the expression I wrote in the OP. The fact that u(t,x) depends only on what is in the interval [x-t, x+t] at time zero is quite manifest in that formula, but plane waves don't show it directly.
Last edited: Aug 23, 2007
4. Aug 23, 2007
### genneth
In the m=0 case, the plane wave solutions may be used to build up the solution, to any (reasonable) initial condition.
5. Aug 23, 2007
### jostpuur
My point was that the expression for solution in my OP tells things that the plane wave solutions don't tell.
6. Aug 23, 2007
### genneth
Such as?
7. Aug 23, 2007
### jostpuur
$u(t,x)$ depends on $u(0,x)$ and $\partial_t u(0,x)$ in the interval $[x-t,x+t]$. Initial configuration outside this interval doesn't matter.
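A quick numerical way to see that this locality survives the mass term: the leapfrog sketch below (with arbitrary illustrative parameters and zero initial velocity, nothing taken from the thread) evolves a localized bump under $\partial_t^2 u = \partial_x^2 u - mu$ and then checks the amplitude outside the cone $|x| \le t$.

```python
import numpy as np

# Leapfrog scheme for u_tt = u_xx - m*u with a bump initial condition and
# zero initial velocity; parameters are illustrative only.
m, L, nx = 1.0, 40.0, 4001
x = np.linspace(-L / 2, L / 2, nx)
dx = x[1] - x[0]
dt = 0.5 * dx                          # CFL-stable time step
u_prev = np.exp(-50 * x ** 2)          # bump supported (numerically) near x = 0

lap = np.zeros_like(u_prev)
lap[1:-1] = (u_prev[2:] - 2 * u_prev[1:-1] + u_prev[:-2]) / dx ** 2
u = u_prev + 0.5 * dt ** 2 * (lap - m * u_prev)   # first step via Taylor expansion

t = dt
while t < 10.0:
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx ** 2
    u_prev, u = u, 2 * u - u_prev + dt ** 2 * (lap - m * u)
    t += dt

# amplitude outside the light cone (plus a margin for the width of the bump);
# it stays vanishingly small: nothing propagates faster than the cone.
print(np.abs(u[np.abs(x) > t + 1.0]).max())
```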
8. Aug 23, 2007
### George Jones
Staff Emeritus
The equivalence of the general (m = 0) plane wave solution (i.e., solution using Fourier transforms) to the solution in your original post is posed as an exercise (with hints) in my favourite Fourier analysis book.
Last edited: Aug 23, 2007
9. Aug 25, 2007
### Chris Hillman
Recommend two books
Just thought I'd add that the solution of the one-dimensional wave equation, given in terms of an integral over initial data, which Jostpuur mentioned in his post, represents the tip of a rather large iceberg. As many here probably already know, one of the most illuminating approaches to either ODEs or PDEs involves Dirac deltas, Green's functions, integral transforms, and so on. Jostpuur's example illustrates one reason why this approach is so important: comparing analogous solutions for higher dimensional wave equations yields valuable insight into "accidents of low dimensions". It turns out that the behavior of solutions of the wave equation in dimensions 1,2,3 have completely different characters! (Hint: consider the light cone.)
I have yet to see a book covering Dirac deltas for PDEs which I really like, but a superb book from the applied perspective is Dean G. Duffy, Transform Methods for Solving Partial Differential Equations, CRC Press, 1994. Among textbooks providing readable introductions to the equations of mathematical physics generally, I like Guenther and Lee, Partial Differential Equations of Mathematical Physics and Integral Equations, Dover, 1996 (reprint of 1988 original), which is notable for a wise selection of the most useful topics from this vast subject. Both of these books should help readers gain insight into the effects of adding additional terms to say the wave equation.
Last edited: Aug 25, 2007
10. Aug 28, 2007
### MikeL#
Jostpurr,
I wonder if you have been using these equations to grapple with the physics of our moving through time?
Definitions:
$\tau$ = proper time
$\tau_c$ = proper time at centre of a mass distribution
$m = 1/ \tau_0$ = mass
Propositions:
A: $$u(t,x) e^{ims} = v(s,t,x)$$ is a more general idea so that:
$$0 = (\partial^2_t - \partial^2_x + m^2) v = (\partial^2_t - \partial^2_x - \partial^2_s) v$$
where $s = \tau$. So mass m lives throughout time s.
The time of space-time is sitting out there and mass may have a large temporal extent (width).
B: $$u(t,x) e^{-(ms)^2/2} = w(s,t,x)$$
$$0 = (\partial^2_t - \partial^2_x - \partial^2_s) w = (\partial^2_t - \partial^2_x + m^2[1 - (ms)^2]) w$$
where $s = \tau - \tau_c; \ and \ |s| < \tau_0$. So mass m only lives within the tiny temporal extent of |s| trapped by a matter constraining SHM potential like [1 - (ms)^2]. So we move through a temporal surface of space-time with a tiny temporal extent.
A & B are at extreme ends of visualising our temporal extent as we travel through time.
I think A is the way I was brought up as undergraduate student just learning SR and time-independent QM & statistical mechanics. The spatial extent of gas molecules in a box is the box itself (similarly for electrons in a metal) - so by analogy the temporal extent is like a 4th dimension of the ‘box’.
Of course B is a bit of a hotch-potch. The middle way to visualise the extent of the mass is by considering A and the inter-related gaussian distributions of (E, p & m) in (t, x, s). So mass does not drift forever back into the past and future but peaks at the ‘now’.
Last edited: Aug 28, 2007
11. Aug 30, 2007
### jostpuur
MikeL#, your post was confusing and interesting. At quick glance, to me it seems that solutions of
$$(\partial_t^2 - \partial_x^2 + m^2)u(t,x) = 0$$
could be written as fourier transforms of some solutions of
$$(\partial_t^2 - \partial_x^2 - \partial_m^2)u(t,x,m)=0$$
As if n-dimensional Klein-Gordon equation could be reduced to (n+1)-dimensional wave equation withouth the mass term. I haven't got into detail of this yet, but will probably do it at some point.
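A quick check of the separable case (only a sketch of the general Fourier statement): if $u(t,x)$ solves the Klein-Gordon equation above and one sets $v(t,x,s)=e^{ims}u(t,x)$, then $\partial_s^2 v = -m^2 v$, so

$$(\partial_t^2 - \partial_x^2 - \partial_s^2)v(t,x,s) = e^{ims}\,(\partial_t^2 - \partial_x^2 + m^2)u(t,x) = 0.$$

Conversely, given any solution $w(t,x,s)$ of the massless 2+1 dimensional wave equation, its Fourier component $\int w(t,x,s)\,e^{-ims}\,ds$ solves the massive 1+1 dimensional equation with parameter $m^2$.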
12. Aug 31, 2007
### Chris Hillman
I for one would appreciate it, MikeL, if you would confine this kind of speculation to this subforum:
https://www.physicsforums.com/forumdisplay.php?f=146
TIA!
Jostpuur, are you still interested in the mathematics of wave equations? One topic which hasn't been mentioned is factoring the wave operator by passing to a larger algebra.
13. Oct 16, 2007
### jostpuur
I know how to arrive at the solution of the wave equation with a factorization $\partial_t^2 - \partial_x^2=(\partial_t - \partial_x)(\partial_t + \partial_x)$, but I don't know what to do with the mass term. How do you factorize a sum of three squares?
$$A^2-B^2+C^2 = (A+\sqrt{B^2-C^2})(A-\sqrt{B^2-C^2}) = (A+B+C)(A-B+C) - 2AC$$
both seem quite useless here at least.
Last edited: Oct 17, 2007
14. Oct 17, 2007
### jostpuur
hmhm... I was paying attention to the "factoring" part, and missed the "larger algebra". Did you actually mean the Dirac's equation, or some kind of lower dimensional analogue to it?
15. Oct 18, 2007
### jostpuur
Okey.
$$\left[\begin{array}{cc} \partial_t^2-\partial_x^2+m^2 & 0 \\ 0 & \partial_t^2-\partial_x^2+m^2 \\ \end{array}\right] =\left[\begin{array}{cc} m & \partial_t-\partial_x \\ -\partial_t-\partial_x & m \\ \end{array}\right] \left[\begin{array}{cc} m & -\partial_t+\partial_x \\ \partial_t + \partial_x & m \\ \end{array}\right]$$
hmhm....
hm
Well this looks like it is going to lead somewhere, but it's probably not something that I could see in few seconds. That is a modified transport equation, after all. Let's see how this goes...
Last edited: Oct 18, 2007
16. Oct 22, 2007
### jostpuur
mass term in transport equation
For given initial configuration u(0,x) the solution of the transport equation
$$\partial_t u(t,x) - a\partial_x u(t,x) = 0$$
is given trivially by
$$u(t,x)=u(0,at+x)$$
From a book I could learn that the non-homogenous transport equation
$$\partial_t u(t,x) - a\partial_x u(t,x) = f(t,x)$$
for fixed f(t,x) is given by
$$u(t,x) = u(0,at+x) + \int\limits_0^t f(s,x+(t-s)a)ds$$
Now I want to find solution to the transport equation with a mass term,
$$\partial_t u(t,x) - a\partial_x u(t,x) = mu(t,x)$$
I could substitute $f(t,x)=mu(t,x)$, but that doesn't give the solution to the initial value problem, because f then depends on the unknown u itself.
Any clues what to do with this?
17. Oct 22, 2007
### jostpuur
It seems that an attempt
$$u(t,x)=u(0,x+at)e^{v(t,x)}$$
solves it.
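Indeed the simplest choice seems to be $v(t,x) = mt$. A quick check of that ansatz: with $u(t,x) = u(0,x+at)\,e^{mt}$,

$$\partial_t u - a\partial_x u = \Big(a\,u'(0,x+at) + m\,u(0,x+at)\Big)e^{mt} - a\,u'(0,x+at)\,e^{mt} = m\,u(t,x),$$

and at $t=0$ the exponential factor is 1, so the initial condition is untouched.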
18. Oct 28, 2007
### jostpuur
I think I found a way to write down the solution to my original problem using some infinite series of some strange integral expressions. It could be, that it should be called a perturbation series with respect to m. But the stuff got quite laborious, and I haven't started fighting with the details yet.
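One way to make that series concrete (a sketch only; the factorial growth of the iterated integrals makes it converge for every $t$): let $u_0(t,x)$ be the d'Alembert solution of the massless problem with the given initial data, and let

$$(Wf)(t,x) = \frac{1}{2}\int\limits_0^t \int\limits_{x-(t-s)}^{x+(t-s)} f(s,y)\,dy\,ds$$

be the Duhamel operator that solves $\partial_t^2 w - \partial_x^2 w = f$ with zero initial data. Treating $-mu$ as a source and iterating gives formally

$$u = \sum_{n=0}^{\infty}(-m)^n W^n u_0 = u_0 - mWu_0 + m^2W^2u_0 - \dots$$

Applying $\partial_t^2 - \partial_x^2$ term by term reproduces $-mu$, and each $W$ only integrates over the backward light cone, so the finite domain of dependence $[x-t,\,x+t]$ is preserved by every term.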
19. Nov 20, 2007
### Gaenn
$$C_D=\frac{D}{\frac{1}{2}\rho \upsilon^2 c}$$
20. Nov 20, 2007
### Graviton
Differential equations
Hi, if I tried to solve for the wavefunction in Schrödinger's equation in 3 dimensions, would it take me much time?
http://www.ck12.org/arithmetic/Fractions-as-Percents/lesson/Write-Percents-as-Fractions-MSM8/
# Fractions as Percents
## Convert fractions to percents.
Write Percents as Fractions
Apparently, if you can jump 2 m on Earth, you can jump 12 m on the moon. What percent of the height of a jump on Earth is the height of a jump on the moon?
In this concept, you will learn to write percents as fractions and fractions as percents.
### Percents as Fractions
Percents can be written as ratios with a denominator of 100, or they can be written as decimals. If a percent is written as a ratio with a denominator of 100, that ratio can be simplified just as you would simplify any fraction. Likewise, any fraction can be written as a percent by reversing the process.
To write a percent as a fraction rewrite it as a fraction with a denominator of 100. Then reduce the fraction to its simplest form.
Let’s look at an example.
Write 22% as a fraction.
First, write this as a fraction with a denominator of 100.
\begin{align*}22\% = \frac{22}{100}\end{align*}
Next, simplify.
\begin{align*}\frac{22}{100} = \frac{11}{50}\end{align*}
The answer is \begin{align*}\frac{11}{50}\end{align*}.
To convert a fraction to a percent, first, you need to be sure that the fraction is being compared to a quantity of 100.
Let’s look at one.
\begin{align*}\frac{28}{100} = ? \%\end{align*}
This means that you have 28 out of 100.
Next, this fraction is being compared to 100, so you can simply change it to a percent.
\begin{align*}\frac{28}{100} = 28\%\end{align*}
Here is another one.
Convert \begin{align*}\frac{3}{5}\end{align*} into a percent.
First, this fraction is not being compared to 100. It is being compared to 5. You have 3 out of 5. To convert this fraction to a percent, you need to rewrite it as an equal ratio out of 100. You can use proportions to do this. Write this ratio compared to a second ratio out of 100.
\begin{align*}\frac{3}{5} = \frac{?}{100}\end{align*}
Next, use multiplication to create equal ratios or a proportion.
\begin{align*}\begin{array}{rcl} 5 \times 20 &=& 100 \\ 3 \times 20 &=& 60 \end{array}\end{align*}
Next, put these together.
\begin{align*}\frac{3}{5} = \frac{60}{100}\end{align*}
Then, change the fraction to a percent.
\begin{align*}\frac{60}{100} = 60\%\end{align*}
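If you have Python handy, one quick way to check conversions like these is the built-in fractions module; this is only a quick sketch of the same two steps.

```python
from fractions import Fraction

def percent_to_fraction(percent):
    """A percent written as a fraction in simplest form, e.g. 22 -> 11/50."""
    return Fraction(percent, 100)

def fraction_to_percent(numerator, denominator):
    """A fraction written as a percent, e.g. 3/5 -> 60.0."""
    return numerator / denominator * 100

print(percent_to_fraction(22))      # 11/50
print(fraction_to_percent(3, 5))    # 60.0
```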
### Examples
#### Example 1
Earlier, you were given a problem about the lunar jump.
You know that you can jump 2 m on the earth and 12 m on the moon. You are trying to find the percent of the height on earth compared to the height on the moon.
First, write a fraction to represent this problem.
\begin{align*}\frac{Earth}{Moon} = \frac{2}{12}\end{align*}
Next, write this as a proportion with a denominator of 100. Start by writing the fraction with \begin{align*}x\end{align*} as the unknown numerator out of 100.
\begin{align*}\frac{2}{12} = \frac{x}{100}\end{align*}
Next, cross multiply to solve for the unknown variable.
\begin{align*}2(100) = 12(x)\end{align*}
Then, solve for \begin{align*}x\end{align*} by dividing both sides of the equation by 12.
\begin{align*}\begin{array}{rcl} 2(100) &=& 12(x) \\ 200 &=& 12x \\ \frac{200}{12} &=& \frac{12x}{12} \\ x &=& 16.67 \end{array}\end{align*}
The jump on earth is 16.67% of the jump you can make on the moon.
#### Example 2
Jack’s baseball team won 9 out of 12 games. What percent of the games played did the team win? What percent did of the games played did the team lose?
First, write the number of games won as a fraction.
\begin{align*}\frac{9}{12}\end{align*}
Next, write this as a proportion with a denominator of 100. Start by writing the fraction with \begin{align*}x\end{align*} as the unknown numerator out of 100.
\begin{align*}\frac{9}{12} = \frac{x}{100}\end{align*}
Then, cross multiply to solve for the unknown variable.
\begin{align*}9(100) = 12x\end{align*}
Then, solve for \begin{align*}x\end{align*} by dividing both sides of the equation by 12.
\begin{align*}\begin{array}{rcl} 9(100) &=& 12(x) \\ 900 &=& 12x \\ \frac{900}{12} &=& \frac{12x}{12} \\ x &=& 75 \end{array}\end{align*}
The team wins 75% of the time.
The team then loses 25% of the time.
#### Example 3
Write \begin{align*}\frac{44}{100}\end{align*} as a percent.
First, remember that a percent means that the denominator is 100. Since this fraction is already out of 100, you can convert it to a percent.
\begin{align*}\frac{44}{100} = 44\%\end{align*}
#### Example 4
Write \begin{align*}\frac{1}{2}\end{align*} as a percent.
First, this fraction is not being compared to 100. It is being compared to 2. You need to rewrite it as an equal ratio out of 100. You can use proportions to do this. Write this ratio compared to a second ratio out of 100.
\begin{align*}\frac{1}{2} = \frac{?}{100}\end{align*}
Next, use multiplication to create equal ratios or a proportion.
\begin{align*}\begin{array}{rcl} 2 \times 50 &=& 100 \\ 1 \times 50 &=& 50 \end{array}\end{align*}
Next, put these together.
\begin{align*}\frac{1}{2} = \frac{50}{100}\end{align*}
Then, change the fraction to a percent.
\begin{align*}\frac{50}{100} = 50\%\end{align*}
#### Example 5
Write \begin{align*}\frac{5}{7}\end{align*} as a percent.
First, write this as a proportion with a denominator of 100. Start by writing the fraction with \begin{align*}x\end{align*} as the unknown numerator out of 100.
\begin{align*}\frac{5}{7} = \frac{x}{100}\end{align*}
Next, cross multiply to solve for the unknown variable.
\begin{align*}5(100) = 7(x)\end{align*}
Then, solve for \begin{align*}x\end{align*} by dividing both sides of the equation by 7.
\begin{align*}\begin{array}{rcl} 5(100) &=& 7(x) \\ 500 &=& 7x \\ \frac{500}{7} &=& \frac{7x}{7} \\ x &=& 71.4 \end{array}\end{align*}
The answer is approximately 71.4%.
### Review
Write the following percent values as fractions in simplest form.
1. 16%
2. 40%
3. 2%
4. 4%
5. 45%
6. 20%
7. 18%
8. 10%
Write the following fractions as a percent. Round when necessary.
9. \begin{align*}\frac{2}{3}\end{align*}
10. \begin{align*}\frac{23}{30}\end{align*}
11. \begin{align*}\frac{4}{75}\end{align*}
12. \begin{align*}\frac{21}{2}\end{align*}
13. \begin{align*}\frac{4}{5}\end{align*}
14. \begin{align*}\frac{6}{10}\end{align*}
15. \begin{align*}\frac{3}{25}\end{align*}
### Vocabulary
| Term | Definition |
| --- | --- |
| Decimal | In common use, a decimal refers to part of a whole number. The numbers to the left of a decimal point represent whole numbers, and each number to the right of a decimal point represents a fractional part of a power of one-tenth. For instance, the decimal value 1.24 indicates 1 whole unit, 2 tenths, and 4 hundredths (commonly described as 24 hundredths). |
| Fraction | A fraction is a part of a whole. A fraction is written mathematically as one value on top of another, separated by a fraction bar. It is also called a rational number. |
| Percent | Percent means out of 100. It is a quantity written with a % sign. |
| Proportion | A proportion is an equation that shows two equivalent ratios. |
https://worldwidescience.org/topicpages/p/particle+induced+reactions.html | #### Sample records for particle induced reactions
1. Analysis of charged particle induced reactions for beam monitor applications
Energy Technology Data Exchange (ETDEWEB)
Surendra Babu, K. [IOP, Academia Sinica, Taipe, Taiwan (China); Lee, Young-Ouk [Nuclear Data Evaluation Laboratory, Korea Atomic Energy Research Institute (Korea, Republic of); Mukherjee, S., E-mail: [email protected] [Department of Physics, Faculty of Science, M.S. University of Baroda, Vadodara 390 002 (India)
2012-07-15
The reaction cross sections for different residual nuclides produced in the charged particle (p, d, ³He and α) induced reactions were calculated and compared with the existing experimental data which are important for beam monitoring and medical diagnostic applications. A detailed literature compilation and comparison were made on the available data sets for the above reactions. These calculations were carried out using the statistical model code TALYS up to 100 MeV, which contains Kalbach's latest systematics for the emission of complex particles and complex particle-induced reactions. All optical model calculations were performed by ECIS-03, which is built into TALYS. The level density and optical model potential parameters were adjusted to get a better description of the experimental data. Various pre-equilibrium models were used in the present calculations with default parameters.
2. Nuclear reactions induced by high-energy alpha particles
Science.gov (United States)
Shen, B. S. P.
1974-01-01
Experimental and theoretical studies of nuclear reactions induced by high energy protons and heavier ions are included. Fundamental data needed in the shielding, dosimetry, and radiobiology of high energy particles produced by accelerators were generated, along with data on cosmic ray interaction with matter. The mechanism of high energy nucleon-nucleus reactions is also examined, especially for light target nuclei of mass number comparable to that of biological tissue.
3. Charged particle induced thermonuclear reaction rates: a compilation for astrophysics
International Nuclear Information System (INIS)
Grama, C.
1999-01-01
We report on the results of the European network NACRE (Nuclear Astrophysics Compilation of REaction rates). The principal reason for setting up the NACRE network has been the necessity of building up a well-documented and detailed compilation of rates for charged-particle induced reactions on stable targets up to Si and on unstable nuclei of special significance in astrophysics. This work is meant to supersede the only existing compilation of reaction rates issued by Fowler and collaborators. The main goal of NACRE network was the transparency in the procedure of calculating the rates. More specifically this compilation aims at: 1. updating the experimental and theoretical data; 2. distinctly identifying the sources of the data used in rate calculation; 3. evaluating the uncertainties and errors; 4. providing numerically integrated reaction rates; 5. providing reverse reaction rates and analytical approximations of the adopted rates. The cross section data and/or resonance parameters for a total of 86 charged-particle induced reactions are given and the corresponding reaction rates are calculated and given in tabular form. Uncertainties are analyzed and realistic upper and lower bounds of the rates are determined. The compilation is concerned with the reaction rates that are large enough for the target lifetimes shorter than the age of the Universe, taken equal to 15 x 10^9 y. The reaction rates are provided for temperatures lower than T = 10^10 K. In parallel with the rate compilation a cross section data base has been created and located at the site http://pntpm.ulb.ac.be/nacre..htm. (authors)
4. Activation cross-section data for α-particle-induced nuclear reactions ...
B M ALI
2018-02-20
particle-induced nuclear reactions on natural vanadium up to 20 MeV. It should be mentioned that this study represents a part of (a supplement) systematical study of charged particles-induced nuclear reactions. Earlier studies were.
5. α-particle induced reactions on yttrium and terbium
Energy Technology Data Exchange (ETDEWEB)
Mukherjee, S.; Kumar, B.B. [School of Studies in Physics, Vikram University, Ujjain-456010 (India); Rashid, M.H. [Variable Energy Cyclotron Center, 1/AF, Bidhan Nagar, Calcutta (India); Chintalapudi, S.N. [Inter-University Consortium for DAE Facilities, 3/LB, Bidhan Nagar, Calcutta (India)
1997-05-01
The stacked foil activation technique has been employed for the investigation of α-particle induced reactions on the target elements yttrium and terbium up to 50 MeV. Six excitation functions for the (α,xn) type of reactions were studied using high-resolution HPGe γ-ray spectroscopy. A comparison with Blann's geometric dependent hybrid model has been made using the initial exciton number n_0=4(4p0h) and n_0=5(5p0h). A broad general agreement is observed between the experimental results and theoretical predictions with an initial exciton number n_0=4(4p0h). © 1997 The American Physical Society
6. α-particle induced reactions on yttrium and terbium
International Nuclear Information System (INIS)
Mukherjee, S.; Kumar, B.B.; Rashid, M.H.; Chintalapudi, S.N.
1997-01-01
The stacked foil activation technique has been employed for the investigation of α-particle induced reactions on the target elements yttrium and terbium up to 50 MeV. Six excitation functions for the (α,xn) type of reactions were studied using high-resolution HPGe γ-ray spectroscopy. A comparison with Blann's geometric dependent hybrid model has been made using the initial exciton number n_0=4(4p0h) and n_0=5(5p0h). A broad general agreement is observed between the experimental results and theoretical predictions with an initial exciton number n_0=4(4p0h). © 1997 The American Physical Society
7. Report of the workshop on 'light particle-induced reactions'
International Nuclear Information System (INIS)
1992-01-01
The study meeting on light particle (mass number = 3 - 11) induced reactions was held for three days, from December 5 to 7, 1991, at the Research Center for Nuclear Physics, Osaka University. This book records the reports based on the lectures presented at the meeting. In the new facility of the RCNP, experiments on nuclear reactions using 400 MeV polarized protons and 200 MeV polarized deuterons are about to begin. When the acceleration of the polarized He-3 beam now under development becomes feasible, combining it with the high resolution spectrometer GRAND RAIDEN is expected to make unique, high accuracy research with intermediate energy (540 MeV) polarized He-3 possible. With this in mind, the meeting was planned around the question of what new physics can be developed with nuclear reactions induced by composite particles of mass number 3 - 11 at intermediate energies. As a result, the 29 lectures collected in this book cover a wide range of fields, and active discussion was carried out. (K.I.)
8. A facility for low energy charged particle induced reaction studies
International Nuclear Information System (INIS)
Vilaithong, T.; Singkarat, S.; Yu, L.D.; Intarasiri, S.; Tippawan, U.
2000-01-01
In Chiang Mai, a highly stable low energy ion accelerator (0 - 350 kV) facility is being established. A subnano-second pulsing system will be incorporated into the beam transport line. The detecting system will consist of a time-of-flight charged particle spectrometer and a high resolution gamma-ray system. The new facility will be used in the studies of low energy heavy ion backscattering and charged particle induced cross section measurement in the interests of material characterization and nucleosynthesis. (author)
9. Cross sections of nuclear reactions induced by protons, deuterons, and alpha particles. Pt.6. Phosphorus
International Nuclear Information System (INIS)
Tobailem, Jacques.
1981-11-01
Cross sections are reviewed for nuclear reactions induced by protons, deuterons, and alpha particles on phosphorus targets. When necessary, published experimental data are corrected, and, when possible, excitation functions are proposed [fr
10. Light particle emission in light heavy-ion induced reactions
International Nuclear Information System (INIS)
Bozek, E.; Cassagnou, Y.; Dayras, R.
1982-01-01
A detailed study was made of the different processes which may compete with fusion in the energy domain where the cross section for fusion deviates from the reaction cross section. Both reactions 14 N + 12 C and 16 O + 10 B were used to form the compound nucleus 26 Al at the same excitation energy of 44 MeV
11. Measurement and analysis of $\\alpha$ particle induced reactions on yttrium
CERN Document Server
Singh, N L; Chintalapudi, S N
2000-01-01
Excitation functions for 89Y[(α,3n); (α,4n); (α,p3n); (α,αn); (α,α2n)] reactions were measured up to 50 MeV using stacked foil activation technique and HPGe gamma ray spectroscopy method. The experimental data were compared with calculations considering equilibrium as well as preequilibrium reactions according to the hybrid model of Blann (ALICE/90). For (α,xnyp) type of reactions, the precompound contributions are described by the model. There seems to be indications of direct inelastic scattering effects in (α,αxn) type of reactions. To the best of our knowledge, the excitation functions for (α,4n), (α,p3n), (α,αn) and (α,α2n) reactions were measured for the first time. (23 refs).
12. Measurement and analysis of alpha particle induced reactions on yttrium
Energy Technology Data Exchange (ETDEWEB)
Singh, N.L.; Gadkari, M.S. [Baroda Univ. (India). Dept. of Physics; Chintalapudi, S.N. [IUC-DAEF Calcutta Centre, Calcutta (India)
2000-05-01
Excitation functions for 89Y[(α,3n); (α,4n); (α,p3n); (α,αn); (α,α2n)] reactions were measured up to 50 MeV using stacked foil activation technique and HPGe gamma ray spectroscopy method. The experimental data were compared with calculations considering equilibrium as well as preequilibrium reactions according to the hybrid model of Blann (ALICE/90). For (α,xnyp) type of reactions, the precompound contributions are described by the model. There seems to be indications of direct inelastic scattering effects in (α,αxn) type of reactions. To the best of our knowledge, the excitation functions for (α,4n), (α,p3n), (α,αn) and (α,α2n) reactions were measured for the first time. (orig.)
13. SCALP: Scintillating ionization chamber for ALPha particle production in neutron induced reactions
Science.gov (United States)
Galhaut, B.; Durand, D.; Lecolley, F. R.; Ledoux, X.; Lehaut, G.; Manduci, L.; Mary, P.
2017-09-01
The SCALP collaboration aims to build a scintillating ionization chamber in order to study and measure the cross section of α-particle production in neutron induced reactions, more specifically on 16O and 19F targets. Using the deposited energy (ionization) and the time of flight measurement (scintillation) with great accuracy, all the nuclear reactions involved in this project will be identified.
14. light charged particles induced nuclear reaction on some medium weight nuclei for particles applications
International Nuclear Information System (INIS)
Mohsena, B.M.A.M.
2011-01-01
The radioisotopes of indium, cadmium and tin have many practical and medical applications. Their standard routes for production are proton or deuteron induced reactions on natural or enriched cadmium or tin. The production via 3He induced reactions on natural or enriched cadmium was rarely discussed. In this study 3He induced reactions on natural cadmium were measured utilizing the stacked-foil technique. The primary incident beam energy was 27 MeV, extracted from the MGC-20E cyclotron, Debrecen, Hungary. The excitation functions for the reactions natCd(3He,x)115g,111mCd, 117m,g,116m,115m,114m,113m,111g,110m,g,109g,108g,107gIn and 117m,113,111,110Sn were evaluated. The data were compared with the available literature data. Different theoretical nuclear reaction models were also used to predict the cross sections for those reactions. The models used were ALICE-IPPE, TALYS-1.2 and EMPIRE-03. The experimental data were also compared to the theoretical model calculations. The theoretical models did not describe most of the experimental results. The isomeric cross section ratios for the isomeric pairs 117m,gIn and 110m,gIn were calculated. The isomeric cross section ratio depends on the spins of the states of the isomeric pair of interest. The calculated isomeric ratios helped to identify the mechanisms of the reactions involved. The integral yields for some medically relevant isotopes were calculated using the excitation function curves.
15. Pre-equilibrium decay process in alpha particle induced reactions on thulium and tantalum
International Nuclear Information System (INIS)
Mohan, Rao, A.V.; Chintalapudi, S.N.
1994-01-01
Alpha particle induced reactions on the target elements thulium and tantalum were investigated up to 60 MeV using the stacked foil activation technique and the Ge(Li) gamma ray spectroscopy method. Excitation functions for six reactions of 169Tm(α,xn); x=1-4 and 181Ta(α,xn); x=2,4 were studied. The experimental results were compared with the updated version of the hybrid model (ALICE/90) using the initial exciton configuration n_0=4(4p0h). A general agreement was found for all the reactions with this option. (author)
16. Pre-equilibrium decay process in alpha particle induced reactions on thulium and tantalum
Energy Technology Data Exchange (ETDEWEB)
Mohan, Rao, A.V.; Chintalapudi, S.N. (Inter Univ. Consortium for Dept. of atomic Energy Facilities, Calcutta (India))
1994-01-01
Alpha particle induced reactions on the target elements thulium and tantalum were investigated up to 60 MeV using the stacked foil activation technique and the Ge(Li) gamma ray spectroscopy method. Excitation functions for six reactions of 169Tm(α,xn); x=1-4 and 181Ta(α,xn); x=2,4 were studied. The experimental results were compared with the updated version of the hybrid model (ALICE/90) using the initial exciton configuration n_0=4(4p0h). A general agreement was found for all the reactions with this option. (author).
17. Charged-particle induced thermonuclear reaction rates: a compilation for astrophysics
International Nuclear Information System (INIS)
Grama, Cornelia; Angulo, C.; Arnould, M.
2000-01-01
The rapidly growing wealth of nuclear data becomes less and less easily accessible to the astrophysics community. Mastering this volume of information and making it available in an accurate and usable form for incorporation into stellar evolution or nucleosynthesis models become urgent goals of prime necessity. We report on the results of the European network NACRE (Nuclear Astrophysics Compilation of REaction rates). The principal motivation for the setting-up of the NACRE network has been the necessity of building up a well-documented and detailed compilation of rates for charged-particle induced reactions on stable targets up to Si and on unstable nuclei of special significance in astrophysics. This work is meant to supersede the only existing compilation of reaction rates issued by Fowler and collaborators. The cross section data and/or resonance parameters for a total of 86 charged-particle induced reactions are given and the corresponding reaction rates are calculated and given in tabular form. When cross section data are not available in the whole needed range of energies, the theoretical predictions obtained in the framework of the Hauser-Feshbach model are used. Uncertainties are analyzed and realistic upper and lower bounds of the rates are determined. Reverse reaction rates and analytical approximations of the adopted rates are also provided. (authors)
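The tabulated rates in compilations like this are essentially Maxwellian averages of the cross sections. A minimal numerical sketch of that average is given below; the constant 1-barn cross section and the temperature and reduced-mass values are purely illustrative placeholders, not NACRE data, and real calculations would use the compiled S-factors with resonances and screening included.

```python
import numpy as np

def maxwellian_rate(sigma, kT, mu, emax=None):
    """Thermal average <sigma*v> for a two-body reaction (cgs units).

    sigma : function returning the cross section (cm^2) at energy E (erg)
    kT    : temperature in energy units (erg)
    mu    : reduced mass of the colliding pair (g)
    """
    if emax is None:
        emax = 50 * kT                       # integrand is negligible far above kT
    E = np.linspace(1e-3 * kT, emax, 20000)  # centre-of-mass energies
    integrand = sigma(E) * E * np.exp(-E / kT)
    integral = np.trapz(integrand, E)
    return np.sqrt(8.0 / (np.pi * mu)) * kT ** -1.5 * integral   # cm^3 s^-1

# purely illustrative constant cross section of 1 barn at T = 1e7 K, mu = m_p/2
rate = maxwellian_rate(lambda E: 1e-24 + 0 * E,
                       kT=1.380649e-16 * 1e7,
                       mu=1.6726e-24 / 2)
print(rate)
```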
18. Charged-particle induced thermonuclear reaction rates: a compilation for astrophysics
International Nuclear Information System (INIS)
Grama, Cornelia
1999-01-01
The rapidly growing wealth of nuclear data becomes less and less easily accessible to the astrophysics community. Mastering this volume of information and making it available in an accurate and usable form for incorporation into stellar evolution or nucleosynthesis models become urgent goals of prime necessity. We report on the results of the European network NACRE (Nuclear Astrophysics Compilation of REaction rates). The principal motivation for the setting-up of the NACRE network has been the necessity of building up a well-documented and detailed compilation of rates for charged -particle induced reactions on stable targets up to Si and on unstable nuclei of special significance in astrophysics. This work is meant to supersede the only existing compilation of reaction rates issued by Fowler and collaborators. The cross section data and/or resonance parameters for a total of 86 charged-particle induced reactions are given and the corresponding reaction rates are calculated and given in tabular form. When cross section data are not available in the whole needed range of energies the theoretical predictions obtained in the framework of the Hauser-Feshbach model are used. Uncertainties are analyzed and realistic upper and lower bounds of the rates are determined. Reverse reaction rates and analytical approximations of the adopted rates are also provided. (author)
19. Sequential charged particle reaction
International Nuclear Information System (INIS)
Hori, Jun-ichi; Ochiai, Kentaro; Sato, Satoshi; Yamauchi, Michinori; Nishitani, Takeo
2004-01-01
The effective cross sections for producing the sequential reaction products in F82H, pure vanadium and LiF with respect to 14.9-MeV neutrons were obtained and compared with the estimated ones. Since the sequential reactions depend on the behavior of the secondary charged particles, the effective cross sections correspond to the target nuclei and the material composition. The effective cross sections were also estimated by using the EAF libraries and compared with the experimental ones. There were large discrepancies between the estimated and experimental values. Additionally, we showed the contribution of the sequential reactions to the induced activity and dose rate in the boundary region with water. From the present study, it has been clarified that the sequential reactions are of great importance in evaluating the dose rates around the surface of cooling pipes and the activated corrosion products. (author)
20. Charged-particle magnetic-quadrupole spectrometer for neutron induced reactions
International Nuclear Information System (INIS)
Haight, R.C.; Grimes, S.M.; Tuckey, B.J.; Anderson, J.D.
1975-01-01
A spectrometer has been developed for measuring the charged particle production cross sections and spectra in neutron-induced reactions. The spectrometer consists of a magnetic quadrupole doublet which focuses the charged particles onto a silicon surface barrier detector telescope which is 2 meters or more from the irradiated sample. Collimators, shielding, and the large source-to-detector distance reduce the background enough to use the spectrometer with a 14-MeV neutron source producing 4 x 10^12 n/s. The spectrometer has been used in investigations of proton, deuteron, and alpha particle production by 14-MeV neutrons incident on various materials. Protons with energies as low as 1.1 MeV have been measured. The good resolution of the detectors has also made possible an improved measurement of the neutron-neutron scattering length from the 0° proton spectrum from deuteron breakup by 14-MeV neutrons
1. Cross section measurement of alpha particle induced nuclear reactions on natural cadmium up to 52 MeV
OpenAIRE
Ditrói, F.; Takács, S.; Haba, H.; Komori, Y.; Aikawa, M.
2016-01-01
Cross sections of alpha particle induced nuclear reactions have been measured on thin natural cadmium targets foils in the energy range from 11 to 51.2 MeV. This work was a part of our systematic study on excitation functions of light ion induced nuclear reactions on different target materials. Regarding the cross sections, the alpha induced reactions are not deeply enough investigated. Some of the produced isotopes are of medical interest, others have application in research and industry. Th...
2. Effect of free-particle collisions in high energy proton and pion-induced nuclear reactions
International Nuclear Information System (INIS)
Jacob, N.P. Jr.
1975-07-01
The effect of free-particle collisions in simple "knockout" reactions of the form (a,aN) and in more complex nuclear reactions of the form (a,X) was investigated by using protons and pions. Cross sections for the 48 Ti(p,2p) 47 Sc and the 74 Ge(p,2p) 73 Ga reactions were measured from 0.3 to 4.6 GeV incident energy. The results indicate a rise in (p,2p) cross section for each reaction of about (25 ± 3) percent between the energies 0.3 and 1.0 GeV, and are correlated to a large increase in the total free-particle pp scattering cross sections over the same energy region. Results are compared to previous (p,2p) excitation functions in the GeV energy region and to (p,2p) cross section calculations based on a Monte Carlo intranuclear cascade-evaporation model. Cross section measurements for (π±,πN) and other more complex pion-induced spallation reactions were measured for the light target nuclei 14 N, 16 O, and 19 F from 45 to 550 MeV incident pion energy. These measurements indicate a broad peak in the excitation functions for both (π,πN) and (π,X) reactions near 180 MeV incident energy. This corresponds to the large resonances observed in the free-particle π+p and π-p cross sections at the same energy. Striking differences in (π,πN) cross section magnitudes are observed among the light nuclei targets. The experimental cross section ratio σ(π-,π-n)/σ(π+,πN) at 180 MeV is 1.7 ± 0.2 for all three targets. The experimental results are compared to previous pion and analogous proton-induced reactions, to Monte Carlo intranuclear cascade-evaporation calculations, and to a semi-classical nucleon charge exchange model. (108 references) (auth)
3. Production of medically useful bromine isotopes via alpha-particle induced nuclear reactions
Science.gov (United States)
Breunig, Katharina; Scholten, Bernhard; Spahn, Ingo; Hermanne, Alex; Spellerberg, Stefan; Coenen, Heinz H.; Neumaier, Bernd
2017-09-01
The cross sections of α-particle induced reactions on arsenic leading to the formation of 76,77,78Br were measured from their respective thresholds up to 37 MeV. Thin sediments of elemental arsenic powder were irradiated together with Al degrader and Cu monitor foils using the established stacked-foil technique. For determination of the effective α-particle energies and of the effective beam current through the stacks the cross-section ratios of the monitor nuclides 67Ga/66Ga were used. This should help resolve discrepancies in existing literature data. Comparison of the data with the available excitation functions shows some slight energy shifts as well as some differences in curve shapes. The calculated thick target yields indicate, that 77Br can be produced in the energy range Eα = 25 → 17 MeV free of isotopic impurities in quantities sufficient for medical application.
4. Production of medically useful bromine isotopes via alpha-particle induced nuclear reactions
Directory of Open Access Journals (Sweden)
Breunig Katharina
2017-01-01
Full Text Available The cross sections of α-particle induced reactions on arsenic leading to the formation of 76,77,78Br were measured from their respective thresholds up to 37 MeV. Thin sediments of elemental arsenic powder were irradiated together with Al degrader and Cu monitor foils using the established stacked-foil technique. For determination of the effective α-particle energies and of the effective beam current through the stacks the cross-section ratios of the monitor nuclides 67Ga/66Ga were used. This should help resolve discrepancies in existing literature data. Comparison of the data with the available excitation functions shows some slight energy shifts as well as some differences in curve shapes. The calculated thick target yields indicate, that 77Br can be produced in the energy range Eα = 25 → 17 MeV free of isotopic impurities in quantities sufficient for medical application.
5. Investigation of the production of slow particles in 60 A GeV 16O induced nuclear emulsion reaction
International Nuclear Information System (INIS)
Zhang Donghai
2001-01-01
The multiplicity distributions and correlations of grey track producing particles (N_g), black track producing particles (N_b) and heavy track producing particles (N_h) have been studied in the 60 A GeV 16 O induced nuclear emulsion reaction. The multiplicity distributions of grey particles, black particles and heavy track producing particles can be reproduced by FRITIOF (version 1.7) taking the cascade mechanism into account and by DTUNUC 2.0 with an incident energy of 200 A GeV. The mean multiplicity of black particles <N_b> increases with the number of grey particles N_g up to 10 and then exhibits a saturation for peripheral, central and minimum-bias events; the average values of grey particles <N_g> (heavy track producing particles <N_h>) increase with increasing values of black particle N_b (grey particle N_g).
6. Gas-to-particle conversion in the atmospheric environment by radiation-induced and photochemical reactions
International Nuclear Information System (INIS)
Vohra, K.G.
1975-01-01
During the last few years a fascinating new area of research involving ionizing radiations and photochemistry in gas-to-particle conversion in the atmosphere has been developing at a rapid pace. Two problems of major interest and concern in which this is of paramount importance are: (1) radiation induced and photochemical aerosol formation in the stratosphere and, (2) role of radiations and photochemistry in smog formation. The peak in cosmic ray intensity and significant solar UV flux in the stratosphere lead to complex variety of reactions involving major and trace constituents in this region of the atmosphere, and some of these reactions are of vital importance in aerosol formation. The problem is of great current interest because the pollutant gases from industrial sources and future SST operations entering the stratosphere could increase the aerosol burden in the stratosphere and affect the solar energy input of the troposphere with consequent ecological and climatic changes. On the other hand, in the nuclear era, the atmospheric releases from reactors and processing plants could lead to changes in the cloud nucleation behaviour of the environment and possible increase in smog formation in the areas with significant levels of radiations and conventional pollutants. A review of the earlier work, current status of the problem, and some recent results of the experiments conducted in the author's laboratory are presented. The possible mechanisms of gas-to-particle conversion in the atmosphere have been explained
7. Alpha-particle energy spectra measured at forward angles in heavy-ion-induced reactions
International Nuclear Information System (INIS)
Borcea, C.; Cierlic, E.; Kalpakchieva, R.; Oganessian, Yu.Ts.; Penionzhkevich, Yu.E.
1980-01-01
Energy spectra have been measured for α-particles emitted in the bombardment of 159 Tb, 181 Ta, 197 Au, and 232 Th nuclei by 20 Ne, 22 Ne, and 40 Ar projectiles. The reaction products emitted in the angular range (0 ± 2)° relative to the beam direction were analyzed using a magnetic spectrometer and detected by means of a semiconductor ΔE-E telescope. It was found that in all cases the experimentally measured maximum α-particle energy almost amounts to the maximum possible value calculated from the reaction energy balance for a two-body exit channel. A correlation was found between the measured absolute cross section in different target-projectile combinations and the α-particle binding energy in the target nuclei. On the basis of the obtained results a conclusion has been drawn that the α-particles are emitted in the early stage of the reaction
8. Emission of high-energy charged particles at 0° in Ne-induced reactions
International Nuclear Information System (INIS)
Borcea, C.; Gierlik, E.; Kalinin, A.M.; Kalpakchieva, R.; Oganessia, Yu.Ts.; Pawlat, T.; Penionzhkevich, Yu.E.; Ryakhlyuk, A.V.
1982-01-01
Inclusive energy spectra have been measured for light charged particles emitted in the bombardment of 232 Th, 181 Ta, natTi and 12 C targets by 22 Ne ions at 178 MeV and the natTi target by 20 Ne ions at 196 MeV. The reaction products were analysed and detected by means of a ΔE-E telescope placed in the focal plane of a magnetic spectrometer located at an angle of 0 deg with respect to the beam direction. In all the reactions studied light charged particles with an energy close to the respective calculated kinematic limit for a two-body exit channel are produced with relatively great probability. The results obtained make it possible to draw some conclusions about the reaction mechanism involving the emission of light charged particles
9. Thresholds and Q values of nuclear reactions induced by neutrons, protons, deuterons, tritons, 3He ions, alpha particles, and photons
International Nuclear Information System (INIS)
Howerton, R.J.
1981-01-01
The 1977 Wapstra and Bos nuclear mass data tables were used to derive tables for thresholds and Q values of nuclear reactions induced by neutrons, protons, deuterons, tritons, 3 He ions, alpha particles, and photons. The tables are displayed on microfiche included with the report
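Tables of this kind follow directly from tabulated atomic masses. A rough sketch of the calculation is given below; the mass excesses are rounded textbook values for one familiar reaction, not numbers taken from the Wapstra-Bos 1977 evaluation, and the threshold formula is the usual non-relativistic approximation.

```python
# Q-value and lab-frame threshold for a two-body reaction A(a,b)B,
# computed from atomic mass excesses (MeV).  Rounded illustrative values.
MASS_EXCESS = {"n": 8.071, "p": 7.289, "4He": 2.425,
               "14N": 2.863, "17O": -0.809}

def q_value(projectile, target, ejectile, residual):
    d = MASS_EXCESS
    return (d[projectile] + d[target]) - (d[ejectile] + d[residual])

def threshold(projectile, target, ejectile, residual, m_proj, m_targ):
    """Approximate (non-relativistic) lab threshold energy in MeV."""
    q = q_value(projectile, target, ejectile, residual)
    return 0.0 if q >= 0 else -q * (m_proj + m_targ) / m_targ

# 14N(alpha,p)17O: Q is about -1.19 MeV, threshold about 1.53 MeV
print(q_value("4He", "14N", "p", "17O"))
print(threshold("4He", "14N", "p", "17O", m_proj=4.0, m_targ=14.0))
```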
10. Charged particle spectra in oxygen-induced reactions at 14.6 and 60 GeV/Nucleon
Energy Technology Data Exchange (ETDEWEB)
Adamovich, M I; Aggarwal, M M; Arora, R; Alexandrov, Y A; Azimov, S A; Badyal, S K; Basova, E; Bhalla, K B; Bahsin, A; Bhatia, V S; Bomdarenko, R A; Burnett, T H; Cai, X; Chernova, L P; Chernyavski, M M; Dressel, B; Friedlander, E M; Gadzhieva, S I; Ganssauge, E R; Garpman, S; Gerassimov, S G; Gill, A; Grote, J; Gulamov, K G; Gulyamov, V G; Gupta, V K; Hackel, S; Heckman, H H; Jakobsson, B; Judek, B; Katroo, S; Kadyrov, F G; Kallies, H; Karlsson, L; Kaul, G L; Kaur, M; Kharlamov, S P; Kohli, J; Kumar, V; Lal, P; Larionova, V G; Lindstrom, P J; Liu, L S; Lokanathan, S; Lord, J; Lukicheva, N S; Mangotra, L K; Maslennikova, N V; Mitta, I S; Monnand, E; Mookerjee, S; Mueller, C; Nasyrov, S H; Nvtny, V S; Orlova, G I; Otterlund, I; Peresadko, N G; Persson, S; Petrov, N V; Qian, W Y; Raniwala, R; Raniwala, S; Rao, N K; Rhee, J Y; Shaidkhanov, N; Salmanova, N G; Schulz, W; Schussler, F; Shukla, V S; Skelding, D; Soederstroe,
1989-10-01
Multiplicity distributions and pseudo-rapidity distributions of charged particles from oxygen-induced nuclear reactions at 14.6 and 60 GeV/nucleon are presented. The data were taken from the EMU-01 emulsion stacks and compared to simulations from the Lund Monte Carlo Model (FRITIOF).
11. Reaction list for charged-particle-induced nuclear reactions: Z = 1 to Z = 98 (H to Cf), July 1973--September 1974
International Nuclear Information System (INIS)
McGowan, F.K.; Milner, W.T.
1975-01-01
This Reaction List for charged-particle-induced nuclear reactions has been prepared from the journal literature for the period from July 1973 through September 1974. Each published experimental paper is listed under the target nucleus in the nuclear reaction with a brief statement of the type of data in the paper. The nuclear reaction is denoted by A(a,b)B, where the mass of a is greater than or equal to (one nucleon mass). There is no restriction on energy. Nuclear reactions involving mesons in the outgoing channel are not included. Theoretical papers which treat directly with the analysis of nuclear reaction data and results are included in the Reaction List. The cutoff date for literature was September 30, 1974. (U.S.)
12. Photo induced multiple fragmentation of atoms and molecules: Dynamics of Coulombic many-particle systems studied with the COLTRIMS reaction microscope
International Nuclear Information System (INIS)
Czasch, A.; Schmidt, L.Ph.H.; Jahnke, T.; Weber, Th.; Jagutzki, O.; Schoessler, S.; Schoeffler, M.S.; Doerner, R.; Schmidt-Boecking, H.
2005-01-01
Many-particle dynamics in atomic and molecular physics has been investigated by using the COLTRIMS reaction microscope. The COLTRIMS technique visualizes photon and ion induced many-particle fragmentation processes in the eV and milli-eV regime. It reveals the complete momentum pattern in atomic and molecular many-particle reactions comparable to the bubble chamber in nuclear physics
13. Excitation Functions for Charged Particle Induced Reactions in Light Elements at Low Projectile Energies
International Nuclear Information System (INIS)
Lorenzen, J.; Brune, D.
1973-01-01
The present chapter has been formulated with the aim of making it useful in various fields of nuclear applications with emphasis on charged particle activation analysis. Activation analysis of light elements using charged particles has proved to be an important tool in solving various problems in analytical chemistry, e g those associated with metal surfaces. Scientists desiring to evaluate the distribution of light elements in the surface of various matrices using charged particle reactions require accurate data on cross sections in the MeV-region. A knowledge of cross section data and yield-functions is of great interest in many applied fields involving work with charged particles, such as radiological protection and health physics, material research, semiconductor material investigations and corrosion chemistry. The authors therefore decided to collect a limited number of data which find use in these fields. Although the compilation is far from being complete, it is expected to be of assistance in devising measurements of charged particle reactions in Van de Graaff or other low energy accelerators
14. Excitation Functions for Charged Particle Induced Reactions in Light Elements at Low Projectile Energies
Energy Technology Data Exchange (ETDEWEB)
Lorenzen, J; Brune, D
1973-07-01
The present chapter has been formulated with the aim of making it useful in various fields of nuclear applications with emphasis on charged particle activation analysis. Activation analysis of light elements using charged particles has proved to be an important tool in solving various problems in analytical chemistry, e g those associated with metal surfaces. Scientists desiring to evaluate the distribution of light elements in the surface of various matrices using charged particle reactions require accurate data on cross sections in the MeV-region. A knowledge of cross section data and yield-functions is of great interest in many applied fields involving work with charged particles, such as radiological protection and health physics, material research, semiconductor material investigations and corrosion chemistry. The authors therefore decided to collect a limited number of data which find use in these fields. Although the compilation is far from being complete, it is expected to be of assistance in devising measurements of charged particle reactions in Van de Graaff or other low energy accelerators
15. Identification and spectrometry of charged particles produced in reactions induced by 14 MeV neutrons. II
International Nuclear Information System (INIS)
Sellem, C.; Perroud, J.P.; Loude, J.F.
1975-01-01
A counter telescope consisting of gas proportional counters, a thin semiconductor detector and a thick one has been built and used for the study of the angular differential cross sections of (n, charged particles) reactions induced by 14 MeV neutrons. Detection of the α-particles emitted in the neutron production reaction 3 H(d,n) 4 He gives a time reference for the measurement of the time of flight of the charged particles and allows a precise monitoring of the intensity of the neutron beam. High energy protons, deuterons and tritons are identified by their energy losses in the thin semiconductor detector and in the thick one and by their time of flight. Low energy protons, deuterons, tritons and all α-particles stop in the thin semiconductor detector and are identified by their energy losses in this detector and in one gas proportional counter as well as by their time of flight. It is possible to identify and to measure the energy of all charged particles in the energy range of 2 to 15 MeV: a very low background results from the use of the time of flight. (Auth.)
16. Improved single particle potential for transport model simulations of nuclear reactions induced by rare isotope beams
International Nuclear Information System (INIS)
Xu Chang; Li Baoan
2010-01-01
Taking into account more accurately the isospin dependence of nucleon-nucleon interactions in the in-medium many-body force term of the Gogny effective interaction, new expressions for the single-nucleon potential and the symmetry energy are derived. Effects of both the spin (isospin) and the density dependence of nuclear effective interactions on the symmetry potential and the symmetry energy are examined. It is shown that they both play a crucial role in determining the symmetry potential and the symmetry energy at suprasaturation densities. The improved single-nucleon potential will be useful for more accurate simulation of nuclear reactions induced by rare-isotope beams within transport models.
17. Excitation function of alpha-particle-induced reactions on natNi from threshold to 44 MeV
Energy Technology Data Exchange (ETDEWEB)
Uddin, M.S. [Atomic Energy Research Establishment, Tandem Accelerator Facilities, Institute of Nuclear Science and Technology, Savar, Dhaka (Bangladesh); Kim, K.S.; Nadeem, M.; Kim, G.N. [Kyungpook National University, Department of Physics, Buk-gu, Daegu (Korea, Republic of); Sudar, S. [Debrecen University, Institute of Experimental Physics, Debrecen (Hungary)
2017-05-15
Excitation functions of the natNi(α,x)62,63,65Zn, natNi(α,x)56,57Ni and natNi(α,x)56,57,58m+gCo reactions were measured from the respective thresholds to 44 MeV using the stacked-foil activation technique. The tests for the beam characterization are described. The radioactivity was measured using HPGe γ-ray detectors. Theoretical calculations on α-particle-induced reactions on natNi were performed using the nuclear model code TALYS-1.8. A few results are new, the others strengthen the database. Our experimental data were compared with results of nuclear model calculations and used to describe the reaction mechanism. (orig.)
18. Study of the Particle Production in $^{12}$C Induced Heavy Ion Reactions at 86 MeV/N
CERN Multimedia
2002-01-01
The aim of this experiment is to study various characteristics of light and heavy particle production in 12C induced reactions, if possible over the whole unexplored energy region 50-86 MeV/N. In particular we want to investigate how the correlations in the multiparticle events can help us to distinguish between existing models. Two-proton large-angle correlations and correlations between two heavier (Z = 1 or 2) particles are studied with scintillator + NaI and range telescopes, complemented with a 24-telescope scintillator wall for projectile fragments. Thereby we receive information about the reaction plane and the impact parameter in coincidence with the two-particle correlation spectra. Small Δp correlations can also be studied. The inclusive π+ and π- production has been followed far below the nucleon-nucleon threshold. Pions are thereby identified from ΔE-E correlations and the π+ decay in plastic range telescopes. These results are now followed up by π-projectile fragment and π-p correlat...
19. Cross section measurement of alpha particle induced nuclear reactions on natural cadmium up to 52MeV.
Science.gov (United States)
Ditrói, F; Takács, S; Haba, H; Komori, Y; Aikawa, M
2016-12-01
Cross sections of alpha particle induced nuclear reactions have been measured on thin natural cadmium target foils in the energy range from 11 to 51.2 MeV. This work was part of our systematic study on excitation functions of light ion induced nuclear reactions on different target materials. Regarding cross sections, alpha induced reactions have not been investigated in sufficient depth. Some of the produced isotopes are of medical interest, others have applications in research and industry. The radioisotope 117m Sn is a very important theranostic (therapeutic + diagnostic) radioisotope, so special care was taken with the results for that isotope. The well-established stacked foil technique followed by gamma-spectrometry with HPGe gamma spectrometers was used. The target and monitor foils in the stack were commercial high purity metal foils. From the irradiated targets 117m Sn, 113 Sn, 110 Sn, 117m,g In, 116m In, 115m In, 114m In, 113m In, 111 In, 110m,g In, 109m In, 108m,g In, 115g Cd and 111m Cd were identified and their excitation functions were derived. The results were compared with the data of previous measurements from the literature and with the results of the theoretical nuclear reaction model code calculations TALYS 1.8 (TENDL-2015) and EMPIRE 3.2 (Malta). From the cross section curves thick target yields were calculated and compared with the available literature data. Copyright © 2016 Elsevier Ltd. All rights reserved.
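The thick target yields quoted above follow from folding the excitation function with the stopping power of the target material. A generic numerical version is sketched below; the σ(E) and dE/dx grids are invented stand-ins, not the measured cadmium data.

```python
import numpy as np

N_AVOGADRO = 6.022e23

def thick_target_yield(e_MeV, sigma_mb, stop_MeV_cm2_per_g, molar_mass_g):
    """Produced nuclei per incident particle for a beam slowing down across
    the tabulated energy grid; sigma in mb, mass stopping power in MeV cm^2/g."""
    integrand = np.asarray(sigma_mb) * 1e-27 / np.asarray(stop_MeV_cm2_per_g)
    return (N_AVOGADRO / molar_mass_g) * np.trapz(integrand, e_MeV)

# toy excitation function and stopping power (illustration only)
E = np.linspace(15.0, 50.0, 36)                        # MeV
sigma = 40.0 * np.exp(-0.5 * ((E - 32.0) / 6.0) ** 2)  # mb
stopping = 150.0 * (E / 30.0) ** -0.8                  # MeV cm^2/g
print(thick_target_yield(E, sigma, stopping, molar_mass_g=112.4),
      "nuclei per incident alpha")
```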
20. Verification of nuclear data for DT neutron induced charged-particle emission reaction of light nuclei
International Nuclear Information System (INIS)
Kondo, K.; Murata, I.; Ochiai, K.; Kubota, N.; Miyamaru, H.; Takagi, S.; Shido, S.; Konno, C.; Nishitani, T.
2007-01-01
Double-differential cross-section (DDX) for emitted charged particles is necessary to estimate material damage, gas production and nuclear heating in a fusion reactor. Detailed measurements of the cross-sections for beryllium, carbon and fluorine, which are among the composition materials of expected fusion blankets and first walls, were carried out with a charged-particle spectrometer using a pencil-beam DT neutron source. As verification of the cross-sections evaluated in three nuclear libraries (JENDL-3.3, ENDF/B-VI and JEFF-3.1), our measured data were compared with the data evaluated in the libraries. From the comparison, the following problems were pointed out: Beryllium: Remarkable differences in energy and angular distribution for α-particles were observed between the measured data and the libraries. The estimated total cross-section for α-particle production well agreed with the libraries. Carbon: There was a discrepancy of about 20% between JENDL-3.3 and ENDF/B-VI (JEFF-3.1) for α-particle production cross-section, and no DDX for α-particles is given in the libraries. Our obtained total cross-section for α-particle production was rather consistent with ENDF/B-VI (JEFF-3.1), and the value evaluated in JENDL-3.3 seemed too large. Fluorine: The remarkable differences for DDX of protons and α-particles were observed between the obtained result and JENDL-3.3, although detailed DDX was stored only in JENDL. The obtained total cross-sections mostly supported the evaluation of ENDF/B-VI (JEFF-3.1)
1. Excitation functions for alpha-particle-induced reactions with natural antimony
Energy Technology Data Exchange (ETDEWEB)
Singh, N. L.; Shah, D. J.; Mukherjee, S.; Chintalapudi, S. N. [Vadodara, M. S. Univ. of Baroda (India). Fac. of Science. Dept. of Physics
1997-07-01
Stacked-foil activation technique and {gamma}-ray spectroscopy were used for the determination of the excitation functions of the {sup 121}Sb [({alpha}, n); ({alpha}, 2n); ({alpha}, 4n); ({alpha}, p3n); ({alpha}, {alpha}n)] and {sup 123}Sb [({alpha}, 3n); ({alpha}, 4n); ({alpha}, {alpha}3n)] reactions. The excitation functions for the production of {sup 124}I, {sup 123}I, {sup 121}I, {sup 121}Te and {sup 120}Sb were reported up to 50 MeV. The reactions {sup 121}Sb ({alpha}, {alpha}n) + {sup 123}Sb ({alpha}, {alpha}3n) are measured for the first time. Since natural antimony used as the target has two odd-mass stable isotopes of abundances 57.3% ({sup 121}Sb) and 42.7% ({sup 123}Sb), their activation in some cases gives the same product nucleus through different reaction channels but with very different Q-values. In such cases, the individual reaction cross-sections are separated with the help of theoretical cross-sections. The experimental cross-sections were compared with the predictions based on the hybrid model of Blann. The high-energy part of the excitation functions is dominated by the pre-equilibrium reaction mechanism and the initial exciton number n{sub 0} = 4 (4 p 0 h) gives fairly good agreement with the presently measured results.
2. SOS reaction kinetics of bacterial cells induced by ultraviolet radiation and α particles
International Nuclear Information System (INIS)
Bonev, M.; Kolev, S.
2000-01-01
It is the purpose of the work to apply the SOS lux test for detecting α particles, as well as to study the SOS system kinetics. Two strains with plasmid pPLS-1 are used: wild type C600 lux and its isogenic lysogen with α prophage. Irradiation is done on Dacron nuclear filters. The source of α particles is 241Am with a dose rate of 5 Gy/min, and the ultraviolet source is a lamp emitting at a wavelength of 254 nm. The light yield is measured by a setup made up of a scintillometer VA-S-968, a high-voltage supply, and a one-channel analyzer Strahlungsmessgerät 20046. The SOS lux test is based on the recombinant plasmid pPLS-1, which is a derivative of pBR322 where the lux gene is set under the control of an SOS promoter. E. coli recA+ strains containing the construction produce a considerable amount of photons in the visible zone following treatment with agents damaging the DNA of cells. The kinetic curves of the SOS response are obtained after irradiation with α particles and UV rays. DNA damaging agents cause an increase in the initial SOS response rate in the range of smaller doses, and a decrease reaching a block of the response in the high dose range. The light yield of lysogenic cells is lower as compared to non-lysogenic ones. DNA damage caused by α particles is more difficult to repair as compared to pyrimidine dimers. (author)
3. Special features of isomeric ratios in nuclear reactions induced by various projectile particles
Energy Technology Data Exchange (ETDEWEB)
Danagulyan, A. S.; Hovhannisyan, G. H., E-mail: [email protected]; Bakhshiyan, T. M.; Martirosyan, G. V. [Yerevan State University (Armenia)
2016-05-15
Calculations for (p, n) and (α, p3n) reactions were performed with the aid of the TALYS-1.4 code. Reactions in which the mass numbers of target and product nuclei were identical were examined in the range of A = 44–124. Excitation functions were obtained for product nuclei in ground and isomeric states, and isomeric ratios were calculated. The calculated data reflect well the dependence of the isomeric ratios on the projectile type. A comparison of the calculated and experimental data reveals that, for some nuclei in a high-spin state, the calculated data fall greatly short of their experimental counterparts. These discrepancies may be due to the presence of high-spin yrast states and rotational bands in these nuclei. Calculations involving various level-density models included in the TALYS-1.4 code with allowance for the enhancement of collective effects do not remove the discrepancies in the majority of cases.
4. Experimental Study on Impact-Induced Reaction Characteristics of PTFE/Ti Composites Enhanced by W Particles
Directory of Open Access Journals (Sweden)
Yan Li
2017-02-01
Full Text Available Metal/fluoropolymer composites are a category of energetic structural materials that release energy through exothermic chemical reactions initiated under highly dynamic loadings. In this paper, the chemical reaction mechanism of PTFE (polytetrafluoroethylene)/Ti/W composites is investigated through thermal analysis and composition analysis. These composites undergo exothermic reactions at 510 °C to 600 °C, mainly producing TiFx. The tungsten significantly reduces the reaction heat due to its inertness. In addition, the dynamic compression properties and impact-induced reaction behaviors of PTFE/Ti/W composites with different W content prepared by pressing and sintering are studied using a Split Hopkinson Pressure Bar and high speed photography. The results show that both the mechanical strength and the reaction degree are significantly improved with increasing strain rate. Moreover, as W content increases, the mechanical strength is enhanced, but the elasticity/plasticity is decreased. The PTFE/Ti/W composites tend to become more inert with increasing W content, which is reflected by the reduced reaction degree and the increased reaction threshold for impact ignition.
5. Experimental study of high spin states in low-medium mass nuclei by use of charge particle induced reactions
International Nuclear Information System (INIS)
Alenius, N.G.
1975-01-01
For the test of nuclear models the study of the properties of nuclear states of high angular momentum is especially important, because such states can often be given very simple theoretical descriptions. High spin states are easily populated by use of reactions initiated by alpha particles or heavy ions. In this thesis a number of low-medium mass nuclei have been studied, with emphasis on high spin states. (Auth.)
6. Photon induced reactions
International Nuclear Information System (INIS)
Mecking, B.A.
1982-04-01
Various aspects of medium energy nuclear reactions induced by real photons are reviewed. Special emphasis is put on high accuracy experiments that will become possible with the next generation of electron accelerators. (orig.)
7. Program DDCS for nucleon and composite particle DDX of nucleon induced reactions up to tens of MeV
International Nuclear Information System (INIS)
Shen Qingbiao
1994-01-01
DDCS is a program for calculating neutron or proton induced reactions of medium-heavy nuclei in the incident energy range up to 50 MeV, including 5 emission processes. The program is written in FORTRAN-77 on a 486 microcomputer. DDCS is constructed within the framework of the optical model, the generalized master equation of the exciton model, and the evaporation model. The effect of the recoil nucleus is considered in this program. DDCS has been used to calculate reactions of n + 56 Fe, n + 93 Nb, P + 120 Sn, P + 197 Au, and P + 209 Bi. Pretty good results in agreement with the experimental data were obtained.
8. Light particle production in spallation reactions induced by protons of 0.8-2.5 GeV incident kinetic energy
International Nuclear Information System (INIS)
Herbach, Claus-Michael; Enke, Michael; Boehm, Andreas
2002-01-01
Absolute production cross sections have been measured simultaneously for neutrons and light charged particles in 0.8-2.5 GeV proton induced spallation reactions for a series of target nuclei from aluminum up to uranium. The high detection efficiency both for neutral and charged evaporative particles provides an event-wise access to the amount of projectile energy dissipated into nuclear excitation. Various intra nuclear cascade plus evaporation models have been confronted with the experimental data showing large discrepancies for hydrogen and helium production. (author)
9. NACRE II: an update of the NACRE compilation of charged-particle-induced thermonuclear reaction rates for nuclei with mass number A<16
International Nuclear Information System (INIS)
Xu, Y.; Takahashi, K.; Goriely, S.; Arnould, M.; Ohta, M.; Utsunomiya, H.
2013-01-01
An update of the NACRE compilation [3] is presented. This new compilation, referred to as NACRE II, reports thermonuclear reaction rates for 34 charged-particle induced, two-body exoergic reactions on nuclides with mass number A < 16, in the 10^6 ≲ T ⩽ 10^10 K temperature range. Along with the ‘adopted’ rates, their low and high limits are provided. The new rates are available in electronic form as part of the Brussels Library (BRUSLIB) of nuclear data. The NACRE II rates also supersede the previous NACRE rates in the Nuclear Network Generator (NETGEN) for astrophysics. (http://www.astro.ulb.ac.be/databases.html)
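For a non-resonant charged-particle reaction, rates of the kind tabulated in such compilations come from folding the astrophysical S-factor with the Maxwell-Boltzmann distribution and the Coulomb penetrability. A minimal numerical sketch of that integral is given below for a constant S-factor; the physical constants are standard, but the example system and S value are purely illustrative and are not NACRE II numbers.

```python
import numpy as np
from scipy.integrate import quad

AMU_MEV = 931.494        # atomic mass unit in MeV/c^2
C_CM_S = 2.998e10        # speed of light in cm/s
N_A = 6.022e23
KT_MEV_PER_T9 = 0.08617  # kT in MeV at T = 1e9 K
ALPHA_FS = 1.0 / 137.036

def rate_na_sigma_v(z1, z2, a1, a2, s_MeV_barn, t9):
    """N_A<sigma v> in cm^3 s^-1 mol^-1 for a constant S-factor (MeV barn)."""
    mu_c2 = a1 * a2 / (a1 + a2) * AMU_MEV                      # reduced mass, MeV
    e_gamow = 2.0 * (np.pi * ALPHA_FS * z1 * z2) ** 2 * mu_c2  # Gamow energy, MeV
    kt = KT_MEV_PER_T9 * t9
    s_MeV_cm2 = s_MeV_barn * 1e-24
    integrand = lambda e: np.exp(-e / kt - np.sqrt(e_gamow / e))
    integral, _ = quad(integrand, 1e-6, 100.0 * kt, limit=200)
    sigma_v = (C_CM_S * np.sqrt(8.0 / (np.pi * mu_c2))
               * kt ** -1.5 * s_MeV_cm2 * integral)
    return N_A * sigma_v

# illustrative only: a p + 7Li-like system with S = 0.05 MeV b at T9 = 0.05
print(rate_na_sigma_v(1, 3, 1.0, 7.0, 0.05, 0.05))
```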
10. Gridded ionization chamber and dual parameter measurement system for fast neutron-induced charged particles emission reaction
International Nuclear Information System (INIS)
Chen Yingtang; Qi Huiquan; Chen Zemin
1995-01-01
A twin ionization chamber with a common cathode and grids is described for (n,α), (n,p) studies. The chamber is used to determine the energy spectra and angular distribution of the charged particles emitted from the sample positioned on the cathode by dual parameter measurements of coinciding pulses from the anode and cathode of the ionization chamber. A Pu α source is used to test the properties of the chamber; an essentially isotropic angular distribution is observed and the energy resolution is about 2%. This ionization chamber has already been applied to studies of the 40 Ca(n,α) and 64 Zn(n,α) reactions.
11. Permeability change with dissolution and precipitation reaction induced by highly alkaline plume in packed bed with amorphous silica particles
International Nuclear Information System (INIS)
Komatsu, Kyo; Kadowaki, Junichi; Niibori, Yuichi; Mimura, Hitoshi; Usui, Hideo
2008-01-01
reaction in the inner part of particles and/or between the particles. (author)
12. Light charged particle production in fast neutron-induced reactions on carbon (En=40 to 75 MeV) (II). Tritons and alpha particles
International Nuclear Information System (INIS)
Dufauquez, C.; Slypen, I.; Benck, S.; Meulders, J.P.; Corcalciuc, V.
2000-01-01
Double-differential cross sections for fast neutron-induced triton and alpha-particle production on carbon are reported at six incident neutron energies between 40 and 75 MeV. Angular distributions were measured at laboratory angles between 20 deg. and 160 deg. Energy-differential, angle-differential and total cross sections are also reported. Experimental cross sections are compared to existing experimental data and to theoretical model calculations.
13. Correction of Doppler broadening of {gamma}-ray lines induced by particle emission in heavy-ion induced fusion-evaporation reactions
Energy Technology Data Exchange (ETDEWEB)
Nyberg, J; Seweryniak, D; Fahlander, C; Insua-Cao, P [Uppsala Univ. (Sweden). Dept. of Radiation Sciences; Johnson, A; Cederwall, B [Manne Siegbahn Inst. of Physics, Stockholm (Sweden); [Royal Inst. of Tech., Stockholm (Sweden); Adamides, E; Piiparinen, M [National Centre for Scientific Research, Ag. Paraskevi, Attiki (Greece); Atac, A; Norlin, L O [Niels Bohr Inst., Copenhagen (Denmark); Ideguchi, E; Mitarai, S [Kyushu Univ., Fukuoka (Japan). Dept. of Physics; Julin, R; Juutinen, S; Tormanen, S; Virtanen, A [Jyvaeskylae Univ. (Finland). Dept. of Physics; Karczmarczyk, W; Kownacki, J [Warsaw Univ. (Poland); Schubart, R [Hahn-Meitner-Institut Berlin GmbH (Germany)
1992-08-01
The effect of particle emission on the peak shape of {gamma}-ray lines has been investigated using the NORDBALL detector system. By detecting neutrons, protons and {alpha} particles emitted in the {sup 32}S (95 MeV) + {sup 27}Al reaction, the energy and direction of emission of the residual nuclei could be determined and subsequently used for an event-by-event Doppler correction of the detected {gamma} rays. Extensive Monte Carlo simulations were performed to study how the different Doppler phenomena influence the peak shape and in particular which particle detector properties are important for the Doppler correction. (author). 2 refs., 1 tab., 4 figs.
14. Excitation functions of alpha particle induced reactions on {sup nat}Ti up to 40 MeV
Energy Technology Data Exchange (ETDEWEB)
Uddin, M.S., E-mail: [email protected] [Tandem Accelerator Facilities, Institute of Nuclear Science and Technology, Atomic Energy Research Establishment, Savar, Dhaka (Bangladesh); Scholten, B. [Institut für Neurowissenschaften und Medizin, INM-5:Nuklearchemie, Forschungszentrum Jülich, D-52425 Jülich (Germany)
2016-08-01
Excitation functions of the reactions {sup nat}Ti(α,x){sup 48}Cr, {sup nat}Ti(α,x){sup 48}V and {sup nat}Ti(α,x){sup 46,48}Sc were determined by the stacked-foil activation technique up to 40 MeV. The radioactivities produced in the {sup nat}Ti target were measured by γ-ray spectrometry using HPGe detector. The reaction {sup nat}Ti(α,x){sup 51}Cr was used to determine the beam parameters. New experimental values for the above reactions have been obtained. An intercomparison of our data with the available literature values has been done. The cross section results obtained in this work could be useful in defining new monitor reactions, radiation safety and isotope production.
15. Laser induced nuclear reactions
International Nuclear Information System (INIS)
Ledingham, Ken; McCanny, Tom; Graham, Paul; Fang Xiao; Singhal, Ravi; Magill, Joe; Creswell, Alan; Sanderson, David; Allott, Ric; Neely, David; Norreys, Peter; Santala, Marko; Zepf, Matthew; Watts, Ian; Clark, Eugene; Krushelnick, Karl; Tatarakis, Michael; Dangor, Bucker; Machecek, Antonin; Wark, Justin
1998-01-01
Dramatic improvements in laser technology since 1984 have revolutionised high power laser technology. Application of chirped-pulse amplification techniques has resulted in laser intensities in excess of 10^19 W/cm^2. In the mid to late eighties, C. K. Rhodes and K. Boyer discussed the possibility of shining laser light of this intensity onto solid surfaces and to cause nuclear transitions. In particular, irradiation of a uranium target could induce electro- and photofission in the focal region of the laser. In this paper it is shown that μCi of 62Cu can be generated via the (γ,n) reaction by a laser with an intensity of about 10^19 W cm^-2.
16. Particle-gamma and particle-particle correlations in nuclear reactions using Monte Carlo Hauser-Feshbach model
Energy Technology Data Exchange (ETDEWEB)
Kawano, Toshihiko [Los Alamos National Laboratory; Talou, Patrick [Los Alamos National Laboratory; Watanabe, Takehito [Los Alamos National Laboratory; Chadwick, Mark [Los Alamos National Laboratory
2010-01-01
Monte Carlo simulations for particle and {gamma}-ray emissions from an excited nucleus based on the Hauser-Feshbach statistical theory are performed to obtain correlated information between emitted particles and {gamma}-rays. We calculate neutron induced reactions on {sup 51}V to demonstrate unique advantages of the Monte Carlo method, which are the correlated {gamma}-rays in the neutron radiative capture reaction, the neutron and {gamma}-ray correlation, and the particle-particle correlations at higher energies. It is shown that properties in nuclear reactions that are difficult to study with a deterministic method can be obtained with the Monte Carlo simulations.
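The event-by-event information claimed as the main advantage here can be illustrated with a toy cascade sampler: sampling a discrete level scheme produces gamma-gamma coincidence yields directly, something an ensemble-averaged calculation gives only indirectly. The level scheme and branching ratios below are invented for illustration and are not the 51V data of the paper.

```python
import random

# toy level scheme: level energy (MeV) -> list of (daughter level, branching ratio)
LEVELS = {
    3.0: [(1.4, 0.6), (0.8, 0.3), (0.0, 0.1)],
    1.4: [(0.8, 0.7), (0.0, 0.3)],
    0.8: [(0.0, 1.0)],
    0.0: [],
}

def sample_cascade(start=3.0):
    """Return the list of gamma-ray energies emitted in one simulated cascade."""
    gammas, level = [], start
    while LEVELS[level]:
        daughters, weights = zip(*LEVELS[level])
        daughter = random.choices(daughters, weights=weights)[0]
        gammas.append(round(level - daughter, 3))
        level = daughter
    return gammas

events = [sample_cascade() for _ in range(100000)]
# gamma-gamma coincidence yield of the 1.6 MeV and 0.6 MeV transitions,
# available per event in the Monte Carlo approach (expected ~0.6*0.7 = 0.42)
coincidence = sum(1 for g in events if 1.6 in g and 0.6 in g) / len(events)
print("P(1.6 MeV and 0.6 MeV in the same event) =", round(coincidence, 3))
```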
17. Light particle revelation on incomplete fusion reactions
International Nuclear Information System (INIS)
Gillibert, A.
1984-01-01
Incomplete fusion reactions have been studied through light particle emission in the reaction 116 Sn + 16 O at 125 MeV (ALICE facility in Orsay). We measured energy and angular distributions and correlations between any two of these particles (α particles, protons, neutrons), while γ-multiplicity measurements provide fuller information. From the collected data, the following picture can be drawn: - the only fast particles observed are α particles, while protons and neutrons seem to come only from statistical evaporation; - outgoing channels where two α particles are emitted cannot be solely explained by the sequential emission of 8 Be → 2α: about half of the cross section proceeds from statistical evaporation of one α particle. Accordingly, 2αxn channels do not necessarily correspond to high values of the angular momentum in the entrance channel. From the study of the experimental results in the yrast plane, we can assign a large width to the angular momentum distribution [fr]
18. Nucleon induced reactions
International Nuclear Information System (INIS)
Gmuca, S.; Antalik, R.; Kristiak, J.
1988-01-01
The collection contains full texts of 37 contributions; all fall within the INIS Subject Scope. The topics treated include some unsolved problems of nuclear reactions and relevant problems of nuclear structure at low and intermediate energies. (Z.S.)
19. Symmetry, Wigner functions and particle reactions
International Nuclear Information System (INIS)
Chavlejshvili, M.P.
1994-01-01
We consider the great principle of physics - symmetry - and some ideas, connected with it, suggested by a great physicist Eugene Wigner. We will discuss the concept of symmetry and spin, study the problem of separation of kinematics and dynamics in particle reactions. Using Wigner rotation functions (reflecting symmetry properties) in helicity amplitude decomposition and crossing-symmetry between helicity amplitudes (which contains the same Wigner functions) we get convenient general formalism for description of reactions between particles with any masses and spins. We also consider some applications of the formalism. 17 refs., 1 tab
20. Fusion chain reaction - a chain reaction with charged particles
International Nuclear Information System (INIS)
Peres, A.; Shvarts, D.
1975-01-01
When a DT-plasma is compressed to very high density, the particles resulting from nuclear reactions give their energy mostly to D and T ions, by nuclear collisions, rather than to electrons as usual. Fusion can thus proceed as a chain reaction, without the need of thermonuclear temperatures. In this paper, we derive relations for the suprathermal ion population created by a fusion reaction. Numerical integration of these equations shows that a chain reaction can proceed in a cold infinite DT-plasma at densities above 8.4x10^27 ions cm^-3. Seeding the plasma with a small amount of 6Li reduces the critical density to 7.2x10^27 ions cm^-3 (140 000 times the normal solid density). (author)
1. Basic reactions induced by radiation
International Nuclear Information System (INIS)
Charlesby, A.
1980-01-01
This paper summarises some of the basic reactions resulting from exposure to high energy radiation. In the initial stages energy is absorbed, but not necessarily at random, giving radical and ion species which may then react to promote the final chemical change. However, it is possible to intervene at intermediate stages to modify or reduce the radiation effect. Under certain conditions enhanced reactions are also possible. Several expressions are given to calculate radiation yield in terms of energy absorbed. Some analogies between radiation-induced reactions in polymers, and those studied in radiobiology are outlined. (author)
2. Kiss-induced severe anaphylactic reactions
Directory of Open Access Journals (Sweden)
2010-01-01
Full Text Available Introduction. Ingestion is the principal route for food allergens to trigger allergic reaction in atopic persons. However, in some highly sensitive patients severe symptoms may develop upon skin contact and by inhalation. The clinical spectrum ranges from mild facial urticaria and angioedema to life-threatening anaphylactic reactions. Outline of Cases. We describe cases of severe anaphylactic reactions by skin contact, induced by kissing in five children with prior history of severe anaphylaxis caused by food ingestion. These cases were found to have the medical history of IgE mediated food allergy, a very high total and specific serum IgE level and very strong family history of allergy. Conclusion. The presence of tiny particles of food on the kisser's lips was sufficient to trigger an anaphylactic reaction in sensitized children with prior history of severe allergic reaction caused by ingestion of food. Allergic reaction provoked with food allergens by skin contact can be a risk factor for generalized reactions. Therefore, extreme care has to be taken in avoiding kissing allergic children after eating foods to which they are highly allergic. Considering that kissing can be a cause of severe danger for the food allergic patient, such persons should inform their partners about the risk factor for causing their food hypersensitivity.
3. Light charged particle emission in heavy-ion reactions – What have ...
coincidence with gamma rays, fission products, evaporation residues have yielded interesting results which bring out the influence of nuclear structure, nuclear mean field and dynamics on the emission of these particles. Keywords. Light charged particles; heavy-ion induced reactions; particle spectra and angular distributions.
4. Single-particle and collective states in transfer reactions
International Nuclear Information System (INIS)
Lhenry, I.; Suomijaervi, T.; Giai, N. van
1993-01-01
The possibility to excite collective states in transfer reactions induced by heavy ions is studied. Collective states are described within the Random Phase Approximation (RPA) and the collectivity is defined according to the number of configurations contributing to a given state. The particle transfer is described within the Distorted Wave Born Approximation (DWBA). Calculations are performed for two different stripping reactions: 207Pb(20Ne,19Ne)208Pb and 59Co(20Ne,19F)60Ni at 48 MeV/nucleon, for which experimental data are available. The calculation shows that a sizeable fraction of collective strength can be excited in these reactions. The comparison with experiment shows that this parameter-free calculation qualitatively explains the data. (author) 19 refs.; 10 figs.
5. Impact parameter dependent light particle correlations for 40Ar induced reactions on 197Au at E/A = 60 MeV
International Nuclear Information System (INIS)
Kyanowski, A.
1987-01-01
With the help of a multidetector system of 96 plastic detectors, mounted in the forward hemisphere between 3° and 30°, we measured light charged particles to make an off-line event-type selection. The final aim to distinguish different impact parameter domains with the plastic wall could be achieved using the observed multiplicity as the sampling parameter. With the help of a computer simulation based on a participant-spectator model to describe the reaction, the mean observed multiplicity could be established to vary smoothly with the total multiplicity, a variable which is often cited as an indicator for the violence of a reaction. Even if the simulation indicates a broad distribution of the observed multiplicity over the different impact parameters, we could separate the extreme cases of peripheral and central collisions. If the events are selected with the appropriate multiplicity gates, it turns out that peripheral collisions give rise to enhanced correlations, whereas the correlation function is strongly reduced for central collisions. Between these extreme values the correlation reduces smoothly with the impact parameter. The space-time extent of the emitting system is therefore larger for small impact parameters than in peripheral collisions. Supposing that the spatial extent is rather independent of the impact parameter (except for very peripheral collisions) we suggest that the observed variation of the correlation function could indicate a variation of the emission time for light particles rather than a spatial evolution. On the contrary the temperature, determined equally after an event-type selection with the observed multiplicity, shows no variation with the impact parameter. This could indicate that a limiting temperature is reached at a value at which the emission of particles is faster than the temporal development of the temperature. (orig./HSI)
6. Ion-exchange separation of radioiodine and its application to production of {sup 124}I by alpha particle induced reactions on antimony
Energy Technology Data Exchange (ETDEWEB)
Shuza Uddin, Md. [Forschungszentrum Juelich (Germany). Inst. fuer Neurowissenschaften und Medizin, INM-5: Nuklearchemie; Atomic Energy Research Establishment, Inst. of Nuclear Science and Technology, Dhaka (Bangladesh); Qaim, Seyed M.; Spahn, Ingo; Spellerberg, Stefan; Scholten, Bernhard; Coenen, Heinz H. [Forschungszentrum Juelich (Germany). Inst. fuer Neurowissenschaften und Medizin, INM-5: Nuklearchemie; Hermanne, Alex [Vrije Univ. Brussel (Belgium). Cyclotron Lab.; Hossain, Syed Mohammod [Atomic Energy Research Establishment, Inst. of Nuclear Science and Technology, Dhaka (Bangladesh)
2015-07-01
The basic parameters related to radiochemical separation of iodine from tellurium and antimony by anion-exchange chromatography using the resin Amberlyst A26 were studied. The separation yield of {sup 124}I amounted to 96% and the decontamination factor from {sup 121}Te and {sup 122}Sb was > 10{sup 4}. The method was applied to the production of {sup 124}I via the {sup 123}Sb(α, 3n) reaction. In an irradiation of 110 mg of {sup nat}Sb{sub 2}O{sub 3} (thickness ∼0.08 g/cm{sup 2}) with 38 MeV α-particles at 1.2 μA beam current for 4 h, corresponding to the beam energy range of E{sub α} = 37 → 27 MeV, the batch yield of {sup 124}I obtained was 12.42 MBq and the {sup 125}I and {sup 126}I impurities amounted to 3.8% and 0.7%, respectively. The experimental batch yield of {sup 124}I amounted to 80% of the theoretically calculated value but the levels of the radionuclidic impurities were in agreement with the theoretical values. About 96% of the radioiodine was in the form of iodide and the inactive impurities (Te, Sb, Sn) were below the permissible level. Due to the relatively high level of radionuclidic impurity the {sup 124}I produced would possibly be useful only for restricted local consumption or for animal experiments.
7. Excitation functions of alpha particles induced nuclear reactions on natural titanium in the energy range of 10.4–50.2 MeV
International Nuclear Information System (INIS)
Usman, Ahmed Rufai; Khandaker, Mayeen Uddin; Haba, Hiromitsu; Otuka, Naohiko; Murakami, Masashi
2017-01-01
Highlights: • Detailed presentation of new results on experimental cross-sections of natTi(α,x) processes. • Calculations of thick target yields for scandium and other radionuclides via the natTi(α,x) production route. • Comparison with TENDL-2015 library. • Detailed review of previous experimental data. - Abstract: We studied the excitation functions of residual radionuclide productions from α particles bombardment on natural titanium in the energy range of 10.4–50.2 MeV. A well-established stacked-foil activation technique combined with HPGe γ-ray spectrometry was used to measure the excitation functions for the 51,49,48Cr, 48V, 43K, and 43,44m,44g,46g+m,47,48Sc radionuclides. The thick target yields for all assessed radionuclides were also calculated. The obtained experimental data were compared with the earlier experimental ones and also with the evaluated data in the TENDL-2015 library. A reasonable agreement was found between this work and some of the previous ones, while a partial agreement was found with the evaluated data. The present results would further enrich the experimental database and facilitate the understanding of existing discrepancies among the previous measurements. The results would also help to enhance the prediction capability of the nuclear reaction model codes.
8. Excitation functions of alpha particles induced nuclear reactions on natural titanium in the energy range of 10.4–50.2 MeV
Energy Technology Data Exchange (ETDEWEB)
Usman, Ahmed Rufai [Department of Physics, University of Malaya, 50603 Kuala Lumpur (Malaysia); Nishina Center for Accelerator-Based Science, RIKEN, Wako, Saitama 351-0198 (Japan); Department of Physics, Umaru Musa Yar' adua University, Katsina (Nigeria); Khandaker, Mayeen Uddin, E-mail: [email protected] [Department of Physics, University of Malaya, 50603 Kuala Lumpur (Malaysia); Haba, Hiromitsu [Nishina Center for Accelerator-Based Science, RIKEN, Wako, Saitama 351-0198 (Japan); Otuka, Naohiko [Nuclear Data Section, Division of Physical and Chemical Sciences, Department of Nuclear Sciences and Applications, International Atomic Energy Agency, A-1400 Vienna (Austria); Murakami, Masashi [Nishina Center for Accelerator-Based Science, RIKEN, Wako, Saitama 351-0198 (Japan)
2017-05-15
Highlights: • Detailed presentation of new results on experimental cross-sections of {sup nat}Ti(α,x) processes. • Calculations of thick target yields for scandium and other radionuclides via the {sup nat}Ti(α,x) production route. • Comparison with TENDL-2015 library. • Detailed review of previous experimental data. - Abstract: We studied the excitation functions of residual radionuclide productions from α particles bombardment on natural titanium in the energy range of 10.4–50.2 MeV. A well-established stacked-foil activation technique combined with HPGe γ-ray spectrometry was used to measure the excitation functions for the {sup 51,49,48}Cr, {sup 48}V, {sup 43}K, and {sup 43,44m,44g,46g+m,47,48}Sc radionuclides. The thick target yields for all assessed radionuclides were also calculated. The obtained experimental data were compared with the earlier experimental ones and also with the evaluated data in the TENDL-2015 library. A reasonable agreement was found between this work and some of the previous ones, while a partial agreement was found with the evaluated data. The present results would further enrich the experimental database and facilitate the understanding of existing discrepancies among the previous measurements. The results would also help to enhance the prediction capability of the nuclear reaction model codes.
9. The Influence of Particle Charge on Heterogeneous Reaction Rate Coefficients
Science.gov (United States)
Aikin, A. C.; Pesnell, W. D.
2000-01-01
The effects of particle charge on heterogeneous reaction rates are presented. Many atmospheric particles, whether liquid or solid, are charged. This surface charge causes a redistribution of charge within a liquid particle and, as a consequence, a perturbation in the gaseous uptake coefficient. The amount of perturbation is proportional to the external potential and the square of the ratio of the Debye length in the liquid to the particle radius. Previous modeling has shown how surface charge affects the uptake coefficient of charged aerosols. This effect is now included in the heterogeneous reaction rate of an aerosol ensemble. Extension of this analysis to ice particles will be discussed and examples presented.
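Written out as a formula, the scaling stated in this abstract amounts to a first-order correction of the uptake coefficient; the dimensionless constant c is left unspecified here because the abstract does not give it:

```latex
\frac{\Delta\gamma}{\gamma_0} \;\propto\; \Phi_{\mathrm{ext}}\left(\frac{\lambda_D}{a}\right)^{2},
\qquad
\gamma \;\simeq\; \gamma_0\left[1 + c\,\Phi_{\mathrm{ext}}\left(\frac{\lambda_D}{a}\right)^{2}\right],
```

with γ0 the uptake coefficient of the uncharged particle, Φ_ext the external (surface) potential, λ_D the Debye length in the liquid and a the particle radius.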
10. Light charged particle multiplicities in fusion and quasifission reactions
Science.gov (United States)
Kalandarov, Sh. A.; Adamian, G. G.; Antonenko, N. V.; Lacroix, D.; Wieleczko, J. P.
2018-01-01
The light charged particle evaporation from the compound nucleus and from the complex fragments in the reactions 32S+100Mo, 121Sb+27Al, 40Ar+164Dy, and 40Ar+ nat Ag is studied within the dinuclear system model. The possibility to distinguish the reaction products from different reaction mechanisms is discussed.
11. Light charged particle multiplicities in fusion and quasifission reactions
Energy Technology Data Exchange (ETDEWEB)
Kalandarov, Sh.A. [Joint Institute for Nuclear Research, Dubna (Russian Federation); Institute of Nuclear Physics, Tashkent (Uzbekistan); Adamian, G.G. [Joint Institute for Nuclear Research, Dubna (Russian Federation); Antonenko, N.V. [Joint Institute for Nuclear Research, Dubna (Russian Federation); Tomsk Polytechnic University, Mathematical Physics Department, Tomsk (Russian Federation); Lacroix, D. [IN2P3-CNRS, Universite Paris-Sud, Institut de Physique Nucleaire, Orsay (France); Wieleczko, J.P. [GANIL, CEA et IN2P3-CNRS, Caen (France)
2018-01-15
The light charged particle evaporation from the compound nucleus and from the complex fragments in the reactions {sup 32}S + {sup 100}Mo, {sup 121}Sb + {sup 27}Al, {sup 40}Ar + {sup 164}Dy, and {sup 40}Ar + {sup nat}Ag is studied within the dinuclear system model. The possibility to distinguish the reaction products from different reaction mechanisms is discussed. (orig.)
12. Infrared laser-induced chemical reactions
International Nuclear Information System (INIS)
Katayama, Mikio
1978-01-01
The experimental means which clearly distinguishes between infrared ray-induced reactions and thermal reactions was furnished for the first time when an intense monochromatic light source was obtained through the development of the infrared laser. Consequently, infrared laser-induced chemical reactions have started to develop as one field of chemical reaction research. Research on laser-induced chemical reactions has become a new means for the study of chemical reactions since it was highlighted as a promising new technique for isotope separation. Specifically, since success was reported in 235U separation using lasers in 1974, comparison of this method with conventional separation techniques from the economic point of view has been conducted, and it was estimated by some that laser isotope separation is cheaper. This report briefly describes the excitation of oscillation and reaction rate, and introduces the chemical reactions induced by CW laser and TEA CO2 laser. Dependence of reaction yield on laser power, measurement of the absorbed quantity of infrared radiation and the excitation mechanism are explained. Next, isomerizing reactions are reported, and finally, isotope separation is explained. It was found that infrared laser-induced chemical reactions have selectivity for isotopes. Since it is evident that there are many examples different from thermal and photo-chemical reactions, future collection of data is expected. (Wakatsuki, Y.)
13. NNDC evaluated charged particle reaction data library (1975)
Energy Technology Data Exchange (ETDEWEB)
Pearlstein, S
1985-09-01
The US National Nuclear Data Center developed a starter library for charged particle induced nuclear reaction data in a trial ENDF/B format. It was issued in June 1974 and corrected in August 1975. It includes integral cross-section data for 306 nuclides between Z = 21 and 83 for the following reactions in the energy range from 0 to 20 MeV: (p,n); (p,2n); (p,3n); (d,n); (d,2n); (d,3n); (d,p); ({alpha},n); ({alpha},2n); ({alpha},3n); ({alpha},p) and ({alpha},np). The data were calculated following the nuclear systematics developed by J. Lange and H. Muenzel [KFK-767, May 1968]. The library serves to provide unmeasured cross sections and information that usually compares within an order of magnitude with actual data. It also serves as a convenient source for those requiring charged particle data in computerized form. The library contains 38,584 records. The following documentation is a reprint of a report by S. Pearlstein, BNL-19148, May 1974. (author) 6 refs, 12 figs
14. Nuclear Astrophysics and Neutron Induced Reactions: Quasi-Free Reactions and RIBs
International Nuclear Information System (INIS)
Cherubini, S.; Spitaleri, C.; Crucilla, V.; Gulino, M.; La Cognata, M.; Lamia, L.; Pizzone, R. G.; Puglia, S.; Rapisarda, G. G.; Romano, S.; Sergi, M. L.; Coc, A.; Kubono, S.; Binh, D. N.; Hayakawa, S.; Wakabayashi, Y.; Yamaguchi, H.; Burjan, V.; Kroha, V.; De Sereville, N.
2010-01-01
The use of quasi-free reactions in studying nuclear reactions between charged particles of astrophysical interest has received much attention over the last two decades. The Trojan Horse Method is based on this approach and it has been used to study a number of reactions relevant for Nuclear Astrophysics. Recently we applied this method to the study of nuclear reactions that involve radioactive species, namely to the study of the 18F + p → 15O + α process at temperatures corresponding to the energies available in the classical novae scenario. Quasi-free reactions can also be exploited to study processes induced by neutrons. This technique is particularly interesting when applied to reactions induced by neutrons on unstable short-lived nuclei. Such processes are very important in the nucleosynthesis of elements in the s- and r-process scenarios and this technique can give hints for solving key questions in nuclear astrophysics where direct measurements are practically impossible.
15. Particle correlations in high-multiplicity reactions
International Nuclear Information System (INIS)
Hayot, Fernand.
1976-01-01
A comprehensive review of the results obtained in the study of short range correlations in high-multiplicity events is presented: introduction of the fundamental short-range order hypothesis, introduction of clusters in nondiffractive events (only the production of identical, independent, and neutral clusters was considered); search for short range dynamical effects between particles coming from the decay of a same cluster by studying two-particle rapidity correlations in inclusive and semi-inclusive experiments; study of transverse momentum correlations [fr
16. Integral cross-section measurements for investigating the emission of complex particles in 14 MeV neutron-induced nuclear reactions
International Nuclear Information System (INIS)
Qaim, S.M.
1981-01-01
Some of the off-line techniques used for the determination of integral cross-section data are reviewed and, as a critical check, some typical data sets are compared. The systematic trends reported in the cross-section data for (n,d), (n,t), (n,3He) and (n,α) reactions are discussed. A brief discussion of the possible reaction mechanisms is given. Some of the applications of the data are outlined. (author)
17. Progress in microscopic direct reaction modeling of nucleon induced reactions
Energy Technology Data Exchange (ETDEWEB)
Dupuis, M.; Bauge, E.; Hilaire, S.; Lechaftois, F.; Peru, S.; Pillet, N.; Robin, C. [CEA, DAM, DIF, Arpajon (France)
2015-12-15
A microscopic nuclear reaction model is applied to neutron elastic and direct inelastic scatterings, and pre-equilibrium reaction. The JLM folding model is used with nuclear structure information calculated within the quasi-particle random phase approximation implemented with the Gogny D1S interaction. The folding model for direct inelastic scattering is extended to include rearrangement corrections stemming from both isoscalar and isovector density variations occurring during a transition. The quality of the predicted (n,n), (n,n{sup '}), (n,xn) and (n,n{sup '}γ) cross sections, as well as the generality of the present microscopic approach, shows that it is a powerful tool that can help improving nuclear reactions data quality. Short- and long-term perspectives are drawn to extend the present approach to more systems, to include missing reactions mechanisms, and to consistently treat both structure and reaction problems. (orig.)
18. Monte Carlo simulation of particle-induced bit upsets
Science.gov (United States)
Wrobel, Frédéric; Touboul, Antoine; Vaillé, Jean-Roch; Boch, Jérôme; Saigné, Frédéric
2017-09-01
We investigate the issue of radiation-induced failures in electronic devices by developing a Monte Carlo tool called MC-Oracle. It is able to transport the particles in the device, to calculate the energy deposited in the sensitive region of the device and to calculate the transient current induced by the primary particle and the secondary particles produced during nuclear reactions. We compare our simulation results with SRAM experiments irradiated with neutrons, protons and ions. The agreement is very good and shows that it is possible to predict the soft error rate (SER) for a given device in a given environment.
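Downstream of the transport step described here, the upset decision in tools of this kind typically reduces to comparing the charge collected in the sensitive volume with the critical charge of the memory cell. A schematic version follows: the 3.6 eV per electron-hole pair is the usual value for silicon, while the critical charge and collection efficiency are placeholders rather than MC-Oracle parameters.

```python
E_PAIR_SI_EV = 3.6          # mean energy per electron-hole pair in silicon
Q_ELEMENTARY_FC = 1.602e-4  # elementary charge in femtocoulombs

def collected_charge_fC(edep_MeV, collection_eff=1.0):
    """Charge collected at the struck node for a given deposited energy."""
    pairs = edep_MeV * 1.0e6 / E_PAIR_SI_EV
    return pairs * Q_ELEMENTARY_FC * collection_eff

def is_upset(edep_MeV, q_crit_fC=2.0, collection_eff=0.8):
    """Flag a bit upset when the collected charge exceeds the critical charge."""
    return collected_charge_fC(edep_MeV, collection_eff) > q_crit_fC

# a 0.1 MeV deposition collects ~4.4 fC at full efficiency -> upset in this toy cell
print(collected_charge_fC(0.1), is_upset(0.1))
```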
19. Monte Carlo simulation of particle-induced bit upsets
Directory of Open Access Journals (Sweden)
Wrobel Frédéric
2017-01-01
Full Text Available We investigate the issue of radiation-induced failures in electronic devices by developing a Monte Carlo tool called MC-Oracle. It is able to transport the particles in the device, to calculate the energy deposited in the sensitive region of the device and to calculate the transient current induced by the primary particle and the secondary particles produced during nuclear reactions. We compare our simulation results with SRAM experiments irradiated with neutrons, protons and ions. The agreement is very good and shows that it is possible to predict the soft error rate (SER) for a given device in a given environment.
20. Nuclear data needs in nuclear astrophysics: Charged-particle reactions
International Nuclear Information System (INIS)
Smith, Michael S.
2001-01-01
Progress in understanding a diverse range of astrophysical phenomena - such as the Big Bang, the Sun, the evolution of stars, and stellar explosions - can be significantly aided by improved compilation, evaluation, and dissemination of charged-particle nuclear reaction data. A summary of the charged-particle reaction data needs in these and other astrophysical scenarios is presented, along with recommended future nuclear data projects. (author)
1. Reference Cross Sections for Charged-particle Monitor Reactions
Czech Academy of Sciences Publication Activity Database
Hermanne, A.; Ignatyuk, A. V.; Capote, R.; Carlson, B. V.; Engle, J. W.; Kellett, M. A.; Kibédi, T.; Kim, G.; Kondev, F. G.; Hussain, M.; Lebeda, Ondřej; Luca, A.; Nagai, Y.; Naik, H.; Nichols, A. L.; Nortier, F. M.; Suryanarayana, S. V.; Takacs, S.; Tarkanyi, F. T.; Verpelli, M.
2018-01-01
Roč. 148, SI (2018), s. 338-382 ISSN 0090-3752 Institutional support: RVO:61389005 Keywords : deuteron induced reactions * proton induced reactions * cross sections Subject RIV: BG - Nuclear, Atomic and Molecular Physics, Colliders OBOR OECD: Nuclear physics Impact factor: 1.146, year: 2016
2. CW-Laser-Induced Solid-State Reactions in Mixed Micron-Sized Particles of Silicon Monoxide and Titanium Monoxide: Nano-Structured Composite with Visible Light Absorption
Czech Academy of Sciences Publication Activity Database
Křenek, T.; Tesař, J.; Kupčík, Jaroslav; Netrvalová, M.; Pola, M.; Jandová, Věra; Pokorná, Dana; Cuřínová, Petra; Bezdička, Petr; Pola, Josef
2017-01-01
Roč. 27, č. 6 (2017), s. 1640-1648 ISSN 1574-1443 Institutional support: RVO:61388980 ; RVO:67985858 Keywords : Cw CO2 laser heating * IR laser imaging * Silicon monoxide * Solid state redox reactions * Ti/Si/O composite * Titanium monoxide Subject RIV: CA - Inorganic Chemistry; CI - Industrial Chemistry, Chemical Engineering (UCHP-M) OBOR OECD: Inorganic and nuclear chemistry; Chemical process engineering (UCHP-M) Impact factor: 1.577, year: 2016
3. Program BETA for simulation of particle decays and reactions
International Nuclear Information System (INIS)
Takhtamyshev, G.G.; Merkulova, T.A.
1997-01-01
Program BETA is designed for the simulation of particle decays and reactions. The program also performs integration over the phase space, and the decay rate or the reaction cross section is calculated as a result of this integration. In the simulation process the adaptive random number generator SMART may be used, which is found to be useful for some difficult cases.
4. Backward particles in 4Hep reactions
International Nuclear Information System (INIS)
Glagolev, V.V.; Lebedev, R.M.; Pestova, G.D.; Shimansky, S.S.; Khairetdinov, K.U.; Sobczak, T.; Stepaniak, J.
1988-08-01
The invariant differential cross sections were investigated as functions of different backward particle momenta in 4He-p interactions. Two data samples were used at the 4He incident momenta of 8.6 GeV/c and 13.6 GeV/c. The data were obtained by means of a 1-m hydrogen bubble chamber at the Joint Institute for Nuclear Research, Dubna. The spectra for nuclei showed a monotonic, exponentially falling trend at both momenta. Some irregularities were observed in the nucleon and π+ spectra and these were especially visible at the lower beam momentum. The experimental data were satisfactorily reproduced by taking into account the Fermi motion and intermediate Δ-isobar states. (author). 7 figs., 2 tabs., 22 refs
5. Unimolecular and collisionally induced ion reactions
International Nuclear Information System (INIS)
Beynon, J.H.; Boyd, R.K.
1978-01-01
The subject is reviewed under the following headings: introduction (mass spectroscopy and the study of fragmentation reactions of gaseous positive ions); techniques and methods (ion sources, detection systems, analysis of ions, data reduction); collision-induced reactions of ions and unimolecular fragmentations of metastable ions; applications (ion structure, energetic measurements, analytical applications, other applications). 305 references. (U.K.)
6. Parity violation in neutron induced reactions
International Nuclear Information System (INIS)
Gudkov, V.P.
1991-06-01
The theory of parity violation in neutron induced reactions is discussed. Special attention is paid to the energy dependence and enhancement factors for the various types of nuclear reactions and the information which might be obtained from P-violating effects in nuclei. (author)
7. X particle effect for 6Li reaction rates calculations
International Nuclear Information System (INIS)
Kocak, G.; Balantekin, A. B.
2009-01-01
The inferred primordial 6Li-7Li abundances are different from the standard big bang nucleosynthesis results: 6Li is 1000 times larger and 7Li is 3 times smaller than the big bang prediction. In big bang nucleosynthesis, negatively charged massive X particles are a possible solution to this primordial Li abundance problem [1]. In this study, we consider only the X particle effect on nuclear reactions to obtain S-factors and reaction rates for Li. All S-factors are calculated within the Optical Model framework for the d(α,γ)6Li system. We show the enhancement effect of the massive negatively charged X particle on the 6Li reaction rate. (author)
8. NRABASE 2.0. Charged-particle nuclear reaction data for ion beam analysis
International Nuclear Information System (INIS)
Gurbich, A.F.
1997-01-01
For 30 targets between H-1 and Ag-109, differential cross sections for reactions induced by protons, deuterons, He-3 and alpha particles are given in tabular and graphical form. The data were compiled from original experimental references. The database was developed under a research contract with the IAEA Physics Section and is available on diskette from the IAEA Nuclear Data Section. (author)
9. Aerosol nucleation induced by a high energy particle beam
DEFF Research Database (Denmark)
Enghoff, Martin Andreas Bødker; Pedersen, Jens Olaf Pepke; Uggerhøj, Ulrik I.
2011-01-01
We have studied sulfuric acid aerosol nucleation in an atmospheric pressure reaction chamber using a 580 MeV electron beam to ionize the volume of the reaction chamber. We find a clear contribution from ion-induced nucleation and consider this to be the first unambiguous observation of the ion-effect on aerosol nucleation using a particle beam under conditions that resemble the Earth's atmosphere. By comparison with ionization using a gamma source we further show that the nature of the ionizing particles is not important for the ion-induced component of the nucleation. This implies that inexpensive ionization sources - as opposed to expensive accelerator beams - can be used for investigations of ion-induced nucleation.
10. Total cross-sections for reactions of high energy particles (including elastic, topological, inclusive and exclusive reactions). Subvol. b
International Nuclear Information System (INIS)
Schopper, H.; Moorhead, W.G.; Morrison, D.R.O.
1988-01-01
The aim of this report is to present a compilation of cross-sections (i.e. reaction rates) of elementary particles at high energy. The data are presented in the form of tables, plots and some fits, which should be easy for the reader to use and may enable him to estimate cross-sections for presently unmeasured energies. We have analyzed all the data published in the major Journals and Reviews for momenta of the incoming particles larger than ≅ 50 MeV/c, since the early days of elementary particle physics and, for each reaction, we have selected the best cross-section data available. We have restricted our attention to integrated cross-sections, such as total cross-sections, exclusive and inclusive cross-sections etc., at various incident beam energies. We have disregarded data affected by geometrical and/or kinematical cuts which would make them not directly comparable to other data at different energies. Also, in the case of exclusive reactions, we have left out data where not all of the particles in the final state were unambiguously identified. This work contains reactions induced by neutrinos, gammas, charged pions, kaons, nucleons, antinucleons and hyperons. (orig./HSI)
11. Particle-induced thermonuclear fusion
International Nuclear Information System (INIS)
Salisbury, W.W.
1980-01-01
A nuclear fusion process for igniting a nuclear fusion pellet in a manner similar to that proposed for laser beams uses an array of pulsed high energy combined particle beams, focused to bombard the pellet for isentropically compressing it to a Fermi-degenerate state by thermal blow-off and balanced beam momentum transfer. (author)
12. Neutron-Induced Charged Particle Studies at LANSCE
Science.gov (United States)
Lee, Hye Young; Haight, Robert C.
2014-09-01
Direct measurements on neutron-induced charged particle reactions are of interest for nuclear astrophysics and applied nuclear energy. LANSCE (Los Alamos Neutron Science Center) produces neutrons with energies from thermal to several hundred MeV. There has been an effort at LANSCE to upgrade the neutron-induced charged particle detection technique, which follows on (n,z) measurements made previously here and will have improved capabilities including larger solid angles, higher efficiency, and better signal to background ratios. For studying cross sections of low-energy neutron induced alpha reactions, a Frisch-gridded ionization chamber is designed with segmented anodes for improving the signal-to-noise ratio near reaction thresholds. Since double-differential cross sections of (n,p) and (n,α) reactions up to tens of MeV provide important information on deducing nuclear level density, the ionization chamber will be coupled with silicon strip detectors (DSSD) in order to stop energetic charged particles. In this paper, we will present the status of this development including the progress on detector design, calibrations and Monte Carlo simulations. This work is funded by the US Department of Energy - Los Alamos National Security, LLC under Contract DE-AC52-06NA25396.
13. Flow induced crystallisation of penetrable particles
Science.gov (United States)
2018-03-01
For a system of Brownian particles interacting via a soft exponential potential we investigate the interaction between equilibrium crystallisation and spatially varying shear flow. For thermodynamic state points within the liquid part of the phase diagram, but close to the crystallisation phase boundary, we observe that imposing a Poiseuille flow can induce nonequilibrium crystalline ordering in regions of low shear gradient. The physical mechanism responsible for this phenomenon is shear-induced particle migration, which causes particles to drift preferentially towards the center of the flow channel, thus increasing the local density in the channel center. The method employed is classical dynamical density functional theory.
14. α-Particle induced reactions on rhodium
After accounting for the energy degradation in the aluminum foils, the energy of the ... with a 152Eu standard source obtained from the Radio-chemistry Laboratory at VECC, Kolkata. The residual nuclei were ... tainty in the spectroscopic data of the standard source and the statistical errors in the counts. No corrections were ...
15. Charged particle-induced nuclear fission reactions
The nuclear fission phenomenon continues to be an enigma, even after nearly 75 years of its discovery. Considerable progress has been made towards understanding the fission process. Both light projectiles and heavy ions have been employed to investigate nuclear fission. An extensive database of the properties of ...
16. Study of α-particle multiplicity in 16O+196Pt fusion-fission reaction
International Nuclear Information System (INIS)
Kapoor, K.; Kumar, A.; Bansal, N.
2016-01-01
The study of the dynamics of fusion-fission reactions is one of the interesting parts of heavy-ion-induced nuclear reactions. Extraction of fission time scales using different probes is of central importance for understanding the dynamics of the fusion-fission process. In the past, extensive theoretical and experimental efforts have been made to understand the various aspects of heavy-ion-induced fusion-fission reactions. Compelling evidence has been obtained from earlier studies that the fission decay of hot nuclei is a protracted process, i.e. slowed down relative to the expectations of the standard statistical model, and large dynamical delays are required due to this hindrance. Nuclear dissipation is assumed to be responsible for this delay, and more light particles are expected to be emitted during the fission process. Neutron multiplicity measurements have been performed for 16,18 O+ 194,198 Pt populating the compound nuclei 210,212,214,216 Rn, and a fission delay due to nuclear viscosity was observed. In order to have a complete understanding of the dynamics of the 212 Rn nucleus, we measured the charged particle multiplicity for the 16 O+ 196 Pt system. The study of charged particles gives more information about the emitter than neutrons do, as charged particles face the Coulomb barrier and are a more sensitive probe for understanding the dynamics of fusion-fission reactions. In the present work, we report some preliminary results of the charged particle multiplicity
17. Induced isospin mixing in direct nuclear reactions
International Nuclear Information System (INIS)
Lenske, H.
1979-07-01
The effect of charge-dependent interactions on nuclear reactions is investigated. First, a survey is given of the most important results concerning the charge dependence of the nucleon-nucleon interaction. The isospin symmetry and invariance principles are discussed. Violations of the isospin symmetry occurring in direct nuclear reactions are analysed using the coupled-channel theory, the folding model and microscopic descriptions. Finally, induced isospin mixing in isospin-forbidden direct reactions is considered using the example of the inelastic scattering of deuterons on 12 C. (KBE)
18. Reference Cross Sections for Charged-particle Monitor Reactions
Science.gov (United States)
Hermanne, A.; Ignatyuk, A. V.; Capote, R.; Carlson, B. V.; Engle, J. W.; Kellett, M. A.; Kibédi, T.; Kim, G.; Kondev, F. G.; Hussain, M.; Lebeda, O.; Luca, A.; Nagai, Y.; Naik, H.; Nichols, A. L.; Nortier, F. M.; Suryanarayana, S. V.; Takács, S.; Tárkányi, F. T.; Verpelli, M.
2018-02-01
Evaluated cross sections of beam-monitor reactions are expected to become the de-facto standard for cross-section measurements that are performed over a very broad energy range in accelerators in order to produce particular radionuclides for industrial and medical applications. The requirements for such data need to be addressed in a timely manner, and therefore an IAEA coordinated research project was launched in December 2012 to establish or improve the nuclear data required to characterise charged-particle monitor reactions. An international team was assembled to recommend more accurate cross-section data over a wide range of targets and projectiles, undertaken in conjunction with a limited number of measurements and more extensive evaluations of the decay data of specific radionuclides. Least-square evaluations of monitor-reaction cross sections including uncertainty quantification have been undertaken for charged-particle beams of protons, deuterons, 3He- and 4He-particles. Recommended beam monitor reaction data with their uncertainties are available at the IAEA-NDS medical portal http://www-nds.iaea.org/medical/monitor_reactions.html.
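As an illustrative sketch of how such recommended monitor cross sections are used in practice (the function name and all numerical values below are hypothetical, not taken from the IAEA evaluation), the activity induced in a monitor foil is converted into a beam intensity through the standard thin-target activation relation A = I · n · σ · (1 − e^(−λ·t_irr)):

```python
import math

def beam_intensity_from_monitor(activity_bq, sigma_barn, half_life_s,
                                t_irr_s, atoms_per_cm2):
    """Infer the beam intensity (projectiles per second) from the
    end-of-bombardment activity of a monitor foil, using the standard
    thin-target activation relation A = I * n * sigma * (1 - exp(-lambda*t))."""
    lam = math.log(2.0) / half_life_s          # decay constant (1/s)
    sigma_cm2 = sigma_barn * 1.0e-24           # barn -> cm^2
    saturation = 1.0 - math.exp(-lam * t_irr_s)
    return activity_bq / (atoms_per_cm2 * sigma_cm2 * saturation)

# Hypothetical example: a natCu monitor foil irradiated for 30 minutes.
intensity = beam_intensity_from_monitor(
    activity_bq=5.0e4,        # measured activity of the monitor product (Bq)
    sigma_barn=0.05,          # assumed recommended cross section at this energy
    half_life_s=9.7 * 3600,   # assumed half-life of the monitor product
    t_irr_s=1800.0,
    atoms_per_cm2=2.0e20)     # areal density of target atoms in the foil
print(f"Inferred beam intensity: {intensity:.3e} particles/s")
```

Dividing the particle rate by the projectile charge state times the elementary charge would give the electrical beam current; in a real measurement the cross section must also be averaged over the energy loss across the foil.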
19. Distorted wave method in reactions with composite particles
International Nuclear Information System (INIS)
Zelenskaya, N.S.; Teplov, I.B.
1980-01-01
The work deals with the distorted wave method with a finite radius of interaction (DWBAFR) as applied to the quantitative analysis of direct nuclear reactions with composite particles (including heavy ions), considering reaction mechanisms other than the cluster stripping mechanism, in particular the exchange processes. The accurate equations of the distorted-wave method in the three-body problem and the general formula for calculating differential cross-sections of arbitrary binary reactions by DWBAFR are presented. Accurate and approximate methods allowing for a finite interaction radius are discussed. Two main versions of the exact treatment of recoil effects are analysed: separation of variables in the wave functions of relative motion of the particles and in the interaction potentials, and separation of variables in the distorted waves. A characterisation is given of the known computer programs that take recoil effects into account approximately and exactly for direct and exchange processes [ru]
20. Dual neutral particle induced transmutation in CINDER2008
Energy Technology Data Exchange (ETDEWEB)
Martin, W.J., E-mail: [email protected] [Sandia National Laboratories, Albuquerque, NM 87185 (United States); University of New Mexico, Albuquerque, NM 87131 (United States); Oliveira, C.R.E. de; Hecht, A.A. [University of New Mexico, Albuquerque, NM 87131 (United States)
2014-12-11
Although nuclear transmutation methods for fission have existed for decades, the focus has been on neutron-induced reactions. Recent novel concepts have sought to use both neutrons and photons for purposes such as active interrogation of cargo to detect the smuggling of highly enriched uranium, a concept that would require modeling the transmutation caused by both incident particles. As photonuclear transmutation has yet to be modeled alongside neutron-induced transmutation in a production code, new methods need to be developed. The CINDER2008 nuclear transmutation code from Los Alamos National Laboratory is extended from neutron applications to dual neutral particle applications, allowing both neutron- and photon-induced reactions for this modeling with a focus on fission. Following standard reaction modeling, the induced fission reaction is understood as a two-part reaction, with an entrance channel to the excited compound nucleus, and an exit channel from the excited compound nucleus to the fission fragmentation. Because photofission yield data—the exit channel from the compound nucleus—are sparse, neutron fission yield data are used in this work. With a different compound nucleus and excitation, the translation to the excited compound state is modified, as appropriate. A verification and validation of these methods and data has been performed. This has shown that the translation of neutron-induced fission product yield sets, and their use in photonuclear applications, is appropriate, and that the code has been extended correctly. - Highlights: • The CINDER2008 transmutation code was modified to include photon-induced transmutation tracking. • A photonuclear interaction library was created to allow CINDER2008 to track photonuclear interactions. • Photofission product yield data sets were created using fission physics similarities with neutron-induced fission.
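A minimal sketch of the dual-channel bookkeeping this implies, for a single nuclide with no production terms (a generic illustration under assumed numbers, not the CINDER2008 algorithm or its data), is the depletion equation dN/dt = −(λ + σ_n·φ_n + σ_γ·φ_γ)·N:

```python
import math

def atoms_remaining(n0, half_life_s, sigma_n_barn, phi_n, sigma_g_barn, phi_g, t_s):
    """Single-nuclide depletion under radioactive decay plus neutron- and
    photon-induced removal (no production terms):
    dN/dt = -(lambda + sigma_n*phi_n + sigma_g*phi_g) * N."""
    lam = 0.0 if half_life_s == math.inf else math.log(2.0) / half_life_s
    removal = lam + sigma_n_barn * 1e-24 * phi_n + sigma_g_barn * 1e-24 * phi_g  # 1/s
    return n0 * math.exp(-removal * t_s)

# Hypothetical numbers: a stable nuclide, 1 b neutron cross section in a
# 1e14 n/cm^2/s flux and 0.1 b photonuclear cross section in a 1e13 photons/cm^2/s flux.
n_left = atoms_remaining(1.0e20, math.inf, 1.0, 1.0e14, 0.1, 1.0e13, t_s=30 * 24 * 3600)
print(f"{n_left:.4e} atoms remain after 30 days of irradiation")
```

A production code couples many such equations (including fission-product yields for the entrance channel actually populated), but the removal-rate structure per nuclide is of this form.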
1. Reactions and mass spectra of complex particles using Aerosol CIMS
Science.gov (United States)
Hearn, John D.; Smith, Geoffrey D.
2006-12-01
Aerosol chemical ionization mass spectrometry (CIMS) is used both on- and off-line for the analysis of complex laboratory-generated and ambient particles. One of the primary advantages of Aerosol CIMS is the low degree of ion fragmentation, making this technique well suited for investigating the reactivity of complex particles. To demonstrate the usefulness of this "soft" ionization, particles generated from meat cooking were reacted with ozone and the composition was monitored as a function of reaction time. Two distinct kinetic regimes were observed with most of the oleic acid in these particles reacting quickly but with 30% appearing to be trapped in the complex mixture. Additionally, detection limits are measured to be sufficiently low (100-200 ng/m3) to detect some of the more abundant constituents in ambient particles, including sulfate, which is measured in real-time at 1.2 µg/m3. To better characterize complex aerosols from a variety of sources, a novel off-line collection method was also developed in which non-volatile and semi-volatile organics are desorbed from particles and concentrated in a cold U-tube. Desorption from the U-tube followed by analysis with Aerosol CIMS revealed significant amounts of nicotine in cigarette smoke and levoglucosan in oak and pine smoke, suggesting that this may be a useful technique for monitoring particle tracer species. Additionally, secondary organic aerosol formed from the reaction of ozone with R-limonene and volatile organics from orange peel were analyzed off-line showing large molecular weight products (m/z > 300 amu) that may indicate the formation of oligomers. Finally, mass spectra of ambient aerosol collected offline reveal a complex mixture of what appears to be highly processed organics, some of which may contain nitrogen.
2. Nucleon and composite-particle production in spallation reactions studied with the multi-purpose detector NESSI
International Nuclear Information System (INIS)
Herbach, C.M.; Hilscher, D.; Jahnke, U.; Tishchenko, V.G.; Galin, J.; Lott, B.; Letourneau, A.; Peghaire, A.; Filges, D.; Goldenbaum, F.; Nuenighoff, K.; Schaal, H.; Sterzenbach, G.; Wohlmuther, M.; Pienkowski, L.; Kostecke, D.; Schroeder, W.U.; Toke, J.
2003-01-01
NESSI, a 4π-detector for neutrons and charged particles, was used in studies of proton-induced spallation reactions at the COSY facility. Due to the high detection efficiency of NESSI for particles evaporated from excited nuclei, measured particle multiplicities provide event-by-event information on the nuclear excitation energy. Data obtained for proton-induced reactions on thin targets ranging from Al to U and proton energies from 0.8 to 2.5 GeV are compared with model predictions. (orig.)
3. Development of a semiconductor counter telescope with low background for the investigations of charged particles produced in reactions induced by neutrons; Realisation et mise au point d'un telescope a semi-conducteurs et a faible bruit-de-fond pour l'etude des reactions neutronsparticules chargees
Energy Technology Data Exchange (ETDEWEB)
Helleboid, J M [Commissariat a l' Energie Atomique, Grenoble (France). Centre d' Etudes Nucleaires
1967-08-15
A ΔE-E counter telescope for charged particles (p, d, t) produced in reactions with neutrons has been constructed. The semiconductor counter telescope method allows the investigation of the two- and three-body reactions {sup 6}Li(n,p), D(n,np)n induced by 14 MeV neutrons. By using a coincidence of associated alpha particle pulses with those ({delta}E,E) in the telescope, the background is considerably reduced for all angles outside the coincidence cone, i.e. larger than 15 deg. (L). For forward angles, the same telescope ({delta}E{sub 2}/E) plus a thin semiconductor ({delta}E{sub 1}) allows keeping a low background. The multiparameter analysing method ({delta}E{sub 1}, {delta}E{sub 2}, E) based on the experimental range-energy data gives a linearity, an efficiency and an identifying power which are satisfactory. The identification is performed in deferred time on an IBM 7044 computer. (author) [French] On a realise un spectrometre de particules chargees (p, d, t) avec identification des particules, pour l'etude des reactions neutrons-particules chargees. La methode du telescope a semi-conducteurs permet d'effectuer l'etude de reactions a deux corps [{sup 6}Li(n,p)] et de reactions a trois corps [D(n,pn)n] a 14 MeV. L'utilisation de la particule alpha associee en coincidence avec les impulsions ({delta}E,E) du telescope permet d'obtenir un bruit-de-fond tres faible pour tous les angles situes en dehors du cone de coincidence, c'est-a-dire superieurs a {approx_equal} 15 deg. (L). Pour les angles avant, le meme telescope ({delta}E{sub 2}/E) plus un semi-conducteur mince ({delta}E{sub 1}) permet de conserver un faible bruit-de-fond. La methode d'analyse multiparametrique ({delta}E{sub 1}, {delta}E{sub 2}, E) a partir des donnees experimentales parcours-energie, donne une linearite, une efficacite et un pouvoir d'identification satisfaisants. L'identification est effectuee en temps differe sur un calculateur numerique IBM 7044. (auteur)
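For context, the ΔE-E identification used by such telescopes rests on a textbook approximation to the Bethe stopping power (a generic relation, not a result of this report): since dE/dx ∝ m z²/E for a non-relativistic ion, a thin transmission detector of thickness Δx gives

```latex
% Thin-detector approximation (logarithmic term of the Bethe formula neglected)
\Delta E \cdot E \;\approx\; k\, m\, z^{2}\, \Delta x ,
```

so protons, deuterons and tritons fall on separate hyperbola-like bands in the (ΔE, E) plane.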
International Nuclear Information System (INIS)
Zoepfl, F.J.
1983-01-01
5. Measurement of double differential cross sections for light charged particles production in neutron induced reaction at 62.7 MeV on lead target; Mesures des sections efficaces doublement differentielles de production de particules chargees legeres lors de reactions induites par neutrons de 62.7 MeV sur cible de plomb
Energy Technology Data Exchange (ETDEWEB)
Kerveno, M
2000-09-27
In order to develop new options for nuclear waste management, studies are being carried out on the development of hybrid systems (a sub-critical reactor driven by an accelerator). This thesis work takes place more precisely in the framework of the nuclear data needed for hybrid systems development. Increasing the upper energy limit (from 20 to 150 MeV) of the data bases supposes that theoretical codes have sufficient predictive power in this energy range. Thus it is necessary to measure new cross sections to constrain these codes. The experiment, performed at the Louvain-la-Neuve cyclotron, aims to determine the double differential cross sections for light charged particle production in neutron induced reactions at 62.7 MeV on a natural lead target. The detection device consists of 6 NE102-CsI telescopes. Time of flight measurements are used to reconstruct the neutron energy spectra. The general framework (hybrid systems and the associated nuclear data problematic) in which this work takes place is presented in a first part. The experimental set up used for our measurements is described in a second part. The three following parts are dedicated to the data analysis and the extraction of the double differential cross sections. The particle discrimination, the energy calibration of the detectors and the different corrections applied to the experimental spectra are described in detail. Finally, a comparative study between our experimental results and some theoretical predictions is presented. (author)
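For reference, the time-of-flight reconstruction mentioned above uses standard relativistic kinematics; with a flight path L and a measured flight time t (generic notation, not values from the thesis), the neutron kinetic energy is

```latex
E_n \;=\; m_n c^{2}\left(\frac{1}{\sqrt{1-\beta^{2}}}-1\right),
\qquad \beta \;=\; \frac{L}{c\,t}.
```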
7. A hybrid charged-particle guide for studying (n, charged particle) reactions
International Nuclear Information System (INIS)
Haight, R.C.; White, R.M.; Zinkle, S.J.
1983-01-01
Charged-particle transport systems consisting of magnetic quadrupole lenses have been employed in recent years in the study of (n, charged particle) reactions. A new transport system was completed at the laboratory that is based both on magnetic lenses as well as electrostatic fields. The magnetic focusing of the charged-particle guide is provided by six magnetic quadrupole lenses arranged in a CDCCDC sequence (in the vertical plane). The electrostatic field is produced by a wire at high voltage which stretches the length of the guide and is physically at the centre of the magnetic axis. The magnetic lenses are used for charged particles above 5 MeV; the electrostatic guide is used for lower energies. This hybrid system possesses the excellent focusing and background rejection properties of other magnetic systems. For low energy charged-particles, the electrostatic transport avoids the narrow band-passes in charged-particle energy which are a problem with purely magnetic transport systems. This system is installed at the LLNL Cyclograaff facility for the study of (n, charged particle) reactions at neutron energies up to 35 MeV. (Auth.)
8. Chemical reactions induced by fast neutron irradiation
International Nuclear Information System (INIS)
Katsumura, Y.
1989-01-01
Here, several studies on fast neutron irradiation effects carried out at the reactor 'YAYOI' are presented. Some indicate a significant difference in the effect from that of γ-ray irradiation, but others do not, and the difference changes from subject to subject. In general, chemical reactions induced by fast neutron irradiation expand in space and time, and there are many aspects. In the time region just after the deposition of neutron energy in the system, intermediates are formed densely and locally, reflecting the high LET of fast neutrons, and, with time, successive reactions proceed in parallel with the dissipation of the localized energy and the diffusion of the intermediates. Finally, the reactions are completed in the longer time region. If we pick out the effects which preserve the locality of the initial processes, a significantly different effect between fast-neutron and γ-ray radiolysis would be found. If we observe the products generated after dissipation and diffusion in the longer time region, a clear difference would not be observed. Therefore, in order to understand fast neutron irradiation effects, it is necessary to know the fundamental processes of the reactions induced by radiations. (author)
9. Extension of a Kinetic-Theory Approach for Computing Chemical-Reaction Rates to Reactions with Charged Particles
Science.gov (United States)
Liechty, Derek S.; Lewis, Mark J.
2010-01-01
Recently introduced molecular-level chemistry models that predict equilibrium and nonequilibrium reaction rates using only kinetic theory and fundamental molecular properties (i.e., no macroscopic reaction rate information) are extended to include reactions involving charged particles and electronic energy levels. The proposed extensions include ionization reactions, exothermic associative ionization reactions, endothermic and exothermic charge exchange reactions, and other exchange reactions involving ionized species. The extensions are shown to agree favorably with the measured Arrhenius rates for near-equilibrium conditions.
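For orientation, the macroscopic equilibrium rates that such molecular-level models are benchmarked against are conventionally written in the modified Arrhenius form (a standard convention, not a formula specific to this paper):

```latex
k(T) \;=\; A\, T^{\,n} \exp\!\left(-\frac{E_a}{k_B T}\right).
```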
10. Fluid transport in reaction induced fractures
Science.gov (United States)
Ulven, Ole Ivar; Sun, WaiChing; Malthe-Sørenssen, Anders
2015-04-01
The process of fracture formation due to a volume increasing chemical reaction has been studied in a variety of different settings, e.g. weathering of dolerites by Røyne et al., serpentinization and carbonation of peridotite by Rudge et al. and replacement reactions in silica-poor igneous rocks by Jamtveit et al. It is generally assumed that fracture formation will increase the net permeability of the rock, and thus increase the reactant transport rate and subsequently the total rate of material conversion, as summarised by Kelemen et al. Ulven et al. have shown that for fluid-mediated processes the ratio between chemical reaction rate and fluid transport rate in bulk rock controls the fracture pattern formed, and Ulven et al. have shown that instantaneous fluid transport in fractures leads to a significant increase in the total rate of the volume expanding process. However, instantaneous fluid transport in fractures is clearly an overestimate, and achievable fluid transport rates in fractures have apparently not been studied in any detail. Fractures cutting through an entire domain might experience relatively fast advective reactant transport, whereas dead-end fractures will be limited to diffusion of reactants in the fluid, internal fluid mixing in the fracture or capillary flow into newly formed fractures. Understanding the feedback process between fracture formation and permeability changes is essential in assessing industrial scale CO2 sequestration in ultramafic rock, but little is seemingly known about how large the permeability change will be in reaction-induced fracturing. In this work, we study the feedback between fracture formation during volume expansion and fluid transport in different fracture settings. We combine a discrete element model (DEM) describing a volume expanding process and the related fracture formation with different models that describe the fluid transport in the
11. Measurement of charmed particle production in hadronic reactions
CERN Multimedia
2002-01-01
The aim of the experiment is to measure the production cross-section for charmed particles in hadronic reactions, study their production mechanism, and search for excited charmed hadrons. Charmed Mesons and Baryons will be measured in π and p interactions on Beryllium between 100 and 200 GeV/c. The trigger will be on an electron from the leptonic decay of one charmed particle by signals from the Cerenkov counter (Ce), the electron trigger calorimeter (eCal), scintillation counters, and proportional wire chambers. The accompanying charmed particle will be measured via its hadronic decay in a two-stage magnetic spectrometer with drift chambers (arms 2, 3a, 3b, 3c), two large-area multicell Cerenkov counters (C2, C3) and a large-area shower counter (γ-CAL). The particles which can be measured and identified include γ, e, π±, π0, K±, p and p-bar, so that a large number of hadronic decay modes of charmed particles can be studied. A silicon counter telescope with 5 \m...
12. On light cluster production in nucleon induced reactions at intermediate energy
International Nuclear Information System (INIS)
Lacroix, D.; Blideanu, V.; Durand, D.
2004-09-01
A dynamical model dedicated to nucleon induced reactions between 30-150 MeV is presented. It considers different stages of the reaction: the approaching phase, the in-medium nucleon-nucleon collisions, the cluster formation and the secondary de-excitation process. The notions of influence area and phase-space exploration during the reaction are introduced. The importance of the geometry of the reaction and of the conservation laws are underlined. The model is able to globally reproduce the absolute cross sections for the emission of neutrons and light charged particles for proton and neutron induced reactions on heavy and intermediate mass targets ( 56 Fe and 208 Pb). (authors)
13. Evaluation of charged-particle reactions for fusion applications
International Nuclear Information System (INIS)
White, R.M.; Resler, D.A.; Warshaw, S.I.
1991-01-01
New evaluations of the total reaction cross sections for 2 H(d,n) 3 He, 2 H(d,p) 3 H, 3 H(t,2n) 4 He, 3 H(d,n) 4 He, and 3 He(d,p) 4 He have been completed. These evaluations are based on all known published data from 1946 to 1990 and include over 1150 measured data points from 67 references. The purpose of this work is to provide a consistent and well-documented set of cross sections for use in calculations relating to fusion energy research. A new thermonuclear data file, TDF, and a library of FORTRAN subprograms to read the file have been developed. Calculated from the new evaluations, the TDF file contains information on the Maxwellian-averaged reaction rates as a function of reaction and plasma temperature and the Maxwellian-averaged average energy of the interacting particles and reaction products. Routines are included that provide thermally-broadened spectral information for the secondary reaction products. 67 refs., 18 figs
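As a reminder of what a Maxwellian-averaged reaction rate denotes in such thermonuclear data files (the standard definition is assumed here, not quoted from the TDF documentation), the reactivity of a reacting pair with reduced mass μ at ion temperature T is

```latex
\langle \sigma v \rangle \;=\; \sqrt{\frac{8}{\pi \mu}}\;\frac{1}{(k_B T)^{3/2}}
\int_{0}^{\infty} \sigma(E)\, E\, e^{-E/k_B T}\, \mathrm{d}E ,
```

where E is the centre-of-mass energy and σ(E) the evaluated cross section.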
14. Status of experimental data of proton-induced reactions for intermediate-energy nuclear data evaluation
Energy Technology Data Exchange (ETDEWEB)
Watanabe, Yukinobu; Kawano, Toshihiko [Kyushu Univ., Fukuoka (Japan); Yamano, Naoki; Fukahori, Tokio
1998-11-01
The present status of experimental data of proton-induced reactions is reviewed, with particular attention to total reaction cross section, elastic and inelastic scattering cross section, double-differential particle production cross section, isotope production cross section, and activation cross section. (author)
15. Cold-target recoil-ion momentum-spectroscopy: First results and future perspectives of a novel high resolution technique for the investigation of collision induced many-particle reactions
International Nuclear Information System (INIS)
Ullrich, J.; Doerner, R.; Mergel, V.; Jagutzki, O.; Spielberger, L.; Schmidt-Boecking, H.
1994-09-01
In order to investigate many-particle reaction dynamics in atomic collisions a novel high-resolution technique has been developed, which determines the momentum and the charge state of the slowly recoiling target ions. Using a very cold, thin, and localized supersonic gas jet target a momentum resolution of better than 0.05 a.u. is obtained by measuring the recoil-ion time-of-flight and the recoil-ion trajectory. Because of the very high detection efficiency of nearly 100% this technique is well suited for many-particle coincidence measurements in ionizing collisions. First experimental results for fast ion and electron impact on helium targets are presented. Future applications in atomic collision physics and related areas are discussed. (orig.)
16. Chemisorption and Reactions of Small Molecules on Small Gold Particles
Directory of Open Access Journals (Sweden)
Geoffrey C. Bond
2012-02-01
Full Text Available The activity of supported gold particles for a number of oxidations and hydrogenations starts to increase dramatically as the size falls below ~3 nm. This is accompanied by an increased propensity to chemisorption, especially of oxygen and hydrogen. The explanation for these phenomena has to be sought in kinetic analysis that connects catalytic activity with the strength and extent of chemisorption of the reactants, the latter depending on the electronic structure of the gold atoms constituting the active centre. Examination of the changes to the utilisation of electrons as particle size is decreased points to loss of metallic character at about 3 nm, as energy bands are replaced by levels, and a band gap appears. Detailed consideration of the Arrhenius parameters (E and ln A) for CO oxidation points clearly to a step-change in activity at the point where metallic character is lost, as opposed to there being a monotonic dependence of rate on a physical property such as the fraction of atoms at corners or edges of particles. The deplorable scarcity of kinetic information on other reactions makes extension of this analysis difficult, but non-metallic behaviour is an unavoidable property of very small gold particles, and therefore cannot be ignored when seeking to explain their exceptional activity.
17. Reaction to fire of ETICS applied on wood particle board
Directory of Open Access Journals (Sweden)
Bonati Antonio
2016-01-01
Full Text Available As is well known, ETICS are widely used both for energy saving and thermal insulation reasons. They have recently been applied in wood buildings, and in regions of southern Europe too, due to green building and sustainability reasons.
ITC-CNR has tested many building materials and has developed good knowledge about reaction to fire since the 1980s; currently, ETICS fixed directly to wood particle panels have been investigated with several SBI tests. In the case study, the main factors that can influence the reaction-to-fire results when ETICS are applied on a wood structure are highlighted: the thickness of the insulating material, the presence of accidental damage, and flame attack from the inside. From the results obtained by tests on samples prepared with simulated accidental damage and fire from the inside, some considerations are made about the hazard due to this specific construction technology, and others on the limits of the product classification standards actually used.
18. Systematics of excitation functions for (n, charged particle) reactions
International Nuclear Information System (INIS)
Zhao Zhixiang; Zhou Delin
1986-06-01
On the basis of an evaporation model considering preequilibrium emission under some approximations, analytical expressions including two adjustable parameters have been derived for the excitation functions of (n, charged particle) reactions. Fitting these expressions to the available measured data, these parameters have been extracted and the systematic behaviour of the parameters has been studied. More accurate predictions than before could be obtained by using these expressions and systematic parameters. In the present work the neutron energy is considered up to about 20 MeV and the target mass region is 23 < A < 197
19. Nuclear Reactions in Micro/Nano-Scale Metal Particles
International Nuclear Information System (INIS)
Kim, Y. E.
2013-01-01
Low-energy nuclear reactions in micro/nano-scale metal particles are described based on the theory of Bose-Einstein condensation nuclear fusion (BECNF). The BECNF theory is based on a single basic assumption capable of explaining the observed LENR phenomena; deuterons in metals undergo Bose-Einstein condensation. The BECNF theory is also a quantitative predictive physical theory. Experimental tests of the basic assumption and theoretical predictions are proposed. Potential application to energy generation by ignition at low temperatures is described. Generalized theory of BECNF is used to carry out theoretical analyses of recently reported experimental results for hydrogen-nickel system. (author)
20. Nuclear Reactions in Micro/Nano-Scale Metal Particles
Science.gov (United States)
Kim, Y. E.
2013-03-01
Low-energy nuclear reactions in micro/nano-scale metal particles are described based on the theory of Bose-Einstein condensation nuclear fusion (BECNF). The BECNF theory is based on a single basic assumption capable of explaining the observed LENR phenomena; deuterons in metals undergo Bose-Einstein condensation. The BECNF theory is also a quantitative predictive physical theory. Experimental tests of the basic assumption and theoretical predictions are proposed. Potential application to energy generation by ignition at low temperatures is described. Generalized theory of BECNF is used to carry out theoretical analyses of recently reported experimental results for hydrogen-nickel system.
1. Competing reaction channels in IR-laser-induced unimolecular reactions
International Nuclear Information System (INIS)
Berman, M.R.
1981-01-01
The competing reaction channels in the unimolecular decomposition of two molecules, formaldehyde and tetralin, were studied. A TEA CO 2 laser was used as the excitation source in all experiments.
The dissociation of D 2 CO was studied by infrared multiphoton dissociation (MPD) and the small-molecule nature of formaldehyde with regard to MPD was explored. The effect of collisions in MPD were probed by the pressure dependence of the MPD yield and ir fluorescence from multiphoton excited D 2 CO. MPD yield shows a near cubic dependence in pure D 2 CO which is reduced to a 1.7 power dependence when 15 torr of NO is added. The peak amplitude of 5 μm ir fluorescence from D 2 CO is proportional to the square of the D 2 CO pressure in pure D 2 CO or in the presence of 50 torr of Ar. Results are explained in terms of bottlenecks to excitation at the v = 1 level which are overcome by a combination of vibrational energy transfer and rotational relaxation. The radical/molecule branching ratio in D 2 CO MPD was 0.10 ± 0.02 at a fluence of 125 J/cm 2 at 946.0 cm -1 . The barrier height to molecular dissociation was calculated to be 3.6 ± 2.0 kcal/mole below the radical threshold or 85.0 ± 3.0 kcal/mole above the ground state of D 2 CO. In H 2 CO, this corresponds to 2.5 ± 2.0 kcal/mole below the radical threshold or 83.8 ± 3.0 kcal/mole above the ground state. Comparison with uv data indicate that RRKM theory is an acceptable description of formaldehyde dissociation in the 5 to 10 torr pressure range. The unimolecular decomposition of tetralin was studied by MPD and SiF 4 - sensitized pyrolysis. Both techniques induce decomposition without the interference of catalytic surfaces. Ethylene loss is identified as the lowest energy reaction channel. Dehydrogenation is found to result from step-wise H atom loss. Isomerization via disproportionation is also identified as a primary reaction channel
2. Pion-induced knock-out reactions
International Nuclear Information System (INIS)
Jain, B.K.; Phatak, S.C.
1977-01-01
A strong absorption model for pion-induced knock-out reactions is proposed. The distortion of the in-coming and out-going pions has been included by (1) computing the pion wave number in the nuclear medium (dispersive effect) and (2) excluding the central region of the nucleus where the real pion-absorption is dominant (absorption effect). In order to study the dependence of the (π + π + p) reaction on the off-shell pion-nucleon t-matrix, different off-shell extrapolations are used. The magnitude of the cross-sections seems to be sensitive to the type of off-shell extrapolation; their shapes, however, are similar. The theoretical results are compared with experimental data. The agreement between the theoretical results for separable off-shell extrapolation and the data is good. (author)
3. Particle induced X-ray emission
International Nuclear Information System (INIS)
Cohen, D.D.
1991-08-01
The accelerator based ion beam technique of Particle Induced X-ray Emission (PIXE) is discussed in some detail. This report pulls together all major reviews and references over the last ten years and reports on PIXE in vacuum and using external beams. The advantages, limitations, costs and types of studies that may be undertaken using an accelerator based ion beam technique such as PIXE, are also discussed. 25 refs., 7 tabs., 40 figs
4. Exploratory study of nuclear reaction data utility framework of Japan charged particle reaction data group (JCPRG)
International Nuclear Information System (INIS)
Masui, Hiroshi; Ohnishi, Akira; Kato, Kiyoshi; Ohbayasi, Yosihide; Aoyama, Shigeyoshi; Chiba, Masaki
2002-01-01
Compilation, evaluation and dissemination are essential pieces of work for the nuclear data activities.
We, the Japan charged particle data group, have researched the utility framework for nuclear reaction data on the basis of recent progress in computer and network technologies. These technologies will serve not only the data dissemination but also the compilation and evaluation assistance among the many corresponding researchers all over the world. In this paper, the current progress of our research and development is shown. (author)
5. Trajectory calculation of a trapped particle in electro-dynamic balance for study of chemical reaction of aerosol particles
International Nuclear Information System (INIS)
Okuma, Miho; Itou, Takahiro; Harano, Azuchi; Takarada, Takayuki; James, Davis E
2013-01-01
Electrodynamic balance (EDB) is a powerful tool for investigating the chemical reactions between a fine particle and gaseous species. But the EDB device alone is inadequate to match the rapid weight change of a fine particle caused by chemical reactions, because it takes a few seconds to set a fine particle at the null point. The particle trajectory calculation for the trapped particle added to the EDB is thus a very useful tool for the measurement of the transient response of a particle weight change with no need to adjust the applied DC voltage to set the null point. The purpose of this study is to develop the trajectory calculation method to track the particle oscillation pattern in the EDB and examine the possibility for kinetic studies on the reaction of a single aerosol particle with gaseous species. The results demonstrated the feasibility of applying particle trajectory calculation to realize the research purpose.
6. Pre-equilibrium particle decay in the photonuclear reactions
International Nuclear Information System (INIS)
Wu, J.R.; Chang, C.C.
1976-11-01
Calculations of particle energy spectra resulting from photonuclear reactions at energies below the meson production threshold have been carried out in a framework combining the pre-equilibrium exciton model and the quasi-deuteron model. A 2p-2h initial state in the exciton model is assumed because in the energy region above the giant resonance the quasi-deuteron absorption is the dominant process. With these combined models, the subsequent secondary interactions of the emerging particle with the rest of the nucleus following the initial photon-nucleus interaction are appropriately taken into account. The experimental difference energy spectra of fast photoneutrons from several elements (Al, Cu, In, Sn, Ta, Pb, Bi and U) at bremsstrahlung energies of 55 and 85 MeV and the photoproton energy spectra from 12 C at bremsstrahlung energy 110 MeV were compared with the theoretical predictions. General agreements in both spectral shapes and cross sections are obtained. The relative yields of the reactions (γ, xn) resulting from monoenergetic photons on 127 I at 50, 100 and 150 MeV are also predicted reasonably well by the combined models together with the conventional evaporation theory
7. Radiation reaction of a classical quasi-rigid extended particle
International Nuclear Information System (INIS)
Medina, Rodrigo
2006-01-01
The problem of the self-interaction of a quasi-rigid classical particle with an arbitrary spherically symmetric charge distribution is completely solved up to the first order in the acceleration. No ad hoc assumptions are made. The relativistic equations of conservation of energy and momentum in a continuous medium are used.
The electromagnetic fields are calculated in the reference frame of instantaneous rest using the Coulomb gauge; in this way the troublesome power expansion is avoided. Most of the puzzles that this problem has aroused are due to the inertia of the negative pressure that equilibrates the electrostatic repulsion inside the particle. The effective mass of this pressure is -U e /(3c 2 ), where U e is the electrostatic energy. When the pressure mass is taken into account the dressed mass m turns out to be the bare mass plus the electrostatic mass m = m 0 + U e /c 2 . It is shown that a proper mechanical behaviour requires that m 0 > U e /3c 2 . This condition poses a lower bound on the radius that a particle of a given bare mass and charge may have. The violation of this condition is the reason why the Lorentz-Abraham-Dirac formula for the radiation reaction of a point charge predicts unphysical motions that run away or violate causality. Provided the mass condition is met the solutions of the exact equation of motion never run away and conform to causality and conservation of energy and momentum. When the radius is much smaller than the wavelength of the radiated fields, but the mass condition is still met, the exact expression reduces to the formula that Rohrlich (2002 Phys. Lett. A 303 307) has advocated for the radiation reaction of a quasi-point charge
8. Enhanced emission of non-compound light particles in the reaction plane
International Nuclear Information System (INIS)
Tsang, M.B.
1984-01-01
In an experiment performed at the K500 cyclotron at Michigan State University, light particles in coincidence with two fission fragments for 14 N induced reactions on 197 Au at 420 MeV incident energy have been measured. The fission fragments were detected with two large area position sensitive parallel plate avalanche detectors. Light particle telescopes consisting of silicon-ΔE and NaI-E detectors were placed both in and out of the plane defined by the centers of the two fission detectors and the beam axis. The momentum transferred to the composite system was determined by measuring the folding angle between the two outgoing fission fragments. Unlike observations with more fissile targets, however, transfer and inelastic reactions characterized by small linear momentum transfers contribute negligibly to the fission cross section for reactions on the 197 Au target. For events which lead to fission, the most probable linear momentum transfer corresponded to about 85% of the beam momentum. This is similar to the most probable momentum transfer observed for fusion-like reactions on 238 U at the same beam energy. Much of the missing momentum is carried away by non-equilibrium light particle emission
9. Charged particle induced energy dispersive X-ray analysis
International Nuclear Information System (INIS)
Johansson, S.A.E.
1979-01-01
This review article deals with the X-ray emission induced by heavy, charged particles and the use of this process as an analytical method (PIXE). The physical processes involved, X-ray emission and the various reactions contributing to the background, are described in some detail. The sensitivity is calculated theoretically and the results compared with practical experience. A discussion is given on how the sensitivity can be optimized. The experimental arrangements are described and the various technical problems discussed. The analytical procedure, especially the sample preparation, is described in considerable detail.
A number of typical practical applications are discussed. (author)
10. Particle size effect of redox reactions for Co species supported on silica
International Nuclear Information System (INIS)
Chotiwan, Siwaruk; Tomiga, Hiroki; Katagiri, Masaki; Yamamoto, Yusaku; Yamashita, Shohei; Katayama, Misaki; Inada, Yasuhiro
2016-01-01
Conversions of chemical states during redox reactions of two silica-supported Co catalysts, which were prepared by the impregnation method, were evaluated by using an in situ XAFS technique. The addition of citric acid into the precursor solution led to the formation on silica of more homogeneous and smaller Co particles, with an average diameter of 4 nm. The supported Co 3 O 4 species were reduced to metallic Co via the divalent CoO species during a temperature-programmed reduction process. The reduced Co species were quantitatively oxidized with a temperature-programmed oxidation process. The higher observed reduction temperature of the smaller CoO particles and the lower observed oxidation temperature of the smaller metallic Co particles were induced by the higher dispersion of the Co oxide species, which apparently led to a stronger interaction with the supporting silica. The redox temperature between CoO and Co 3 O 4 was found to be independent of the particle size. - Graphical abstract: Chemical state conversions of SiO 2 -supported Co species and the particle size effect have been analyzed by means of in situ XAFS technique. The small CoO particles have endurance against the reduction and exist in a wide temperature range. - Highlights: • The conversions of the chemical state of supported Co species during redox reaction are evaluated. • In operando XAFS technique were applied to measure redox properties of small Co particles. • A small particle size affects to the redox temperatures of cobalt catalysts.
11. Charged-particle transfer reactions and nuclear astrophysics problems
International Nuclear Information System (INIS)
Artemov, S.V.; Yarmukhamedov, R.; Yuldashev, B.S.; Burtebaev, N.; Duysebaev, A.; Kadyrzhanov, K.K.
2002-01-01
In the report a review of the recent results of calculation of the astrophysical S-factors S(E) for the D(α, γ) 6 Li, 3 He(α, γ) 7 Be, 7 Be(p, γ) 8 Be, 12,13 C(p, γ) 13, 14 N and 12 C(p,γ) 16 O* reactions at extremely low energies E, including value E=0, performed within the framework of a new method taking into account the additional information about the nuclear vertex constant (Nc) (or the respective asymptotic normalization coefficient) are presented. The required values of Nc can be obtained from an analysis of measured differential cross-sections of proton and α-particle transfer reactions (for example A( 3 He,d)B, 6 Li(d, 6 Li)d, 6 Li(α, 6 Li)α, 12 C( 6 Li, d) 16 O* etc.). A comparative analysis between the results obtained by different authors is also done. Taking into account an important role of the NVC's values for the nuclear astrophysical A(p, γ)B and A(α, γ)B reactions, a possibility of obtaining the reliable NVC values for the virtual decay B→A+p and B→A+α from the analysis of differential cross sections both sub- and above-barrier A( 3 He, d) and A( 6,7 Li, 2,3 H)B reactions is discussed in detail.
In this line, the use of the isochronous cyclotron U-150 M, the 'DC-60' heavy ion machine and the electrostatic charge-exchanging accelerator UKP-2-1 of the Institute of Nuclear Physics of the National Nuclear Center of the Republic of Kazakhstan for carrying out the needed experiments is considered, and the possibility of applying the obtained data to problems of astrophysical interest is also discussed
12. Investigation of energetic particle induced geodesic acoustic mode
Science.gov (United States)
Schneller, Mirjam; Fu, Guoyong; Chavdarovski, Ilija; Wang, Weixing; Lauber, Philipp; Lu, Zhixin
2017-10-01
Energetic particles are ubiquitous in present and future tokamaks due to heating systems and fusion reactions. Anisotropy in the distribution function of the energetic particle population is able to excite oscillations from the continuous spectrum of geodesic acoustic modes (GAMs), which cannot be driven by plasma pressure gradients due to their toroidally and nearly poloidally symmetric structures. These oscillations are known as energetic particle-induced geodesic acoustic modes (EGAMs) [G.Y. Fu'08] and have been observed in recent experiments [R. Nazikian'08]. EGAMs are particularly attractive in the framework of turbulence regulation, since they lead to an oscillatory radial electric shear which can potentially saturate the turbulence. For the presented work, the nonlinear gyrokinetic, electrostatic, particle-in-cell code GTS [W.X. Wang'06] has been extended to include an energetic particle population following either a bump-on-tail Maxwellian or a slowing-down [Stix'76] distribution function. With this new tool, we study growth rate, frequency and mode structure of the EGAM in an ASDEX Upgrade-like scenario. A detailed understanding of EGAM excitation is essential for future studies of EGAM interaction with micro-turbulence. Funded by the Max Planck Princeton Research Center. Computational resources of MPCDF and NERSC are gratefully acknowledged.
13. Switching dynamics in reaction networks induced by molecular discreteness
International Nuclear Information System (INIS)
Togashi, Yuichi; Kaneko, Kunihiko
2007-01-01
To study the fluctuations and dynamics in chemical reaction processes, stochastic differential equations based on the rate equation involving chemical concentrations are often adopted. When the number of molecules is very small, however, the discreteness in the number of molecules cannot be neglected since the number of molecules must be an integer. This discreteness can be important in biochemical reactions, where the total number of molecules is not significantly larger than the number of chemical species. To elucidate the effects of such discreteness, we study autocatalytic reaction systems comprising several chemical species through stochastic particle simulations. The generation of novel states is observed; it is caused by the extinction of some molecular species due to the discreteness in their number. We demonstrate that the reaction dynamics are switched by a single molecule, which leads to the reconstruction of the acting network structure. We also show the strong dependence of the chemical concentrations on the system size, which is caused by transitions to discreteness-induced novel states
14. Nuclear dynamics in heavy ion induced fusion-fission reactions
International Nuclear Information System (INIS)
Kapoor, S.S.
1992-01-01
Heavy ion induced fission and fission-like reactions evolve through a complex nuclear dynamics encountered in the medium energy nucleus-nucleus collisions.
In the recent years, measurements of the fragment-neutron and fragment-charged particle angular correlations in heavy ion induced fusion-fission reactions have provided new information on the dynamical times of nuclear deformations of the initial dinuclear complex to the fission saddle point and the scission point. From the studies of fragment angular distributions in heavy ion induced fission it has been possible to infer the relaxation times of the dinuclear complex in the K-degree of freedom and our recent measurements on the entrance channel dependence of fragment anisotropies have provided an experimental signature of the presence of fissions before K-equilibration. This paper reviews recent experimental and theoretical status of the above studies with particular regard to the questions relating to dynamical times, nuclear dissipation and the effect of nuclear dissipation on the K-distributions at the fission saddle in completely equilibrated compound nucleus. (author). 19 refs., 9 figs
15. Investigation of GeV proton-induced spallation reactions
International Nuclear Information System (INIS)
Hilscher, D.; Herbach, C.-M.; Jahnke, U.
2003-01-01
A reliable and precise modeling of GeV proton-induced spallation reactions is indispensable for the design of the spallation module and the target station of future accelerator driven hybrid reactors (ADS) or spallation neutron sources (ESS), in particular, to provide precise predictions for the neutron production, the radiation damage of materials (window), and the production of radioactivity ( 3 H, 7 Be etc.) in the target medium. Detailed experimental nuclear data are needed for sensitive validations and improvements of the models, whose predictive power is strongly dependent on the correct physical description of the three main stages of a spallation reaction: (i) the Intra-Nuclear-Cascade (INC) with the fast heating of the target nucleus, (ii) the de-excitation due to pre-equilibrium emission including the possibility of multi-fragmentation, and (iii) the statistical decay of thermally excited nuclei by evaporation of light particles and fission in the case of heavy nuclei. Key experimental data for this endeavour are absolute production cross sections and energy spectra for neutrons and light charged-particles (LCPs), emission of composite particles prior and post to the attainment of an equilibrated system, distribution of excitation energies deposited in the nuclei after the INC, and fission probabilities. The correlations of these quantities are particularly important to detect and identify possible deficiencies of the theoretical modeling of the various stages of a spallation reaction. Systematic measurements of such data are furthermore needed over large ranges of target nuclei and incident proton energies. Such data has been measured with the NESSI detector. An overview of new and previous results will be given. (authors)
16. Investigation of activation cross section data of alpha particle induced nuclear reaction on molybdenum up to 40 MeV: Review of production routes of medically relevant {sup 97,103}Ru
Energy Technology Data Exchange (ETDEWEB)
Tárkányi, F. [Institute of Nuclear Research of the Hungarian Academy of Sciences, Debrecen (Hungary); Hermanne, A., E-mail: [email protected] [Cyclotron Laboratory, Vrije Universiteit Brussel, Brussels (Belgium); Ditrói, F.; Takács, S. [Institute of Nuclear Research of the Hungarian Academy of Sciences, Debrecen (Hungary); Ignatyuk, A. [Institute of Physics and Power Engineering (IPPE), Obninsk (Russian Federation)]
2017-05-15
The main goals of this investigation were to expand and consolidate reliable activation cross-section data for the {sup nat}Mo(α,x) reactions in connection with production of medically relevant {sup 97,103}Ru and the use of the {sup nat}Mo(α,x){sup 97}Ru reaction for monitoring beam parameters. The excitation functions for formation of the gamma-emitting radionuclides {sup 94}Ru, {sup 95}Ru, {sup 97}Ru, {sup 103}Ru, {sup 93m}Tc, {sup 93g}Tc(m+), {sup 94m}Tc, {sup 94g}Tc, {sup 95m}Tc, {sup 95g}Tc, {sup 96g}Tc(m+), {sup 99m}Tc, {sup 93m}Mo, {sup 99}Mo(cum), {sup 90}Nb(m+) and {sup 88}Zr were measured up to 40 MeV alpha-particle energy by using the stacked foil technique and activation method. Data of our earlier similar experiments were re-evaluated and resulted in corrections on the reported results. Our experimental data were compared with critically analyzed literature data and with the results of model calculations, obtained by using the ALICE-IPPE, EMPIRE 3.1 (Rivoli) and TALYS codes (TENDL-2011 and TENDL-2015 on-line libraries). Nuclear data for different production routes of {sup 97}Ru and {sup 103}Ru are compiled and reviewed.
17. Differences between Drug-Induced and Contrast Media-Induced Adverse Reactions Based on Spontaneously Reported Adverse Drug Reactions
Science.gov (United States)
Ryu, JiHyeon; Lee, HeeYoung; Suh, JinUk; Yang, MyungSuk; Kang, WonKu; Kim, EunYoung
2015-01-01
We analyzed differences between spontaneously reported drug-induced (not including contrast media) and contrast media-induced adverse reactions. Adverse drug reactions reported by an in-hospital pharmacovigilance center (St. Mary's teaching hospital, Daejeon, Korea) from 2010-2012 were classified as drug-induced or contrast media-induced. Clinical patterns, frequency, causality, severity, Schumock and Thornton's preventability, and type A/B reactions were recorded. The trends among causality tools measuring drug- and contrast-induced adverse reactions were analyzed. Of 1,335 reports, 636 drug-induced and contrast media-induced adverse reactions were identified. The prevalence of spontaneously reported adverse drug reaction-related admissions revealed a suspected adverse drug reaction-reporting rate of 20.9/100,000 (inpatient, 0.021%) and 3.9/100,000 (outpatients, 0.004%). The most common adverse drug reaction-associated drug classes included nervous system agents and anti-infectives. Dermatological and gastrointestinal adverse drug reactions were most frequently and similarly reported between drug and contrast media-induced adverse reactions. Compared to contrast media-induced adverse reactions, drug-induced adverse reactions were milder, more likely to be preventable (9.8% vs. 1.1%, p contrast media-induced adverse reactions (56.6%, p = 0.066). Causality patterns differed between the two adverse reaction classes. The World Health Organization-Uppsala Monitoring Centre causality evaluation and Naranjo algorithm results significantly differed from those of the Korean algorithm version II (p contrast media-induced adverse reactions. The World Health Organization-Uppsala Monitoring Centre and Naranjo algorithm causality evaluation afforded similar results.
18. Setup for fission and evaporation cross-section measurements in reactions induced by secondary beams
International Nuclear Information System (INIS)
Hassan, A.A.; Luk'yanov, S.M.; Kalpakchieva, R.; Skobelev, N.K.; Penionzhkevich, Yu.Eh.; Dlouhy, Z.; Radnev, S.; Poroshin, N.V.
2002-01-01
A setup for studying reactions induced by secondary radioactive beams has been constructed. It allows simultaneous measurement of α-particle and fission fragment energy spectra. By measuring the α-particles, identification of evaporation residues is achieved. A set of three targets can be used so as to ensure sufficient statistics. Two silicon detectors, located at 90 degrees to the secondary beam direction, face each target, thus covering 30% of the solid angle. This experimental setup is to be used to obtain excitation functions of fusion-fission reactions and of reactions leading to evaporation residue production
19. Kinetics of contrail particles formation and heterogeneous reactions on such particles
Energy Technology Data Exchange (ETDEWEB)
Kogan, M.N.; Butkovsky, A.V.; Erofeev, A.I.; Freedlender, O.G.; Makashev, N.K. [Central Aerohydrodynamic Inst., Zhukovsky (Russian Federation)
1997-12-31
Research on the impact of aircraft emissions upon the atmosphere is a very complex and difficult problem. More than two decades of intensive investigations of the problem of ozone decay do not permit definite conclusions. Many important problems still remain unsolved in the aircraft/atmosphere interaction: engine, nozzle, jet, jet/vortex system interaction, vortex breakdown, contrail formation, meso-scale and global processes, and their effects on climate. Particle formation and heterogeneous reactions play an important role in some of these processes. These problems are discussed. (author) 11 refs.
20. Kinetics of contrail particles formation and heterogeneous reactions on such particles
Energy Technology Data Exchange (ETDEWEB)
Kogan, M N; Butkovsky, A V; Erofeev, A I; Freedlender, O G; Makashev, N K [Central Aerohydrodynamic Inst., Zhukovsky (Russian Federation)
1998-12-31
Research on the impact of aircraft emissions upon the atmosphere is a very complex and difficult problem. More than two decades of intensive investigations of the problem of ozone decay do not permit definite conclusions. Many important problems still remain unsolved in the aircraft/atmosphere interaction: engine, nozzle, jet, jet/vortex system interaction, vortex breakdown, contrail formation, meso-scale and global processes, and their effects on climate. Particle formation and heterogeneous reactions play an important role in some of these processes. These problems are discussed. (author) 11 refs.
1. Comments on (n, charged particle) reactions at E/sub n/ = 14 MeV
International Nuclear Information System (INIS)
Haight, R.C.
1984-01-01
The study of charged particles produced by bombarding materials with 14 MeV neutrons is important for the development of fusion reactors and for biomedical applications as well as for the basic understanding of nuclear reactions. Several experimental techniques for investigating these reactions are discussed here. The interpretation of the data requires the consideration of several possible reaction mechanisms including equilibrium and preequilibrium particle emission and, for light nuclei, sequential particle emission, final state interactions, and the effect of resonances. 17 references
Dibaryon resonances in photon induced reactions International Nuclear Information System (INIS) Schwille, W.J. 1981-11-01 The author gives a review about the production of dibaryon resonances in photon reactions on deuterium targets. Especially he considers the reactions γ + d → p + n, γ + d → p + X, and γ + d → p + N + π. (HSI) 3. Alpha particle induced reactions on {sup nat}Cr up to 39 MeV: Experimental cross-sections, comparison with theoretical calculations and thick target yields for medically relevant {sup 52g}Fe production Energy Technology Data Exchange (ETDEWEB) Hermanne, A.; Adam Rebeles, R. [Cyclotron Laboratory, Vrije Universiteit Brussel, Brussel 1090 (Belgium); Tárkányi, F.; Takács, S. [Institute of Nuclear Research, Hungarian Academy of Science, 4026 Debrecen (Hungary) 2015-08-01 Thin {sup nat}Cr targets were obtained by electroplating, using 23.75 μm Cu foils as backings. In five stacked foil irradiations, followed by high resolution gamma spectroscopy, the cross sections for production of {sup 52g}Fe, {sup 49,51cum}Cr, {sup 52cum,54,56cum}Mn and {sup 48cum}V in Cr and {sup 61}Cu,{sup 68}Ga in Cu were measured up to 39 MeV incident α-particle energy. Reduced uncertainty is obtained by simultaneous remeasurement of the {sup nat}Cu(α,x){sup 67,66}Ga monitor reactions over the whole energy range. Comparisons with the scarce literature values and results from the TENDL-2013 on-line library, based on the theoretical code family TALYS-1.6, were made. A discussion of the production routes for {sup 52g}Fe with achievable yields and contamination rates was made. 4. Aerosol nucleation induced by a high energy particle beam DEFF Research Database (Denmark) Enghoff, Martin Andreas Bødker; Pedersen, Jens Olaf Pepke; Uggerhøj, Ulrik I. The effect of ions in aerosol nucleation is a subject where much remains to be discovered. That ions can enhance nucleation has been shown by theory, observations, and experiments. However, the exact mechanism still remains to be determined. One question is if the nature of the ionization affects...... the nucleation. This is an essential question since many experiments have been performed using radioactive sources that ionize differently than the cosmic rays which are responsible for the majority of atmospheric ionization. Here we report on an experimental study of sulphuric acid aerosol nucleation under near...... atmospheric conditions using a 580 MeV electron beam to ionize the volume of the reaction chamber. We find a clear and significant contribution from ion induced nucleation and consider this to be an unambiguous observation of the ion-effect on aerosol nucleation using a particle beam under conditions not far... 5. Light particle emission measurements in heavy ion reactions: Progress report, June 1, 1987-May 31, 1988 International Nuclear Information System (INIS) Petitt, G.A. 1988-01-01 This paper discusses work on heavy ion reactions done at Georgia State University. Topics and experiments discussed are: energy division in damped reactions between 58 Ni projectiles and 165 Ho and 58 Ni targets using time-of-flight methods; particle-particle correlations; and development works on the Hili detector system. 10 refs., 9 figs 6. 
Growth behavior of LiMn{sub 2}O{sub 4} particles formed by solid-state reactions in air and water vapor Energy Technology Data Exchange (ETDEWEB) Kozawa, Takahiro, E-mail: [email protected] [Joining and Welding Research Institute, Osaka University, 11–1 Mihogaoka, Ibaraki, Osaka 567-0047 (Japan); Yanagisawa, Kazumichi [Research Laboratory of Hydrothermal Chemistry, Faculty of Science, Kochi University, 2–5-1 Akebono-cho, Kochi 780-8520 (Japan); Murakami, Takeshi; Naito, Makio [Joining and Welding Research Institute, Osaka University, 11–1 Mihogaoka, Ibaraki, Osaka 567-0047 (Japan) 2016-11-15 Morphology control of particles formed during conventional solid-state reactions without any additives is a challenging task. Here, we propose a new strategy to control the morphology of LiMn{sub 2}O{sub 4} particles based on water vapor-induced growth of particles during solid-state reactions. We have investigated the synthesis and microstructural evolution of LiMn{sub 2}O{sub 4} particles in air and water vapor atmospheres as model reactions; LiMn{sub 2}O{sub 4} is used as a low-cost cathode material for lithium-ion batteries. By using spherical MnCO{sub 3} precursor impregnated with LiOH, LiMn{sub 2}O{sub 4} spheres with a hollow structure were obtained in air, while angulated particles with micrometer sizes were formed in water vapor. The pore structure of the particles synthesized in water vapor was found to be affected at temperatures below 700 °C. We also show that the solid-state reaction in water vapor is a simple and valuable method for the large-scale production of particles, where the shape, size, and microstructure can be controlled. - Graphical abstract: This study has demonstrated a new strategy towards achieving morphology control without the use of additives during conventional solid-state reactions by exploiting water vapor-induced particle growth. - Highlights: • A new strategy to control the morphology of LiMn{sub 2}O{sub 4} particles is proposed. • Water vapor-induced particle growth is exploited in solid-state reactions. • The microstructural evolution of LiMn{sub 2}O{sub 4} particles is investigated. • The shape, size and microstructure can be controlled by solid-state reactions. 7. Growth behavior of LiMn2O4 particles formed by solid-state reactions in air and water vapor International Nuclear Information System (INIS) Kozawa, Takahiro; Yanagisawa, Kazumichi; Murakami, Takeshi; Naito, Makio 2016-01-01 Morphology control of particles formed during conventional solid-state reactions without any additives is a challenging task. Here, we propose a new strategy to control the morphology of LiMn 2 O 4 particles based on water vapor-induced growth of particles during solid-state reactions. We have investigated the synthesis and microstructural evolution of LiMn 2 O 4 particles in air and water vapor atmospheres as model reactions; LiMn 2 O 4 is used as a low-cost cathode material for lithium-ion batteries. By using spherical MnCO 3 precursor impregnated with LiOH, LiMn 2 O 4 spheres with a hollow structure were obtained in air, while angulated particles with micrometer sizes were formed in water vapor. The pore structure of the particles synthesized in water vapor was found to be affected at temperatures below 700 °C. We also show that the solid-state reaction in water vapor is a simple and valuable method for the large-scale production of particles, where the shape, size, and microstructure can be controlled. 
- Graphical abstract: This study has demonstrated a new strategy towards achieving morphology control without the use of additives during conventional solid-state reactions by exploiting water vapor-induced particle growth. - Highlights: • A new strategy to control the morphology of LiMn 2 O 4 particles is proposed. • Water vapor-induced particle growth is exploited in solid-state reactions. • The microstructural evolution of LiMn 2 O 4 particles is investigated. • The shape, size and microstructure can be controlled by solid-state reactions. 8. On light cluster production in nucleon induced reactions at intermediate energy Energy Technology Data Exchange (ETDEWEB) Lacroix, D.; Blideanu, V.; Durand, D 2004-09-01 A dynamical model dedicated to nucleon induced reaction between 30-150 MeV is presented. It considers different stages of the reaction: the approaching phase, the in-medium nucleon-nucleon collisions, the cluster formation and the secondary de-excitation process. The notions of influence area and phase-space exploration during the reaction are introduced. The importance of the geometry of the reaction and of the conservation laws are underlined. The model is able to globally reproduce the absolute cross sections for the emission of neutron and light charged particles for proton and neutron induced reactions on heavy and intermediate mass targets ({sup 56}Fe and {sup 208}Pb). (authors) 9. Mechanism of 238U disintegration induced by relativistic particles International Nuclear Information System (INIS) Andronenko, L.N.; Zhdanov, A.A.; Kravtsov, A.V.; Solyakin, G.E. 2002-01-01 In heavy-nucleus disintegration induced by a relativistic projectile particle, the production of collinear massive fragments accompanied by numerous charged particles and neutrons is explained in terms of the mechanism of projectile-momentum compensation due to the emission of a particle whose mass is greater than the projectile mass 10. Development of a utility system for charged particle nuclear reaction data by using intelligentPad International Nuclear Information System (INIS) Aoyama, Shigeyoshi; Ohbayashi, Yoshihide; Masui, Hiroshi; Kato, Kiyoshi; Chiba, Masaki 2000-01-01 We have developed a utility system, WinNRDF2, for a nuclear charged particle reaction data of NRDF (Nuclear Reaction Data File) on the IntelligentPad architecture. By using the system, we can search the experimental data of a charged particle reaction of NRDF. Furthermore, we also see the experimental data by using graphic pads which was made through the CONTIP project. (author) 11. A position sensitive parallel plate avalanche fission detector for use in particle induced fission coincidence measurements NARCIS (Netherlands) Plicht, J. van der 1980-01-01 A parallel plate avalanche detector developed for the detection of fission fragments in particle induced fission reactions is described. The active area is 6 × 10 cm2; it is position sensitive in one dimension with a resolution of 2.5 mm. The detector can withstand a count rate of 25000 fission 12. Trojan Horse particle invariance for 2H(d,p)3H reaction: a detailed study International Nuclear Information System (INIS) Pizzone, R.G.; La Cognata, M.; Rinollo, A.; Spitaleri, C; Sparta, R.; Bertulani, C.A.; Mukhamedzhanov, A.M.; Blokhintsev, L.; Lamia, L.; Tumino, A. 
2014-01-01 The basic idea of the Trojan Horse Method (THM) is to extract the cross section in the low-energy region of a two-body reaction with significant astrophysical impact: a + x → c + C from a suitable quasi-free (QF) break-up of the so called Trojan Horse nucleus, e.g. A=x (+) s where usually x is referred to as the participant and s as the spectator particle. In the last decades the Trojan Horse method has played a crucial role for the measurement of several charged particle induced reactions cross sections of astrophysical interest. To better understand its cornerstones and its applications to physical cases many tests were performed to verify all its properties and the possible future perspectives. The Trojan Horse nucleus invariance for the binary d(d,p)t reaction was therefore tested using the quasi free 2 H( 6 Li, pt) 4 He and 2 H( 3 He,pt)H reactions after 6 Li and 3 He break-up, respectively. The astrophysical S(E)-factor for the d(d,p)t binary process was then extracted in the framework of the Plane Wave Approximation applied to the two different break-up schemes. Polynomial fits were then performed on the data giving S 0 = (75 ± 21) keV*b in the case of the 6 Li break-up, while for 3 He one obtains S 0 = (58 ± 2) keV*b. The obtained results are compared with direct data as well as with previous indirect investigations. The very good agreement confirms the applicability of the plane wave approximation and suggests the independence of binary indirect cross section on the chosen Trojan Horse nucleus also for the present case 13. Linear cascade calculations of matrix due to neutron-induced nuclear reactions International Nuclear Information System (INIS) Avila, Ricardo E 2000-01-01 A method is developed to calculate the total number of displacements created by energetic particles resulting from neutron-induced nuclear reactions. The method is specifically conceived to calculate the damage in lithium ceramics by the 6L i(n, α)T reaction. The damage created by any particle is related to that caused by atoms from the matrix recoiling after collision with the primary particle. An integral equation for that self-damage is solved by interactions, using the magic stopping powers of Ziegler, Biersack and Littmark. A projectile-substrate dependent Kinchin-Pease model is proposed, giving and analytic approximation to the total damage as a function of the initial particle energy (au) 14. Fusion dynamics in 40Ca induced reactions International Nuclear Information System (INIS) Prasad, E.; Hinde, D.J.; Williams, E. 2017-01-01 Synthesis of superheavy elements (SHEs) and investigation of their properties are among the most challenging research topics in modern science. A non-compound nuclear process called quasi fission is partly responsible for the very low production cross sections of SHEs. The formation and survival probabilities of the compound nucleus (CN) strongly depend on the competition between fusion and quasi fission. A clear understanding of these processes and their dynamics is required to make reliable predictions of the best reactions to synthesise new SHEs. All elements beyond Nh are produced using hot fusion reactions and beams of 48 Ca were used in most of these experiments. In this context a series of fission measurements have been carried out at the Australian National University (ANU) using 40;48 Ca beams on various targets ranging from 142 Nd to 249 Cf. Some of the 40 Ca reactions will be discussed in this symposium 15. 
Multifragment emission times in Xe induced reactions Energy Technology Data Exchange (ETDEWEB) Moroni, A. [INFN and Dipartimento di Fisica, Via Celoria 16, 20133 Milano (Italy); Bowman, D.R. [AECL Research, Chalk River Laboratories, Chalk River, Ont. (Canada); Bruno, M. [Dipartimento di Fisica and INFN, Via Irnerio 46, 40126 Bologna (Italy); Buttazzo, P. [Dipartimento di Fisica and INFN, Via A. Valerio 2, 34127 Trieste (Italy); Celano, L. [INFN, Via Amendola 173, 70126 Bari (Italy); Colonna, N. [INFN, Via Amendola 173, 70126 Bari (Italy); D`Agostino, M. [Dipartimento di Fisica and INFN, Via Irnerio 46, 40126 Bologna (Italy); Dinius, J.D. [NSCL, Michigan State University, E. Lansing, 48824 MI (United States); Ferrero, A. [INFN and Dipartimento di Fisica, Via Celoria 16, 20133 Milano (Italy); Fiandri, M.L. [Dipartimento di Fisica and INFN, Via Irnerio 46, 40126 Bologna (Italy); Gelbke, K. [NSCL, Michigan State University, E. Lansing, 48824 MI (United States); Glasmacher, T. [NSCL, Michigan State University, E. Lansing, 48824 MI (United States); Gramegna, F. [INFN Laboratori Nazionali di Legnaro, Via Romea 4, 35020 Legnaro (Italy); Handzy, D.O. [NSCL, Michigan State University, E. Lansing, 48824 MI (United States); Horn, D. [AECL Research, Chalk River Laboratories, Chalk River, Ont. (Canada); Hsi Wenchien [NSCL, Michigan State University, E. Lansing, 48824 MI (United States); Huang, M. [NSCL, Michigan State University, E. Lansing, 48824 MI (United States); Iori, I. [INFN and Dipartimento di Fisica, Via Celoria 16, 20133 Milano (Italy); Lisa, M. [NSCL, Michigan State University, E. Lansing, 48824 MI (United States); Lynch, W.G. [NSCL, Michigan State University, E. Lansing, 48824 MI (United States); Margagliotti, G.V. [Dipartimento di Fisica and INFN, Via A. Valerio 2, 34127 Trieste (Italy); Mastinu, P.F. [Dipartimento di Fisica and INFN, Via Irnerio 46, 40126 Bologna (Italy); Milazzo, P.M. [Dipartimento di Fisica and INFN, Via Irnerio 46, 40126 Bologna (Italy); Montoya, C. 1995-02-06 Multifragment emission is studied in {sup 129}Xe+{sup nat}Cu reactions. The emission process for central collisions occurs on a time scale of similar 200fm/c at 30MeV/n. Intermediate-mass-fragment yields, velocity correlation functions and emission velocities of Z=6 fragments are compared with predictions of statistical decay models. ((orig.)). 16. short communication simple grinding-induced reactions of 2 African Journals Online (AJOL) Preferred Customer SIMPLE GRINDING-INDUCED REACTIONS OF 2-AMINOBENZYL ... As part of our broad interest in the chemistry of heterocyclic compounds [10, 11], the .... study grant) and the University of Botswana (THT study grant) for financial support, Dr. 17. Developmental Aspects of Reaction to Positive Inducements Science.gov (United States) Lindskold, Svenn; And Others 1970-01-01 Probes children's behavioral sensitivity to variation in reward probability and magnitude (bribes) and suggests that preadolescent children do respond to promises of positive inducements for cooperation in a mixed-motive situation. (WY) 18. Laser-induced chemical vapor deposition reactions International Nuclear Information System (INIS) Teslenko, V.V. 1990-01-01 The results of investigation of chemical reactions of deposition of different substances from the gas phase when using the energy of pulse quasicontinuous and continuous radiation of lasers in the wave length interval from 0.193 to 10.6 μm are generalized. 
Main attention is paid to deposition of inorganic substances including nonmetals (C, Si, Ge and others), metals (Cu, Au, Zn, Cd, Al, Cr, Mo, W, Ni) and some simple compounds. Experimental data on the effect of laser radiation parameters and reagent nature (hydrides, halogenides, carbonyls, alkyl organometallic compounds and others) on the deposition rate and deposit composition are described in detail. Specific features of laser-chemical reactions of deposition and prospects of their application are considered 19. A semi-classical model for the description of angular distribution of light particles emitted in nuclear reactions International Nuclear Information System (INIS) Zhang Jingshang 1990-04-01 A semi-classical model of multi-step direct and compound nuclear reactions has been proposed to describe the angular distributions of light particles emitted in reaction processes induced by nucleons with energies of several tens of MeV. The exact closed solution for the time-dependent master equation of the exciton model is applied. Based on the Fermi gas model, the scattering kernel for two-nucleon collisions includes the influence of the Fermi motion and the Pauli exclusion principle, which give a significant improvement in the description of the rise of the backward distributions. The angle-energy correlation for the first few steps of the collision process (multi-step direct process) yields further improvements in the description of the angular distribution. The pick-up mechanism is employed to describe the composite particle emission. This reasonable physical picture reproduces the experimental data of the energy spectra of composite particles satisfactorily. The angular distribution of the emitted composite particles is determined by an angular factor in terms of the momentum conservation of the nucleons forming the composite cluster. The generalized master equation is employed for the multi-step compound process. Thus a classical approach has been established to calculate the double differential cross sections for all kinds of particles emitted in multi-step nuclear reaction processes. (author). 19 refs, 6 figs, 1 tab 20. Reactions of macrophages exposed to particles <10 μm International Nuclear Information System (INIS) Monn, Christian; Naef, Roland; Koller, Theo 2003-01-01 This study describes experiments on cytotoxic effects and the production of oxidative radicals and the proinflammatory cytokine tumor necrosis factor alpha (TNFα) in a cell line of rat lung macrophages exposed to aqueous extracts from ambient air particles (PM 10 ) collected on Teflon filters. The particles were collected during the four seasons at two urban sites, one rural site, and one alpine site in Switzerland. Cytotoxic effects, determined as a reduction in the metabolic activity, were found in particle extracts from all sites and seasons. Taking together the data from all sites and seasons, a dose-response function was observed between the particle mass on the filter and toxicity (r 2 =0.633, linear regression). The release of the pro-inflammatory cytokine TNFα as well as of oxidative radicals was most pronounced in particles collected in spring-summer and autumn. While at Montana (alpine) the stimulation of the cells was positively correlated with the particle mass on the filters, this correlation was negative at the urban sites Zuerich and Lugano. It is interpreted that at high PM 10 levels, as in these cities, macrophages are inhibited by increasing air pollution due to toxic effects.
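The dose-response result quoted just above for the PM 10 extracts (particle mass on the filter versus cytotoxicity, r 2 = 0.633 by linear regression) is an ordinary least-squares fit. The short sketch below only illustrates that arithmetic; the mass and toxicity values are invented placeholders, not data from the study.

```python
# Illustrative least-squares dose-response fit of the kind reported above
# (particle mass vs. cytotoxicity). All numbers are made-up placeholders.
import numpy as np

# Hypothetical particle mass loadings (ug per filter) and cytotoxicity
# (fractional reduction in metabolic activity).
mass = np.array([50.0, 120.0, 200.0, 310.0, 420.0, 560.0])
toxicity = np.array([0.05, 0.12, 0.22, 0.28, 0.41, 0.47])

# Ordinary least-squares line: toxicity ~ slope * mass + intercept
slope, intercept = np.polyfit(mass, toxicity, 1)

# Coefficient of determination r^2
pred = slope * mass + intercept
ss_res = np.sum((toxicity - pred) ** 2)
ss_tot = np.sum((toxicity - toxicity.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

print(f"slope = {slope:.4g} per ug, intercept = {intercept:.3g}, r^2 = {r2:.3f}")
```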
Cytotoxic effects and the release of oxidative radicals could be inhibited when the extracts were treated with an endotoxin-neutralizing protein. This suggests that endotoxin, a cell-wall constituent of gram-negative bacteria, is one of the factors which modulates macrophage activity. Altogether, the experiments indicate that in the PM 10 fraction water-soluble macrophage-toxic and macrophage-stimulating compounds are present. The data offer an explanation for at least some of the known harmful effects of PM 10 , and confirm endotoxin as a possible reactant 1. Trojan Horse Method for neutrons-induced reaction studies Science.gov (United States) Gulino, M.; Asfin Collaboration 2017-09-01 Neutron-induced reactions play an important role in nuclear astrophysics in several scenarios, such as primordial Big Bang Nucleosynthesis, Inhomogeneous Big Bang Nucleosynthesis, heavy-element production during the weak component of the s-process, explosive stellar nucleosynthesis. To overcome the experimental problems arising from the production of a neutron beam, the possibility to use the Trojan Horse Method to study neutron-induced reactions has been investigated. The application is of particular interest for reactions involving radioactive nuclei having short lifetime. 2. A study on the grafting reaction of isocyanates with hydroxyapatite particles NARCIS (Netherlands) Liu, Q.; de Wijn, J.R.; van Blitterswijk, Clemens 1998-01-01 The surface grafting reactions of a series of isocyanates with hydroxyapatite particles at different temperatures were studied by Infrared spectrophotometry (IR) and thermal gravimetric analysis (TGA). The study results show that both hexamethylene diisocyanate (HMDI) and isocyanatoethyl 3. Influence of particles on sonochemical reactions in aqueous solutions. Science.gov (United States) Keck, A; Gilbert, E; Köster, R 2002-05-01 Numerous publications deal with the possible application of ultrasound for elimination of organic pollutants as a tool for water pollution abatement. Most of the experiments were performed in pure water under laboratory conditions. For developing technologies that hold promise it is necessary to investigate the effect of ultrasound in natural systems or waste water where particulate matter could play an important role. In this paper the influence of quartz particles (2-25 microm) on the chemical effects of ultrasound in an aqueous system using a high power ultrasound generator (68-1028 kHz, 100 W, reactor volume 500 ml) is reported. In pure water in dependence on particle size, concentration and frequency the formation rate of hydrogen peroxide under Ar/O2 (4:1) shows a maximum using 206 kHz in presence of 3-5 microm quartz particles (4-8 g/l). Under these conditions the yield of peroxide is higher than without quartz. Additionally under N2/O2 (4:1) besides hydrogen peroxide the formation of nitrite/nitrate was measured. Compared to pure water quartz particles depressed the formation of nitrite/nitrate up to 10-fold but not the formation of H2O2. According to the results of H2O2 formation the elimination of organic compounds by sonolysis (206 kHz) and the influence of quartz particles were investigated. As organic compounds salicylic acid, 2-chlorobenzoic acid and p-toluenesulfonic acid were used. The influence of quartz on the oxidation of organic compounds (206 kHz) is similar to that on the formation of H2O2. 4. Charged particle reaction studies on /sup 14/C.
[Spectroscopic factors Energy Technology Data Exchange (ETDEWEB) Cecil, F E; Shepard, J R; Anderson, R E; Peterson, R J; Kaczkowski, P [Colorado Univ., Boulder (USA). Nuclear Physics Lab. 1975-12-22 The reactions /sup 14/C(p,d), (d,d') and (d,p) have been measured for E/sub p/ = 27 MeV and E/sub d/ = 17 MeV. The (d,d') and (d,p) reactions were studied between theta/sub lab/ = 15/sup 0/ and 85/sup 0/; the (p,d) reactions, between theta/sub lab/ = 5/sup 0/ and 40/sup 0/. The /sup 14/C deformation parameters were deduced from the deuteron inelastic scattering and found to agree with deformations measured in nearby doubly even nuclei. The spectroscopic factors deduced from the (p,d) reaction allowed a /sup 14/C ground-state wave function to be deduced which compares favorably with a theoretically deduced wave function. The (p,d) and (d,p) spectroscopic factors are consistent. The implications of our /sup 14/C ground-state wave function regarding the problem of the /sup 14/C hindered beta decay are discussed. 5. Tritium production in neutron induced reactions International Nuclear Information System (INIS) Krasa, A.; Andreotti, E.; Hult, M.; Marissens, G.; Plompen, A.; Angelone, M.; Pillon, M. 2011-01-01 We present an overview of the present knowledge of (n,t) reaction excitation functions in the 14-21 MeV energy range for Cd, Cr, Fe, Mg, Mo, Ni, Pb, Pd, Ru, Sn, Ti, Zr. Experimental data are compared with evaluated data libraries, cross-section systematics, and TALYS calculations. The new values for the "5"0Cr(n,t)"4"8V cross-section measured using γ-spectrometry at 15, 16, 17.3 MeV are presented. The trend of the results confirms that while early experimental data at 14.6 MeV are strongly overestimated, the calculations performed with the default version of TALYS strongly underestimate the excitation curve in the measured energy region 6. A large area position-sensitive ionization chamber for heavy-ion-induced reaction studies CERN Document Server Pant, L M; Dinesh, B V; Thomas, R G; Saxena, A; Sawant, Y S; Choudhury, R K 2002-01-01 A large area position-sensitive ionization chamber with a wide dynamic range has been developed to measure the mass, charge and energy of the heavy ions and the fission fragments produced in heavy-ion-induced reactions. The split anode geometry of the detector makes it suitable for both particle identification and energy measurements for heavy ions and fission fragments. The detector has been tested with alpha particles from sup 2 sup 4 sup 1 Am- sup 2 sup 3 sup 9 Pu source, fission fragments from sup 2 sup 5 sup 2 Cf and the heavy-ion beams from the 14UD Mumbai Pelletron accelerator facility. Using this detector, measurements on mass and total kinetic energy distributions in heavy-ion-induced fusion-fission reactions have been carried out for a wide range of excitation energies. Results on deep inelastic collisions and mass-energy correlations on different systems using this detector setup are discussed. 7. Radiation reaction for the classical relativistic spinning particle in scalar, tensor and linearized gravitational fields International Nuclear Information System (INIS) Barut, A.O.; Cruz, M.G. 1992-08-01 We use the method of analytic continuation of the equation of motion including the self-fields to evaluate the radiation reaction for a classical relativistic spinning point particle in interaction with scalar, tensor and linearized gravitational fields in flat spacetime. In the limit these equations reduce to those of spinless particles. 
We also show the renormalizability of these theories. (author). 10 refs 8. Radiation-induced reactions in polydimethyl siloxanes International Nuclear Information System (INIS) Menhofer, H. 1988-01-01 The dissertation reports an investigation into the behaviour of polydimethyl soloxanes (PDMS) subject to the radiation field of a 60 Co-γ radiation source at different irradiation conditions. Several different analytical methods have been applied for the detection of chemical changes in the material and their effects on the polymeric segment mobility. Application of the ESR-spintrap technique identifies the primary radicals x CH 3 , -Si x , and -Si-CH 2 x , induced by the radiolysis of the PDMS. The individual rates of radical formation have been found to be strongly dependent on temperature. (orig./LU) [de 9. Reactions of charged and neutral recoil particles following nuclear transformations. Final report International Nuclear Information System (INIS) Ache, H.J. 1980-12-01 A summary is given of the various activities conducted as part of the research on the chemical reactions of energetic particles generated in nuclear reactions. Emphasis was on hot atom chemistry in gases and liquids. A bibliography of 110 publications published as part of the program is included 10. Light particle and gamma ray emission measurements in heavy-ion reactions. Progress report International Nuclear Information System (INIS) Petitt, G.A. 1982-01-01 The development of a position-sensitive neutron detector and a data acquisition system at HHIRF for studying light particle emission in heavy ion reactions is described. Results are presented and discussed for the reactions 12 C + 158 Gd, 13 C + 157 Gd, and 20 Ne + 150 Nd 11. D+D thermonuclear fusion reactions with polarized particles International Nuclear Information System (INIS) Kozma, P. 1986-01-01 Polarization measurements from the 2 H(d, n) 3 He and 2 H(d, p) 3 H thermonuclear reactions at deuteron energies below 1 MeV are anayzed. Results of analysis enable to discuss the existence of 4 He excited states in the vicinity of d+d threshold energy as well as to extrapolate total cross-sections σ tot (d+d) into the region of very low energies 12. Emission of high-energy, light particles from intermediate-energy heavy-ion reactions International Nuclear Information System (INIS) Ball, J.B.; Auble, R.L. 1982-01-01 One of the early surprises in examining reaction products from heavy ion reactions at 10 MeV/nucleon and above was the large yield of light particles emitted and the high energies to which the spectra of these particles extended. The interpretation of the origin of the high energy light ions has evolved from a picture of projectile excitation and subsequent evaporation to one of pre-equilibrium (or nonequilibrium) emission. The time scale for particle emission has thus moved from one that occurs following the initial collision to one that occurs at the very early stages of the collision. Research at ORNL on this phenomenon is reviewed 13. Development of utility system of charged particle Nuclear Reaction Data on Unified Interface International Nuclear Information System (INIS) Aoyama, Shigeyoshi; Ohbayashi, Yosihide; Kato, Kiyoshi; Masui, Hiroshi; Ohnishi, Akira; Chiba, Masaki 1999-01-01 We have developed a utility system, WinNRDF, for a nuclear charged particle reaction data of NRDF (Nuclear Reaction Data File) on a unified interface of Windows95, 98/NT. 
By using the system, we can easily search the experimental data of a charged particle reaction in NRDF and also see the graphic data on GUI (Graphical User Interface). Furthermore, we develop a mechanism of making a new index of keywords in order to include the time developing character of the NRDF database. (author) 14. Ripple induced trapped particle loss in tokamaks International Nuclear Information System (INIS) White, R.B. 1996-05-01 The threshold for stochastic transport of high energy trapped particles in a tokamak due to toroidal field ripple is calculated by explicit construction of primary resonances, and a numerical examination of the route to chaos. Critical field ripple amplitude is determined for loss. The expression is given in magnetic coordinates and makes no assumptions regarding shape or up-down symmetry. An algorithm is developed including the effects of prompt axisymmetric orbit loss, ripple trapping, convective banana flow, and stochastic ripple loss, which gives accurate ripple loss predictions for representative Tokamak Fusion Test Reactor and International Thermonuclear Experimental Reactor equilibria. The algorithm is extended to include the effects of collisions and drag, allowing rapid estimation of alpha particle loss in tokamaks 15. Method for calculating the characteristics of nuclear reactions with composite particle International Nuclear Information System (INIS) Zelenskaya, N.S. 1978-01-01 The purpose of the lectures is to attempt to give a brief review of the present status of the theory of nuclear reactions involving composite particles (heavy ions, 6 Li, 7 Li, and 9 Be ions, α-particles). In order to analyze such reactions, one should employ an ''exact'' method of distorted waves with a finite radius of interaction. Since the zero radius approximation is valid only at low momentum transfer, its rejection immediately includes all possible transferred momenta and, consequently, reaction mechanisms different from the usual cluster stripping. We shall discuss a sufficiently general formalism of the distorted waves method, which does not use additional assumptions about the smallness of the region of interaction between particles and about the possible reaction mechanisms. We shall also discuss all physical simplifications introduced in specific particular codes and the ranges of their applicability will be established. (author) 16. Inducing Lift on Spherical Particles by Traveling Magnetic Fields Science.gov (United States) Mazuruk, Konstantin; Grugel, Richard N.; Rose, M. Franklin (Technical Monitor) 2001-01-01 Gravity induced sedimentation of suspensions is a serious drawback to many materials and biotechnology processes, a factor that can, in principle, be overcome by utilizing an opposing Lorentz body force. In this work we demonstrate the utility of employing a traveling magnetic field (TMF) to induce a lifting force on particles dispersed in the fluid. Theoretically, a model has been developed to ascertain the net force, induced by TMF, acting on a spherical body as a function of the fluid medium's electrical conductivity and other parameters. Experimentally, the model is compared to optical observations of particle motion in the presence of TMF. 17. Dynamics of synchrotron VUV-induced intracluster reactions Energy Technology Data Exchange (ETDEWEB) Grover, J.R.
[Brookhaven National Laboratory, Upton, NY (United States) 1993-12-01 Photoionization mass spectrometry (PIMS) using the tunable vacuum ultraviolet radiation available at the National Synchrotron Light Source is being exploited to study photoionization-induced reactions in small van der Waals mixed complexes. The information gained includes the observation and classification of reaction paths, the measurement of onsets, and the determination of relative yields of competing reactions. Additional information is obtained by comparison of the properties of different reacting systems. Special attention is given to finding unexpected features, and most of the reactions investigated to date display such features. However, understanding these reactions demands dynamical information, in addition to what is provided by PIMS. Therefore the program has been expanded to include the measurement of kinetic energy release distributions. 18. Charged-particle thermonuclear reaction rates: II. Tables and graphs of reaction rates and probability density functions International Nuclear Information System (INIS) Iliadis, C.; Longland, R.; Champagne, A.E.; Coc, A.; Fitzgerald, R. 2010-01-01 Numerical values of charged-particle thermonuclear reaction rates for nuclei in the A=14 to 40 region are tabulated. The results are obtained using a method, based on Monte Carlo techniques, that has been described in the preceding paper of this issue (Paper I). We present a low rate, median rate and high rate which correspond to the 0.16, 0.50 and 0.84 quantiles, respectively, of the cumulative reaction rate distribution. The meaning of these quantities is in general different from the commonly reported, but statistically meaningless expressions, 'lower limit', 'nominal value' and 'upper limit' of the total reaction rate. In addition, we approximate the Monte Carlo probability density function of the total reaction rate by a lognormal distribution and tabulate the lognormal parameters μ and σ at each temperature. We also provide a quantitative measure (Anderson-Darling test statistic) for the reliability of the lognormal approximation. The user can implement the approximate lognormal reaction rate probability density functions directly in a stellar model code for studies of stellar energy generation and nucleosynthesis. For each reaction, the Monte Carlo reaction rate probability density functions, together with their lognormal approximations, are displayed graphically for selected temperatures in order to provide a visual impression. Our new reaction rates are appropriate for bare nuclei in the laboratory. The nuclear physics input used to derive our reaction rates is presented in the subsequent paper of this issue (Paper III). In the fourth paper of this issue (Paper IV) we compare our new reaction rates to previous results. 19. Relativistic Photon Induced Processes of Composite Particles International Nuclear Information System (INIS) Ribeiro-Silva, C.I; Curado, E. M. F.; Rego-Monteiro, M. A. 2007-01-01 We consider a complex quantum field theory based on a generalized Heisenberg[1] algebra, which describes at the space-time a spinless composite particle. We compute the perturbative series and the cross section of the scattering process 2 γ→φ - , φ + up to second order in the coupling constant and we find a further contribution due to the structure of the composite pion which is described here phenomenologically by the generalized algebra. We compare the results of this study with available experimental data. (Author)
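A reading aid for the reaction-rate tabulation in entry 18 above: once its lognormal approximation is adopted, the tabulated parameters μ and σ give back the low (0.16), median (0.50) and high (0.84) rates directly, because those quantiles sit essentially at -1σ, 0 and +1σ of the underlying normal (Φ(±1) ≈ 0.84/0.16). The sketch below shows only this arithmetic; the μ and σ values are placeholders, not entries from the published tables.

```python
# Minimal sketch: recover low/median/high reaction rates from lognormal
# parameters mu and sigma. The parameter values below are placeholders.
import math

def lognormal_rate_summary(mu: float, sigma: float):
    """Return (low, median, high) rates, i.e. ~0.16/0.50/0.84 quantiles."""
    low = math.exp(mu - sigma)
    median = math.exp(mu)
    high = math.exp(mu + sigma)
    return low, median, high

# Example with made-up parameters (rate in the usual cm^3 mol^-1 s^-1 convention).
low, med, high = lognormal_rate_summary(mu=-34.2, sigma=0.35)
print(f"low = {low:.3e}, median = {med:.3e}, high = {high:.3e}")
```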
20. Blockage-induced condensation controlled by a local reaction Science.gov (United States) Cirillo, Emilio N. M.; Colangeli, Matteo; Muntean, Adrian 2016-10-01 We consider the setup of stationary zero range models and discuss the onset of condensation induced by a local blockage on the lattice. We show that the introduction of a local feedback on the hopping rates allows us to control the particle fraction in the condensed phase. This phenomenon results in a current versus blockage parameter curve characterized by two nonanalyticity points. 1. Molecular Mechanisms of Particle Radiation Induced Apoptosis in Lymphocyte Science.gov (United States) Shi, Yufang Space radiation, composed of high-energy charged nuclei (HZE particles) and protons, has been previously shown to severely impact immune homeostasis in mice. To determine the molecular mechanisms that mediate acute lymphocyte depletion following exposure to HZE particle radiation mice were exposed to particle radiation beams at Brookhaven National Laboratory. We found that mice given whole body 56Fe particle irradiation (1 GeV/n) had dose-dependent losses in total lymphocyte numbers in the spleen and thymus (using 200, 100 and 50 cGy), with thymocytes being more sensitive than splenocytes. All phenotypic subsets were reduced in number. In general, T cells and B cells were equally sensitive, while CD8+ T cells were more sensitive than CD4+ T cells. In the thymus, immature CD4+CD8+ double-positive thymocytes were exquisitely sensitive to radiation-induced losses, single-positive CD4 or CD8 cells were less sensitive, and the least mature double negative cells were resistant. Irradiation of mice deficient in genes encoding essential apoptosis-inducing proteins revealed that the mechanism of lymphocyte depletion is independent of Fas ligand and TRAIL (TNF-related apoptosis-inducing ligand), in contrast to γ-radiation-induced lymphocyte losses which require the Fas-FasL pathway. Using inhibitors in vitro, lymphocyte apoptosis induced by HZE particle radiation was found to be caspase dependent, and not to involve nitric oxide or oxygen free radicals. 2. Study of reactions induced by 6He on 9Be Directory of Open Access Journals (Sweden) Pires K.C.C. 2014-03-01 We present the results of experiments using a 6He beam on a 9Be target at energies 7 − 9 times the Coulomb barrier. Angular distributions of the elastic, inelastic scattering (target breakup) and the α-particle production in the 6He+9Be collision have been analysed. Total reaction cross sections were obtained from the elastic scattering analyses and a considerable enhancement has been observed by comparing to stable systems. 3. Neutron induced reactions II: (n,x) reactions on medium and heavy nuclei International Nuclear Information System (INIS) Cindro, N. 1976-01-01 Recent interest in (n,x) reactions in the MeV and above range of energies is concentrated on two main subjects: the mechanism of nucleon emission (precompound in particular) and the possible role of clustering in the emission of complex particles. Hence the first two sections of this paper will be devoted to these two subjects. In the last section some other subjects that have recently emerged in the field are discussed 4. Chromosomal aberrations induced by alpha particles International Nuclear Information System (INIS) Guerrero C, C.; Brena V, M.
2005-01-01 Chromosomal aberrations produced by ionizing radiation are commonly used when it is necessary to establish the exposure dose of an individual; the study is used as a complement to the traditional physical systems, and it is applied only in cases where there is doubt about what the conventional dosimetry indicates. Biological dosimetry is based on the frequency of aberrations in the chromosomes of the lymphocytes of the individual under study, and the dose is calculated taking as reference the dose-response curves previously generated in vitro. A case of apparent over-exposure to alpha particles is presented, in which chromosomal aberration analysis was performed to establish whether there was in fact an exposure and, as far as possible, to determine the presumed dose. (Author) 5. Radiation reaction effect on laser driven auto-resonant particle acceleration International Nuclear Information System (INIS) Sagar, Vikram; Sengupta, Sudip; Kaw, P. K. 2015-01-01 The effects of radiation reaction force on laser driven auto-resonant particle acceleration scheme are studied using Landau-Lifshitz equation of motion. These studies are carried out for both linear and circularly polarized laser fields in the presence of static axial magnetic field. From the parametric study, a radiation reaction dominated region has been identified in which the particle dynamics is greatly affected by this force. In the radiation reaction dominated region, the two significant effects on particle dynamics are seen, viz., (1) saturation in energy gain by the initially resonant particle and (2) net energy gain by an initially non-resonant particle which is caused due to resonance broadening. It has been further shown that with the relaxation of resonance condition and with optimum choice of parameters, this scheme may become competitive with the other present-day laser driven particle acceleration schemes. The quantum corrections to the Landau-Lifshitz equation of motion have also been taken into account. The difference in the energy gain estimates of the particle by the quantum corrected and classical Landau-Lifshitz equation is found to be insignificant for the present day as well as upcoming laser facilities 6. Test particle modeling of wave-induced energetic electron precipitation International Nuclear Information System (INIS) Chang, H.C.; Inan, U.S. 1985-01-01 A test particle computer model of the precipitation of radiation belt electrons is extended to compute the dynamic energy spectrum of transient electron fluxes induced by short-duration VLF wave packets traveling along the geomagnetic field lines. The model is adapted to estimate the count rate and associated spectrum of precipitated electrons that would be observed by satellite-based particle detectors with given geometric factor and orientation with respect to the magnetic field. A constant-frequency wave pulse and a lightning-induced whistler wave packet are used as examples of the stimulating wave signals. The effects of asymmetry of particle mirror heights in the two hemispheres and the atmospheric backscatter of loss cone particles on the computed precipitated fluxes are discussed 7. A brief overview of models of nucleon-induced reactions International Nuclear Information System (INIS) Carlson, B.V. 2003-01-01 The basic features of low to intermediate energy nucleon-induced reactions are discussed within the contexts of the optical model, the statistical model, preequilibrium and intranuclear cascade models.
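The statistical (compound-nucleus) picture named in the model overview just above leads, in its simplest Weisskopf form and with the inverse cross section taken as constant, to an evaporation-like emission spectrum N(ε) ∝ ε exp(−ε/T) with nuclear temperature T. The toy numerical sketch below illustrates only that textbook form; the temperature and energy grid are arbitrary illustrative choices, not tied to any particular nucleus or data set.

```python
# Toy Weisskopf-type evaporation spectrum, N(eps) ~ eps * exp(-eps / T),
# with a placeholder nuclear temperature. Purely illustrative.
import numpy as np

T = 1.5                                   # nuclear temperature in MeV (placeholder)
eps = np.linspace(0.0, 15.0, 1501)        # emission energy grid in MeV
spectrum = eps * np.exp(-eps / T)
spectrum /= np.trapz(spectrum, eps)       # normalize to unit area

mean_energy = np.trapz(eps * spectrum, eps)
print(f"mean emission energy ~ {mean_energy:.2f} MeV (analytically 2T = {2 * T:.2f} MeV)")
```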
The calculation of cross sections and other scattering quantities are described. (author) 8. Status of experimental data for neutron induced reactions Energy Technology Data Exchange (ETDEWEB) Baba, Mamoru [Tohoku Univ., Sendai (Japan) 1998-11-01 A short review is presented on the status of experimental data for neutron induced reactions above 20 MeV based on the EXFOR data base and journals. Experimental data which were obtained in a systematic manner and/or by plural authors are surveyed and tabulated for the nuclear data evaluation and the benchmark test of the evaluated data. (author). 61 refs. 9. Maillard reaction induces changes in saccharides and amino acids ... African Journals Online (AJOL) Purpose: To investigate changes in saccharides and amino acids induced by Maillard reaction (MR) during stir-baking of areca nuts (AN). Methods: The pH of aqueous extracts of AN and charred AN (CAN) were measured by a pH meter, and their absorbances at 420 nm were read in an ultraviolet-visible (UV-VIS) ... 10. Some general features of alpha-particle pick-up reactions International Nuclear Information System (INIS) Becchetti, F.D.; Jaenecke, J. 1982-01-01 The general features of single- and multi-α transfer reactions are discussed. While there are numerous difficulties in extracting α-particle ''spectroscopic'' factors, the reduced α-widths extracted appear to be meaningful. These can be related, in an absolute fashion, to α-decay widths (or α-decay lifetimes). Simpler theories describing α-particle transfer reactions are needed and should be formulated in terms of α-widths, i.e. α-particle densities in the nuclear periphery. These are the quantities measured in most experiments. IBA and SU 3 models appear to be most relevant and should be extended to α-transfer reactions for heavy nuclei. (Auth.) 11. Lateral Tension-Induced Penetration of Particles into a Liposome Directory of Open Access Journals (Sweden) Kazuki Shigyou 2017-07-01 Full Text Available It is important that we understand the mechanism of the penetration of particles into a living cell to achieve advances in bionanotechnology, such as for treatment, visualization within a cell, and genetic modification. Although there have been many studies on the application of functional particles to cells, the basic mechanism of penetration across a biological membrane is still poorly understood. Here we used a model membrane system to demonstrate that lateral membrane tension drives particle penetration across a lipid bilayer. After the application of osmotic pressure, fully wrapped particles on a liposome surface were found to enter the liposome. We discuss the mechanism of the tension-induced penetration in terms of narrow constriction of the membrane at the neck part. The present findings are expected to provide insight into the application of particles to biological systems. 12. Analysis of the excitation functions for 3He- and α-induced reactions on 107Ag and 109Ag International Nuclear Information System (INIS) Misaelides, P. 1976-06-01 Excitation functions of 32 3 He- and α-induced nuclear reactions on 107 Ag and 109 Ag have been measured. The incident projectile energies ranged from 10 to 40 MeV for the 3 He-ions and 10 to 100 MeV for the α-particles. The recoil range of some 3 He-induced reaction products and the isomeric ratio values indicate the predominance of a precompound-compound nucleous mechanism. 
The experimental cross sections were compared with the excitation functions calculated on the basis of the compound nucleus and hybrid models. Using the values n 0 ( 3 He) = 5 and n 0 (α) = 4 for the initial exciton number and a = A/12.5 for the level density parameter, a satisfactory reproduction of the experimental results for the α-induced reactions was achieved, whereas the calculated excitation functions for the 3 He-induced reactions are about a factor of two higher. (orig.) [de 13. Reactions of charged and neutral recoil particles following nuclear transformations. Progress report No. 10 International Nuclear Information System (INIS) Ache, H.J. 1976-09-01 The status of the following programs is reported: study of the stereochemistry of halogen atom reactions produced via (n,γ) nuclear reactions with diastereomeric molecules in the condensed phase; decay-induced labelling of compounds of biochemical interest; and chemistry of positronium 14. MODESTY, Statistical Reaction Cross-Sections and Particle Spectra in Decay Chain International Nuclear Information System (INIS) Mattes, W. 1977-01-01 1 - Nature of the physical problem solved: Code MODESTY calculates all energetically possible reaction cross sections and particle spectra within a nuclear decay chain. 2 - Method of solution: It is based on the statistical nuclear model following the method of Uhl (reference 1) where the optical model is used in the calculation of partial widths and the Blatt-Weisskopf single particle model for gamma rays 15. α-particle D-state effects in (d,α) reactions International Nuclear Information System (INIS) Santos, F.D.; Tonsfelt, S.A.; Clegg, T.B.; Ludwig, E.J.; Tagishi, Y.; Wilkerson, J.F. 1982-01-01 It is shown that the tensor analyzing powers for (d,α) reactions are sensitive to the D-state component in the α-particle wave function. The D to S-state asymptotic ratio extracted from T 20 and T 22 data in J = L +- 1 transitions is discussed using an α-particle D state generated with the Jackson and Riska model 16. Electrochemically induced reactions in soils - a new approach to the in-situ remediation of contaminated soils? Energy Technology Data Exchange (ETDEWEB) Rahner, D.; Ludwig, G.; Roehrs, J. [Dresden Univ. of Technology, Inst. of Physical Chemistry and Electrochemistry (Germany); Neumann, V.; Nitsche, C.; Guderitz, I. [Soil and Groundwater Lab. GmbH, Dresden (Germany) 2001-07-01 Electrochemical reactions can be induced in soils if the soil matrix contains particles or films with electronic conducting properties ('microconductors'). In these cases the wet soil may act as a 'diluted' electrochemical solid bed reactor. A discussion of this reaction principle within the soil matrix will be presented here. It will be shown, that under certain conditions immobile organic contaminants may be converted. (orig.) 17. Strangeness Production in Antiproton Induced Nuclear Reactions. Institute of Scientific and Technical Information of China (English) Feng, Zhaoqing 2014 More localized energy deposition is able to be produced in antiproton-nucleus collisions in comparison with heavy-ion collisions due to annihilation reactions. Searching for the cold quark-gluon plasma (QGP) with antiproton beams has been considered as a hot topic both in experiments and in theoretical calculations over the past several decades.
Strangeness production and hypernucleus formation in antiproton-induced nuclear reactions are important in exploring the hyperon (antihyperon)-nucleon (HN) potential and the antinucleon-nucleon interaction, which have been hot topics in the forthcoming experiments at PANDA in Germany. 18. Exciton model and quantum molecular dynamics in inclusive nucleon-induced reactions International Nuclear Information System (INIS) Bevilacqua, Riccardo; Pomp, Stephan; Watanabe, Yukinobu 2011-01-01 We compared inclusive nucleon-induced reactions with two-component exciton model calculations and Kalbach systematics; these successfully describe the production of protons, whereas they fail to reproduce the emission of composite particles, generally overestimating it. We show that the Kalbach phenomenological model needs to be revised for energies above 90 MeV; agreement improves introducing a new energy dependence for direct-like mechanisms described by the Kalbach model. Our revised model calculations suggest multiple preequilibrium emission of light charged particles. We have also compared recent neutron-induced data with quantum molecular dynamics (QMD) calculations complemented by the surface coalescence model (SCM); we observed that the SCM improves the predictive power of QMD. (author) 19. Multiscale simulations of patchy particle systems combining Molecular Dynamics, Path Sampling and Green's Function Reaction Dynamics Science.gov (United States) Bolhuis, Peter Important reaction-diffusion processes, such as biochemical networks in living cells, or self-assembling soft matter, span many orders in length and time scales. In these systems, the reactants' spatial dynamics at mesoscopic length and time scales of microns and seconds is coupled to the reactions between the molecules at microscopic length and time scales of nanometers and milliseconds. This wide range of length and time scales makes these systems notoriously difficult to simulate. While mean-field rate equations cannot describe such processes, the mesoscopic Green's Function Reaction Dynamics (GFRD) method enables efficient simulation at the particle level provided the microscopic dynamics can be integrated out. Yet, many processes exhibit non-trivial microscopic dynamics that can qualitatively change the macroscopic behavior, calling for an atomistic, microscopic description. The recently developed multiscale Molecular Dynamics Green's Function Reaction Dynamics (MD-GFRD) approach combines GFRD for simulating the system at the mesoscopic scale where particles are far apart, with microscopic Molecular (or Brownian) Dynamics, for simulating the system at the microscopic scale where reactants are in close proximity. The association and dissociation of particles are treated with rare event path sampling techniques. I will illustrate the efficiency of this method for patchy particle systems. Replacing the microscopic regime with a Markov State Model avoids the microscopic regime completely. The MSM is then pre-computed using advanced path-sampling techniques such as multistate transition interface sampling. I illustrate this approach on patchy particle systems that show multiple modes of binding. MD-GFRD is generic, and can be used to efficiently simulate reaction-diffusion systems at the particle level, including the orientational dynamics, opening up the possibility for large-scale simulations of e.g. protein signaling networks. 20.
Theoretical study of cross sections of proton-induced reactions on cobalt Directory of Open Access Journals (Sweden) Mustafa Yiğit 2018-04-01 Full Text Available Nuclear fusion may be among the strongest sustainable ways to replace fossil fuels because it does not contribute to acid rain or global warming. In this context, activated cobalt materials in corrosion products for fusion energy are significant in determination of dose levels during maintenance after a coolant leak in a nuclear fusion reactor. Therefore, cross-section studies on cobalt material are very important for fusion reactor design. In this article, the excitation functions of some nuclear reaction channels induced by proton particles on 59Co structural material were predicted using different models. The nuclear level densities were calculated using different choices of available level density models in ALICE/ASH code. Finally, the newly calculated cross sections for the investigated nuclear reactions are compared with the experimental values and TENDL data based on TALYS nuclear code. Keywords: Cobalt, Nuclear Structural Materials, Reaction Cross Section, TENDL Database 1. Chemical memory reactions induced bursting dynamics in gene expression. Science.gov (United States) Tian, Tianhai 2013-01-01 Memory is a ubiquitous phenomenon in biological systems in which the present system state is not entirely determined by the current conditions but also depends on the time evolutionary path of the system. Specifically, many memorial phenomena are characterized by chemical memory reactions that may fire under particular system conditions. These conditional chemical reactions contradict to the extant stochastic approaches for modeling chemical kinetics and have increasingly posed significant challenges to mathematical modeling and computer simulation. To tackle the challenge, I proposed a novel theory consisting of the memory chemical master equations and memory stochastic simulation algorithm. A stochastic model for single-gene expression was proposed to illustrate the key function of memory reactions in inducing bursting dynamics of gene expression that has been observed in experiments recently. The importance of memory reactions has been further validated by the stochastic model of the p53-MDM2 core module. Simulations showed that memory reactions is a major mechanism for realizing both sustained oscillations of p53 protein numbers in single cells and damped oscillations over a population of cells. These successful applications of the memory modeling framework suggested that this innovative theory is an effective and powerful tool to study memory process and conditional chemical reactions in a wide range of complex biological systems. 2. Heterogeneous Reactions between Toluene and NO2 on Mineral Particles under Simulated Atmospheric Conditions. Science.gov (United States) Niu, Hejingying; Li, Kezhi; Chu, Biwu; Su, Wenkang; Li, Junhua 2017-09-05 Heterogeneous reactions between organic and inorganic gases with aerosols are important for the study of smog occurrence and development. In this study, heterogeneous reactions between toluene and NO 2 with three atmospheric mineral particles in the presence or absence of UV light were investigated. The three mineral particles were SiO 2 , α-Fe 2 O 3 , and BS (butlerite and szmolnokite). In a dark environment, benzaldehyde was produced on α-Fe 2 O 3 . For BS, nitrotoluene and benzaldehyde were obtained. No aromatic products were produced in the absence of NO 2 in the system. 
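The gene-expression entry above extends the standard stochastic simulation algorithm with memory reactions. For orientation only, the sketch below runs the plain, memoryless Gillespie algorithm for a two-state bursting gene (promoter switching ON/OFF, transcription only while ON); it does not implement the proposed memory extension, and all rate constants are arbitrary placeholder values.

```python
# Baseline (memoryless) Gillespie SSA for a two-state bursting gene model.
# Reactions: gene OFF->ON, ON->OFF, transcription while ON, mRNA degradation.
import math
import random

def ssa_bursting(t_end=1000.0, k_on=0.05, k_off=0.2, k_tx=2.0, k_deg=0.1, seed=1):
    random.seed(seed)
    t, gene_on, mrna = 0.0, 0, 0
    trajectory = []
    while t < t_end:
        # Propensities: activation, deactivation, transcription, degradation.
        a = [k_on * (1 - gene_on), k_off * gene_on, k_tx * gene_on, k_deg * mrna]
        a0 = sum(a)
        if a0 == 0.0:
            break
        t += -math.log(1.0 - random.random()) / a0   # exponential waiting time
        r, cum, idx = random.random() * a0, 0.0, 0
        for idx, ai in enumerate(a):                 # choose which reaction fires
            cum += ai
            if r < cum:
                break
        if idx == 0:
            gene_on = 1
        elif idx == 1:
            gene_on = 0
        elif idx == 2:
            mrna += 1
        else:
            mrna -= 1
        trajectory.append((t, mrna))
    return trajectory

traj = ssa_bursting()
print("final time and mRNA copy number:", traj[-1])
```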
In the presence of UV irradiation, benzaldehyde was detected on the SiO 2 surface. Identical products were produced in the presence and absence of UV light over α-Fe 2 O 3 and BS. UV light promoted nitrite to nitrate on mineral particles surface. On the basisi of the X-ray photoelectron spectroscopy (XPS) results, a portion of BS was reduced from Fe 3+ to Fe 2+ with the adsorption of toluene or the reaction with toluene and NO 2 . Sulfate may play a key role in the generation of nitrotoluene on BS particles. From this research, the heterogeneous reactions between organic and inorganic gases with aerosols that occur during smog events will be better understood. 3. Kinematical analysis of the data from three-particle reactions by statistical methods International Nuclear Information System (INIS) Krug, J.; Nocken, U. 1976-01-01 A statistical procedure to unfold the kinematics of coincidence spectra from three-particle reactions is presented which is used to protect the coincidence events on the kinematical curve. The width of the projection intervals automatically matches the experimental resolution.. The method is characterized by its consistency thus also permitting a reasonable projection of sum-coincidences. (Auth.) 4. A simplified pyrolysis model of a biomass particle based on infinitesimally thin reaction front approximation NARCIS (Netherlands) Haseli, Y.; Oijen, van J.A.; Goey, de L.P.H. 2012-01-01 This paper presents a simplified model for prediction of pyrolysis of a biomass particle. The main assumptions include (1) decomposition of virgin material in an infinitesimal thin reaction front at a constant pyrolysis temperature, (2) constant thermo-physical properties throughout the process, 5. Charged particle-induced nuclear fission reactions – Progress and ... Indian Academy of Sciences (India) attracting the attention of the investigators right from the beginning of nuclear fis- ... the change from mass asymmetric division to symmetric division as A and N/Z values .... (a) Schematic of the fissioning nucleus showing the decision making. 6. HLA Association with Drug-Induced Adverse Reactions Directory of Open Access Journals (Sweden) Wen-Lang Fan 2017-01-01 Full Text Available Adverse drug reactions (ADRs remain a common and major problem in healthcare. Severe cutaneous adverse drug reactions (SCARs, such as Stevens–Johnson syndrome (SJS/toxic epidermal necrolysis (TEN with mortality rate ranges from 10% to more than 30%, can be life threatening. A number of recent studies demonstrated that ADRs possess strong genetic predisposition. ADRs induced by several drugs have been shown to have significant associations with specific alleles of human leukocyte antigen (HLA genes. For example, hypersensitivity to abacavir, a drug used for treating of human immunodeficiency virus (HIV infection, has been proposed to be associated with allele 57:01 of HLA-B gene (terms HLA-B∗57:01. The incidences of abacavir hypersensitivity are much higher in Caucasians compared to other populations due to various allele frequencies in different ethnic populations. The antithyroid drug- (ATDs- induced agranulocytosis are strongly associated with two alleles: HLA-B∗38:02 and HLA-DRB1∗08:03. In addition, HLA-B∗15:02 allele was reported to be related to carbamazepine-induced SJS/TEN, and HLA-B∗57:01 in abacavir hypersensitivity and flucloxacillin induced drug-induced liver injury (DILI. 
In this review, we summarized the alleles of HLA genes which have been proposed to have association with ADRs caused by different drugs. 7. Neutron-induced reactions on U and Th - A new approach via AMS International Nuclear Information System (INIS) Wallner, A.; Capote, R.; Christl, M.; Fifield, L.K.; Srncik, M.; Tims, S.; Hotchkis, M.; Krasa, A.; Lachner, J.; Lippold, J.; Plompen, A.; Semkova, V.; Steier, P.; Winkler, S. 2014-01-01 Recent studies exhibit discrepancies at keV and MeV energies between major nuclear data libraries for 238 U(n,γ), 232 Th(n,γ) and also for (n,xn) reactions. We have extended our initial (n,γ) measurements on 235,238 U to higher neutron energies and to additional reaction channels. Neutron-induced reactions on 232 Th and 238 U were measured by a combination of the activation technique and atom counting of the reaction products using accelerator mass spectrometry (AMS). Natural thorium and uranium samples were activated with quasi-monoenergetic neutrons at IRMM. Neutron capture data were produced for neutron energies between 0.5 and 5 MeV. Fast neutron-induced reactions were studied in the energy range from 17 to 22 MeV. Preliminary data indicate a fair agreement with data libraries; however at the lower band of existing data. This approach represents a complementary method to on-line particle detection techniques and also to conventional decay counting. (authors) 8. Nonequilibrium photochemical reactions induced by lasers. Technical progress report International Nuclear Information System (INIS) Steinfeld, J.I. 1978-04-01 Research has progressed in six principal subject areas of interest to DOE advanced (laser) isotope separation efforts. These are (1) Infrared double resonance spectroscopy of molecules excited by multiple infrared photon absorption, particularly SF 6 and vinyl chloride. (2) Infrared multiphoton excitation of metastable triplet-state molecules, e.g., biacetyl. (3) An Information Theory analysis of multiphoton excitation and collisional deactivation has been carried out. (4) The mechanism of infrared energy deposition and multiphoton-induced reactions in chlorinated ethylene derivatives; and RRKM (statistical) model accounts for all observed behavior of the system, and a deuterium-specific reaction pathway has been identified. (5) Diffusion-enhanced laser isotope separation in N 16 O/N 18 O. (6) A technical evaluation of laser-induced chemistry and isotope separation 9. Dynamical isospin effects in nucleon-induced reactions International Nuclear Information System (INIS) Ou Li; Li Zhuxia; Wu Xizhen 2008-01-01 The isospin effects in proton-induced reactions on isotopes of 112-132 Sn and the corresponding β-stable isobars are studied by means of the improved quantum molecular dynamics model and some sensitive probes for the density dependence of the symmetry energy at subnormal densities are proposed. The beam energy range is chosen to be 100-300 MeV. Our study shows that the system size dependence of the reaction cross sections for p+ 112-132 Sn deviates from the Carlson's empirical expression obtained by fitting the reaction cross sections for proton on nuclei along the β-stability line and sensitively depends on the stiffness of the symmetry energy. 
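For the activation-plus-AMS measurements described in the U/Th entry above, the cross section is in essence inferred from the number of product atoms created by a known neutron fluence. The snippet below is only a generic thin-target activation estimate with made-up numbers; it is not the analysis actually used by the authors.

```python
# Thin-target activation: N_product ≈ N_target * sigma * fluence,
# so sigma ≈ (N_product / N_target) / fluence.  All numbers are illustrative.
atom_ratio = 2.0e-12               # product/target atom ratio measured by AMS
fluence = 5.0e14                   # neutrons per cm^2 delivered during irradiation

sigma_cm2 = atom_ratio / fluence
sigma_barn = sigma_cm2 / 1.0e-24   # 1 barn = 1e-24 cm^2
print(f"estimated cross section ≈ {sigma_barn * 1e3:.2f} mb")
```

The attraction of atom counting is visible in the arithmetic: the activity and half-life of the product never enter, only the atom ratio and the fluence.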
We also find that the angular distribution of elastic scattering for p+ 132 Sn at large impact parameters is very sensitive to the density dependence of the symmetry energy, which is uniquely due to the effect of the symmetry potential with no mixture of the effect from the isospin dependence of the nucleon-nucleon cross sections. The isospin effects in neutron-induced reactions are also studied and it is found that the effects are just opposite to that in proton-induced reactions. We find that the difference between the peaks of the angular distributions of elastic scattering for p+ 132 Sn and n+ 132 Sn at E p,n =100 MeV and b=7.5 fm is positive for soft symmetry energy U sym sf and negative for super-stiff symmetry energy U sym nlin and close to zero for linear density dependent symmetry energy U sym lin , which seems very useful for constraining the density dependence of the symmetry energy at subnormal densities 10. Students' Visualisation of Chemical Reactions--Insights into the Particle Model and the Atomic Model Science.gov (United States) Cheng, Maurice M. W. 2018-01-01 This paper reports on an interview study of 18 Grade 10-12 students' model-based reasoning of a chemical reaction: the reaction of magnesium and oxygen at the submicro level. It has been proposed that chemical reactions can be conceptualised using two models: (i) the "particle model," in which a reaction is regarded as the simple… 11. Crosschecking of alpha particle monitor reactions up to 50 MeV Energy Technology Data Exchange (ETDEWEB) Takács, S., E-mail: [email protected] [Institute for Nuclear Research, Hungarian Academy of Sciences, 4026 Debrecen (Hungary); Ditrói, F.; Szűcs, Z. [Institute for Nuclear Research, Hungarian Academy of Sciences, 4026 Debrecen (Hungary); Haba, H.; Komori, Y. [Nishina Center for Accelerator-Based Science, RIKEN, Wako 351-0198 (Japan); Aikawa, M. [Faculty of Science, Hokkaido University, Sapporo 060-0810 (Japan); Nishina Center for Accelerator-Based Science, RIKEN, Wako 351-0198 (Japan); Saito, M. [Graduate School of Science, Hokkaido University, Sapporo 060-0810 (Japan); Nishina Center for Accelerator-Based Science, RIKEN, Wako 351-0198 (Japan) 2017-04-15 Selected reactions with well-defined excitation functions can be used to monitor the parameters of charged particle beams. The frequently used reactions for monitoring alpha particle beams are the {sup 27}Al(α,x){sup 22,24}Na, {sup nat}Ti(α,x){sup 51}Cr, {sup nat}Cu(α,x){sup 66,67}Ga and {sup nat}Cu(α,x){sup 65}Zn reactions. The excitation functions for these reactions were studied using the activation method and stacked target irradiation technique to crosscheck and to compare the above six reactions. Thin metallic foils with natural isotopic composition and well defined thickness were stacked together in sandwich targets and were irradiated at the AVF cyclotron of RIKEN with an alpha particle beam of 51.2 MeV. The activity of the target foils were assessed by using high-resolution gamma spectrometers of high purity Ge detectors. The data sets of the six processes were crosschecked with each other to provide consistent, cross-linked numerical cross section data. 12. 
Study of aniline polymerization reactions through the particle size formation in acidic and neutral medium Science.gov (United States) Aribowo, Slamet; Hafizah, Mas Ayu Elita; Manaf, Azwar; Andreas 2018-04-01 In the present paper, we report particle size kinetic studies on conducting polyaniline (PANI) synthesized through a chemical oxidative polymerization technique from aniline monomer. PANI was prepared using ammonium persulfate (APS) as the oxidizing agent in acidic and neutral media at batch temperatures of 20, 30 and 50 °C. The studies showed that the complete polymerization reaction progressed within 480 minutes. During the reaction the pH of the solution reached values of 0.8 to 1.2 in acidic media, while in neutral media it reached values of 3.8 to 4.9. The batch temperature controlled the polymerization reaction: as the reaction progressed, the solution temperature rose above the batch temperature before settling back to the initial value. An increase in the batch temperature gave a higher rise in the solution temperature for the two media, which did not exceed 50 °C. The final product of the polymerization reaction was confirmed to be PANI by Fourier Transform Infra-Red (FTIR) spectrophotometry for molecular structure identification. The average particle size of PANI obtained in the two different media is evidently similar, in the range 30 - 40 μm, and insensitive to the batch temperature. However, the particle size of PANI obtained from the polymerization reaction at a batch temperature of 50 °C under acidic conditions reached ~53.1 μm at the tip of the propagation stage, which started in the first 5 minutes; this is the largest size among the batch temperatures. Under neutral conditions the particle size is much larger, reaching 135 μm at a batch temperature of 20 °C. It is concluded that particle size formation during the polymerization reaction is one of the important parameters for characterizing particle growth of the polymer and the reaction kinetics of the synthesis. 13. Adverse drug reactions induced by cardiovascular drugs in outpatients Directory of Open Access Journals (Sweden) Gholami K 2008-03-01 Considering increased use of cardiovascular drugs and limitations in pre-marketing trials for drug safety evaluation, post marketing evaluation of adverse drug reactions (ADRs) induced by this class of medicinal products seems necessary. Objectives: To determine the rate and seriousness of adverse reactions induced by cardiovascular drugs in outpatients. To compare sex and different age groups in developing ADRs with cardiovascular agents. To assess the relationship between frequencies of ADRs and the number of drugs used. Methods: This cross-sectional study was done in a cardiovascular clinic at a teaching hospital. All patients during an eight month period were evaluated for cardiovascular drug induced ADRs. Patient and reaction factors were analyzed in detected ADRs. Patients with or without ADRs were compared in sex and age by using the chi-square test. Assessing the relationship between frequencies of ADRs and the number of drugs used was done by using Pearson analysis. Results: The total number of 518 patients was visited at the clinic. ADRs were detected in 105 (20.3%) patients. The most frequent ADRs occurred in the age group of 51-60.
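The outpatient ADR study above reports chi-square tests for sex and age and a Pearson correlation between the number of drugs and ADR frequency. The following snippet only sketches how such tests are typically run in Python; the contingency table and the per-patient drug counts are fabricated placeholders, not the study's data.

```python
from scipy import stats

# Hypothetical 2x2 table: ADR (yes/no) by sex (women/men).
table = [[62, 190],   # women: with ADR, without ADR
         [43, 223]]   # men:   with ADR, without ADR
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.3f}, p = {p:.3f}")

# Hypothetical per-patient data: number of drugs vs. number of ADRs.
n_drugs = [1, 2, 2, 3, 4, 4, 5, 6, 6, 7]
n_adrs  = [0, 0, 1, 0, 1, 1, 1, 2, 1, 2]
r, p_r = stats.pearsonr(n_drugs, n_adrs)
print(f"Pearson r = {r:.3f}, p = {p_r:.3f}")
```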
The highest rate of ADRs was recorded to be induced by Diltiazem (23.5%) and the lowest rate with Atenolol (3%). Headache was the most frequently detected ADR (23%). Assessing the severity and preventability of ADRs revealed that 1.1% of ADRs were detected as severe and 1.9% as preventable reactions. Women developed significantly more ADRs in this study (chi square = 3.978, P<0.05). ADRs occurred more frequently with increasing age in this study (chi square = 15.871, P<0.05). With an increasing number of drugs used, the frequency of ADRs increased (Pearson = 0.259, P<0.05). Conclusion: Monitoring ADRs in patients using cardiovascular drugs is a matter of importance since this class of medicines is usually used by elderly patients with critical conditions and underlying diseases. 14. Adverse drug reactions induced by cardiovascular drugs in outpatients. Science.gov (United States) Gholami, Kheirollah; Ziaie, Shadi; Shalviri, Gloria 2008-01-01 Considering increased use of cardiovascular drugs and limitations in pre-marketing trials for drug safety evaluation, post marketing evaluation of adverse drug reactions (ADRs) induced by this class of medicinal products seems necessary. To determine the rate and seriousness of adverse reactions induced by cardiovascular drugs in outpatients. To compare sex and different age groups in developing ADRs with cardiovascular agents. To assess the relationship between frequencies of ADRs and the number of drugs used. This cross-sectional study was done in a cardiovascular clinic at a teaching hospital. All patients during an eight month period were evaluated for cardiovascular drug induced ADRs. Patient and reaction factors were analyzed in detected ADRs. Patients with or without ADRs were compared in sex and age by using the chi-square test. Assessing the relationship between frequencies of ADRs and the number of drugs used was done by using Pearson analysis. The total number of 518 patients was visited at the clinic. ADRs were detected in 105 (20.3%) patients. The most frequent ADRs occurred in the age group of 51-60. The highest rate of ADRs was recorded to be induced by Diltiazem (23.5%) and the lowest rate with Atenolol (3%). Headache was the most frequently detected ADR (23%). Assessing the severity and preventability of ADRs revealed that 1.1% of ADRs were detected as severe and 1.9% as preventable reactions. Women developed significantly more ADRs in this study (chi square = 3.978, P<0.05). ADRs occurred more frequently with increasing age (chi square = 15.871, P<0.05). With an increasing number of drugs used, the frequency of ADRs increased (Pearson = 0.259, P<0.05). Monitoring ADRs in patients using cardiovascular drugs is a matter of importance since this class of medicines is usually used by elderly patients with critical conditions and underlying diseases. 15. Low-energy deuteron-induced reactions on Nb-93 Czech Academy of Sciences Publication Activity Database Avrigeanu, M.; Avrigeanu, V.; Bém, Pavel; Fischer, U.; Honusek, Milan; Koning, A.J.; Mrázek, Jaromír; Šimečková, Eva; Štefánik, Milan; Závorka, Lukáš 2013-01-01 Roč. 88, č. 1 (2013), 014612 ISSN 0556-2813 R&D Projects: GA MŠk(XE) LM2011019 Institutional support: RVO:61389005 Keywords : deuteron-induced reactions * cross sections * breakup mechanism Subject RIV: BG - Nuclear, Atomic and Molecular Physics, Colliders Impact factor: 3.881, year: 2013 http://prc.aps.org/pdf/PRC/v88/i1/e014612 16. Particle spectra and correlations in sulfur-nucleus reactions at 200 GeV per nucleon International Nuclear Information System (INIS) Alber, T.
1995-08-01 In this work the production of negatively charged particles and two-particle correlations in nucleus-nucleus reactions at high energies are studied. The range of the acceptance of experiment NA35 at the CERN-SPS was increased in 1990 by adding a large volume Time Projection Chamber downstream of the streamer chamber. The analysis of the data taken during the run period 1991 shows that such a detector faces no basic problems when operated in a high multiplicity experiment. The scenario of particle production in sulfur-nucleus reactions is studied via the measurement of rapidity and transverse momentum distributions which show good agreement with the results from other data-sets of the same experiment. The width of the rapidity distribution is only a little narrower than observed in nucleon-nucleon collisions and is in contradiction to the assumption of a static source with isotropic particle emission. The shape of the transverse momentum distribution indicates an effective temperature at freeze-out of about 150 MeV. The analysis of the two-particle correlation benefits particularly from the high statistics collected for different reactions in different phase-space regions. This allows a differential analysis of the correlation function for the different components of the momentum difference in various regions of rapidity and transverse momentum. It is recalled that for an expanding source the experimentally obtained radius parameters are not a direct measure of the geometrical size of the source but measure a so-called region of homogeneity. This expectation is also confirmed by a microscopic simulation of the reaction. The experimental results for the radius parameters support such a description of the particle production mechanism in terms of an expanding source. (orig.) 17. Hamiltonian theory of wave and particle in quantum mechanics 2. Hamilton-Jacobi theory and particle back-reaction International Nuclear Information System (INIS) Holland, P. 2001-01-01 Pursuing the Hamiltonian formulation of the De Broglie-Bohm (deBB) theory presented in the preceding paper, the Hamilton-Jacobi (HJ) theory of the wave-particle system is developed. It is shown how to derive a HJ equation for the particle, which enables trajectories to be computed algebraically using Jacobi's method. Using Liouville's equation in the HJ representation it was found the restriction on the Jacobi solutions which implies the quantal distribution. This gives a first method for interpreting the deBB theory in HJ terms. A second method proceeds via an explicit solution of the field+particle HJ equation. Both methods imply that the quantum phase may be interpreted as an incomplete integral. Using these results and those of the first paper it is shown how Schroedinger's equation can be represented in Liouvilian terms, and vice versa. The general theory of canonical transformations that represent quantum unitary transformations is given, and it is shown in principle how the trajectory theory may be expressed in other quantum representations. Using the solution found for the total HJ equation, an explicit solution for the additional field containing a term representing the particle back-reaction is found. The conservation of energy and momentum in the model is established, and weak form of the action-reaction principle is shown to hold. Alternative forms for the Hamiltonian are explored and it is shown that, within this theoretical context, the deBB theory is not unique. 
The theory potentially provides an alternative way of obtaining the classical limit. 18. Particle induced X-ray emission and complementary nuclear methods for trace element determination; Plenary lecture Energy Technology Data Exchange (ETDEWEB) Johansson, S A.E. [Lund Univ. (Sweden). Dept. of Nuclear Physics] 1992-03-01 In this review the state-of-the-art of particle induced X-ray emission (PIXE) methods for the determination of trace elements is described. The developmental work has mostly been carried out in nuclear physics laboratories, where accelerators are available, but now the increased interest has led to the establishment of other dedicated PIXE facilities. The reason for this interest is the versatility, high sensitivity and multi-element capability of PIXE analysis. A further very important advantage is that PIXE can be combined with the microbeam technique, which makes elemental mapping with a spatial resolution of about 1 {mu}m possible. As a technique, PIXE can also be combined with other nuclear reactions such as elastic scattering and particle-induced gamma emission, so that light elements can be determined. The usefulness of PIXE is illustrated by a number of typical applications in biology, medicine, geology, air pollution research, archaeology and the arts. (author). 19. Neutron-induced complex reaction analysis with 3D nuclear track simulation International Nuclear Information System (INIS) Sajo-Bohus, L.; Palfalvi, J.K.; Akatov, Yu.; Arevalo, O.; Greaves, E.D.; Nemeth, P.; Palacios, D.; Szabo, J.; Eoerdoegh, I. 2005-01-01 Complex (multiple) etched tracks are analysed through digitised images and 3D simulation by a purpose-built algorithm. From a binary track image an unfolding procedure is followed to generate a 3D track model, from which several track parameters are estimated. The method presented here allows the deposited energy originating from particle fragmentation or carbon spallation to be estimated by means of induced tracks in commercially available PADC detectors. Results of evaluated nuclear tracks related to the 12 C (n,3αn ' ) reaction are presented here. The detectors were exposed on the ISS in 2001. 20. Study on fine particles influence on sodium sulfite and oxygen gas-liquid reaction Energy Technology Data Exchange (ETDEWEB) Tao, Shuchang; Zhao, Bo; Wang, Shujuan; Zhuo, Yuqun; Chen, Changhe [Tsinghua Univ., Beijing (China). Dept. of Thermal Engineering; Ministry of Education, Beijing (China). Key Lab. for Thermal Science and Power Engineering 2013-07-01 Wet limestone scrubbing is the most common flue gas desulfurization process for control of sulfur dioxide emissions from the combustion of fossil fuels, and forced oxidation is a key part of the reaction. During the reaction, which is controlled by gas-liquid mass transfer, the fine particles' characteristics, size, solid loading and temperature have a great influence on the gas-liquid mass transfer. The aim of the present work is to explain how these factors influence the reaction between Na{sub 2}SO{sub 3} and O{sub 2} and to find the best reaction conditions through experiment. The oxidation rate was experimentally studied by contacting pure oxygen with a sodium sulfite solution with active carbon particles in a stirred tank, and the system pressure drop was recorded by the pressure sensor. At the beginning the pressure is about 215 kPa and the Na{sub 2}SO{sub 3} concentration is about 0.5 mol/L. The temperature is 40, 50, 60, 70 or 80 °C.
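In the sodium sulfite oxidation experiment just described, the gas-liquid mass transfer coefficient is inferred from the decay of the oxygen pressure in the closed stirred tank. A common way to reduce such data, sketched below with entirely invented numbers, is to assume the dissolved-oxygen concentration stays near zero (fast sulfite reaction), so the pressure decays exponentially and the slope of ln P versus t lumps kL·a with Henry's constant and the tank geometry; this is only a generic illustration, not the authors' procedure.

```python
import numpy as np

# Fabricated pressure record (kPa) from a closed tank, sampled every 60 s.
t = np.arange(0, 600, 60, dtype=float)                        # s
P = 215.0 * np.exp(-2.0e-4 * t) + np.random.normal(0, 0.2, t.size)

# With fast liquid-phase reaction (C_bulk ≈ 0): dP/dt = -K * P,
# where K = kL*a * H * R*T * (V_liq / V_gas).
slope, intercept = np.polyfit(t, np.log(P), 1)
K = -slope                                                    # 1/s

# Assumed values needed to unlump kL*a (illustrative only):
H = 1.3e-5                 # mol m^-3 Pa^-1, Henry solubility of O2
R, T = 8.314, 323.0
V_liq, V_gas = 1.0e-3, 0.5e-3                                 # m^3
kLa = K * V_gas / (H * R * T * V_liq)
print(f"decay constant K = {K:.2e} 1/s, estimated kL*a = {kLa:.2e} 1/s")
```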
Comparing with the results obtained without particles, we can conclude that high temperature, proper loadings and smaller particles result in higher mass transfer coefficients k{sub L}. 1. Sulphation reactions of oxidic dust particles in waste heat boiler environment. Literature review Energy Technology Data Exchange (ETDEWEB) Ranki, T. 1999-09-01 Sulphation of metal oxides has an important role in many industrial processes. In different applications sulphation reactions have different aims and characteristics. In the flash smelting process sulphation of oxidic flue dust is a spontaneous and inevitable phenomenon, which takes place in the waste heat boiler (WHB) when cooling down hot dust laden off-gases from sulphide smelters. Oxidic dust particles (size 0 - 50 {mu}m) react with O{sub 2} and SO{sub 2} or SO{sub 3} in a certain temperature range (500 - 800 deg C). Sulphation reactions are highly exothermic, releasing a large amount of heat, which affects the gas cooling and thermal performance of the boiler. Thermodynamics and kinetics of the system have to be known to improve the process and WHB operation. The rate of sulphation is affected by the prevailing conditions (temperature, gas composition) and particle size and microstructure (porosity, surface area). Some metal oxides (CuO) can react readily with SO{sub 2} and O{sub 2} and act as self-catalysts, but others (NiO) require the presence of an external catalyst to enhance the SO{sub 3} formation for sulphation to proceed. Some oxides (NiO) sulphate directly, while some (CuO) may first form intermediate phases (basic sulphates) depending on the reaction conditions. Thus, the reaction mechanisms are very complex. The aim of this report was to search for information about the factors affecting the dust sulphation reactions and the suggested reaction mechanisms and kinetics. Many investigators have studied sulphation thermodynamics and reaction kinetics and mechanisms of macroscopic metal oxide pieces, but only a few articles have been published about sulphation of microscopic particles, like dust. All the microscale studies found dealt with sulphation reactions of calcium oxide, which is not present in the flash smelting process, but is used as an SO{sub 2} absorbent in combustion processes. However, also these investigations may give some hints about the sulphation 2. Laser-induced reaction alumina coating on ceramic composite Science.gov (United States) Xiao, Chenghe Silicon carbide ceramics are susceptible to corrosion by certain industrial furnace environments. This is also true for a new class of silicon carbide-particulate reinforced alumina-matrix composite (SiC(p)/Al2O3) since it contains more than 55% of SiC particulate within the composite. This behavior would limit the use of SiC(p)/Al2O3 composites in ceramic heat exchangers. Because oxide ceramics corrode substantially less in the same environments, a laser-induced reaction alumina coating technique has been developed for improving corrosion resistance of the SiC(p)/Al2O3 composite. Specimens with and without the laser-induced reaction alumina coating were subjected to corrosion testing at 1200 °C in an air atmosphere containing Na2CO3 for 50-200 hours. Corroded specimens were characterized via x-ray diffraction (XRD), scanning electron microscopy (SEM), and energy dispersive spectrometry (EDS).
The uncoated SiC(p)/Al2O3 composite samples experienced an initial increase in weight during the exposure to Na2CO3 at 1200 °C due to the oxidation of residual aluminum metal in the composite. There was no significant difference in weight change during exposure times between 50 and 200 hours. The oxidation layer formed on the as-received composite surface consisted of Si and Al2O3 (after washing with an HF solution). The oxidation layer grew outward and inward from the original surface of the composite. The growth rate in the outward direction was faster than in the inward direction. The formation of the Si/Al2O3 oxidation layer on the as-received composite was nonuniform, and localized corrosion was observed. The coated samples experienced very little mass increase. The laser-induced reaction alumina coating effectively provided protection for the SiC(p)/Al2O3 composite by keeping the corrodents from contacting the composite and by the formation of some refractory compounds such as Na2O·Al2O3·SiO2 and Na2Al22O34. 3. Mechanisms of emission of charged particles in 6Li + 6Li and 6Li + 10B reactions at low energies International Nuclear Information System (INIS) Quebert, Jean 1964-01-01 The lithium 6 nucleus is a projectile of interest for studying nuclear reactions at low energy due to the possibility of obtaining high heats of reaction, and to its structure, which can play an important role in the projectile-target interaction. This research thesis focused on the study of two low-energy reactions provoked by lithium projectiles. These reactions are studied within the framework of the theoretical model of aggregates. The first part presents the experimental conditions of both reactions, reports the development and analysis of nuclear plates, and the transformation of a given type of particle histogram into a spectrum in the centre-of-mass system. The next parts report the study of the 6 Li + 6 Li reaction (previous results, kinematic analysis, spectrum of secondary particles, theoretical analysis of results) and of the 6 Li + 10 B reaction (previous results, experimental results, study of the continuous spectrum of alpha particles, reaction mechanisms). 4. Human FcγRIIA induces anaphylactic and allergic reactions. Science.gov (United States) Jönsson, Friederike; Mancardi, David A; Zhao, Wei; Kita, Yoshihiro; Iannascoli, Bruno; Khun, Huot; van Rooijen, Nico; Shimizu, Takao; Schwartz, Lawrence B; Daëron, Marc; Bruhns, Pierre 2012-03-15 IgE and IgE receptors (FcεRI) are well-known inducers of allergy. We recently found in mice that active systemic anaphylaxis depends on IgG and IgG receptors (FcγRIIIA and FcγRIV) expressed by neutrophils, rather than on IgE and FcεRI expressed by mast cells and basophils. In humans, neutrophils, mast cells, basophils, and eosinophils do not express FcγRIIIA or FcγRIV, but FcγRIIA. We therefore investigated the possible role of FcγRIIA in allergy by generating novel FcγRIIA-transgenic mice, in which various models of allergic reactions induced by IgG could be studied. In mice, FcγRIIA was sufficient to trigger active and passive anaphylaxis, and airway inflammation in vivo. Blocking FcγRIIA in vivo abolished these reactions. We identified mast cells to be responsible for FcγRIIA-dependent passive cutaneous anaphylaxis, and monocytes/macrophages and neutrophils to be responsible for FcγRIIA-dependent passive systemic anaphylaxis.
Supporting these findings, human mast cells, monocytes and neutrophils produced anaphylactogenic mediators after FcγRIIA engagement. IgG and FcγRIIA may therefore contribute to allergic and anaphylactic reactions in humans. 5. Production and decay of baryonic resonances in pion induced reactions Directory of Open Access Journals (Sweden) Przygoda Witold 2016-01-01 Full Text Available Pion induced reactions give unique opportunities for an unambiguous description of baryonic resonances and their coupling channels. A systematic energy scan and high precision data, in conjunction with a partial wave analysis, allow for the study of the excitation function of the various contributions. A review of available world data unravels strong need for modern facilities delivering measurements with a pion beam. Recently, HADES collaboration collected data in pion-induced reactions on light (12C and heavy (74W nuclei at a beam momentum of 1.7 GeV/c dedicated to strangeness production. It was followed by a systematic scan at four different pion beam momenta (0.656, 0.69, 0.748 and 0.8 GeV/c in π− − p reaction in order to tackle the role of N(1520 resonance in conjunction with the intermediate ρ production. First results on exclusive channels with one pion (π− p and two pions (nπ+π−, pπ−π0 in the final state are discussed. 6. Synthesis of ZnO particles using water molecules generated in esterification reaction Science.gov (United States) Šarić, Ankica; Gotić, Marijan; Štefanić, Goran; Dražić, Goran 2017-07-01 Zinc oxide particles were synthesized without the addition of water by autoclaving (anhydrous) zinc acetate/alcohol and zinc acetate/acetic acid/alcohol solutions at 160 °C. The solvothermal synthesis was performed in ethanol or octanol. The structural, optical and morphological characteristics of ZnO particles were investigated by X-ray diffraction (XRD), UV-Vis spectroscopy, FE-SEM and TEM/STEM microscopy. 13C NMR spectroscopy revealed the presence of ester (ethyl- or octyl-acetate) in the supernatants which directly indicate the reaction mechanism. The formation of ester in this esterification reaction generated water molecule in situ, which hydrolyzed anhydrous zinc acetate and initiated nucleation and formation of ZnO. It was found that the size and shape of ZnO particles depend on the type of alcohol used as a solvent and on the presence of acetic acid in solution. The presence of ethanol in the ;pure; system without acetic acid favoured the formation of fine and uniform spherical ZnO nanoparticles (∼20 nm). With the addition of small amount of acetic acid the size of these small nanoparticles increased significantly up to a few hundred nanometers. The addition of small amount of acetic acid in the presence of octanol caused even more radical changes in the shape of ZnO particles, favouring the growth of huge rod-like particles (∼3 μm). 7. Vortex-Breakdown-Induced Particle Capture in Branching Junctions. Science.gov (United States) Ault, Jesse T; Fani, Andrea; Chen, Kevin K; Shin, Sangwoo; Gallaire, François; Stone, Howard A 2016-08-19 We show experimentally that a flow-induced, Reynolds number-dependent particle-capture mechanism in branching junctions can be enhanced or eliminated by varying the junction angle. In addition, numerical simulations are used to show that the features responsible for this capture have the signatures of classical vortex breakdown, including an approach flow aligned with the vortex axis and a pocket of subcriticality. 
We show how these recirculation regions originate and evolve and suggest a physical mechanism for their formation. Furthermore, comparing experiments and numerical simulations, the presence of vortex breakdown is found to be an excellent predictor of particle capture. These results inform the design of systems in which suspended particle accumulation can be eliminated or maximized. 8. Coarse grain model for coupled thermo-mechano-chemical processes and its application to pressure-induced endothermic chemical reactions International Nuclear Information System (INIS) Antillon, Edwin; Banlusan, Kiettipong; Strachan, Alejandro 2014-01-01 We extend a thermally accurate model for coarse grain dynamics (Strachan and Holian 2005 Phys. Rev. Lett. 94 014301) to enable the description of stress-induced chemical reactions in the degrees of freedom internal to the mesoparticles. Similar to the breathing sphere model, we introduce an additional variable that describes the internal state of the particles and whose dynamics is governed both by an internal potential energy function and by interparticle forces. The equations of motion of these new variables are derived from a Hamiltonian and the model exhibits two desired features: total energy conservation and Galilean invariance. We use a simple model material with pairwise interactions between particles and study pressure-induced chemical reactions induced by hydrostatic and uniaxial compression. These examples demonstrate the ability of the model to capture non-trivial processes including the interplay between mechanical, thermal and chemical processes of interest in many applications. (paper) 9. Partial equilibrium in induced redox reactions of plutonium Energy Technology Data Exchange (ETDEWEB) Nikol' skii, B P; Posvol' skii, M V; Krylov, L I; Morozova, Z P 1975-01-01 A study was made of oxidation-reduction reactions of Pu in buffer solutions containing bichromate and a reducing agent which reacted with hexavalent chromium at pH=3.5. In most cases sodium nitrite was used. A rather slow reduction of Pu (6) with NaNO/sub 2/ in the course of which tetravalent plutonium was formed via disproportionation reaction of plutonium (5), became very rapid upon the addition of bichromate to the solution. The yield of tetravalent plutonium increased with an increase in the concentration of NaNO/sub 2/ and the bichromate but never reached 100%. This was due to a simultaneous occurrenc of the induced oxidation reaction of Pu(4), leading to a partial equilibrium between the valence forms of plutonium in the nitrite-bichromate system which on the whole was in a nonequilibrium state. It was shown that in the series of reactions leading to the reduction of plutonium the presence of bivalent chromium was a necessary link. 10. Adverse cutaneous reactions induced by TNF-alpha antagonist therapy. Science.gov (United States) Borrás-Blasco, Joaquín; Navarro-Ruiz, Andrés; Borrás, Consuelo; Casterá, Elvira 2009-11-01 To review adverse cutaneous drug reactions induced by tumor necrosis factor alpha (TNF-alpha) antagonist therapy. A literature search was performed using PubMed (1996-March 2009), EMBASE, and selected MEDLINE Ovid bibliography searches. All language clinical trial data, case reports, letters, and review articles identified from the data sources were used. Since the introduction of TNF-alpha antagonist, the incidence of adverse cutaneous drug reactions has increased significantly. 
A wide range of different skin lesions might occur during TNF-alpha antagonist treatment. New onset or exacerbation of psoriasis has been reported in patients treated with TNF-alpha antagonists for a variety of rheumatologic conditions. TNF-alpha antagonist therapy has been associated with a lupus-like syndrome; most of these case reports occurred in patients receiving either etanercept or infliximab. Serious skin reactions such as erythema multiforme, Stevens-Johnson syndrome, and toxic epidermal necrolysis have been reported rarely with the use of TNF-alpha antagonists. As the use of TNF-alpha antagonists continues to increase, the diagnosis and management of cutaneous side effects will become an increasingly important challenge. In patients receiving TNF-alpha antagonist treatment, skin disease should be considered, and clinicians need to be aware of the adverse reactions of these drugs. 11. [Studies of heavy-ion induced reactions]: Annual progress report International Nuclear Information System (INIS) Mignerey, A.C. 1986-10-01 An experiment was performed at the Lawrence Berkeley Laboratory Bevalac, extending previous studies using inverse reactions to 50 MeV/u 139 La incident on targets of C and Al. Studies of excitation energy division in lower energy division in lower energy heavy-ion reactions were furthered using kinematic coincidences to measure the excitation energies of primary products in the Fe + Ho reaction at 12 MeV/u. These results will provide important systematics for comparisons with previous measurements at 9 MeV/u on the same system and at 15 MeV/u on the Fe + Fe and Fe + U systems. Also studied were different aspects of 15 MeV/u Fe-induced reactions, with experiments performed at the Oak Ridge HHIRF. The first three contributions of this report constitute a major portion of the results from this research. Finally, at the Lawrence Berkeley Laboratory Bevalac a large detector array for coincident detection of fragmentation products in heavy-ion collisions below 100 MeV/u is being built. A list of publications, personnel, and activities is provided 12. Dependence of CuO particle size and diameter of reaction tubing on tritium recovery for tritium safety operation Energy Technology Data Exchange (ETDEWEB) Hu, Cui, E-mail: [email protected] [Shizuoka University, 836 Ohya, Suruga-ku Shizuoka 422-8529 (Japan); Uemura, Yuki; Yuyama, Kenta; Fujita, Hiroe; Sakurada, Shodai; Azuma, Keisuke [Shizuoka University, 836 Ohya, Suruga-ku Shizuoka 422-8529 (Japan); Taguchi, Akira; Hara, Masanori; Hatano, Yuji [University of Toyama, 3190 Gofuku, Toyama 939-8555 (Japan); Chikada, Takumi; Oya, Yasuhisa [Shizuoka University, 836 Ohya, Suruga-ku Shizuoka 422-8529 (Japan) 2016-12-15 Highlights: • Influence of CuO particle size and diameter of reaction tubing on the tritium recovery was evaluated. • Reaction rate constant of tritium with CuO particle has been calculated by the combination of experimental results and a simulation code. • Dependence of reaction tubing length on tritium conversion ratio has been explored. - Abstract: Usage of CuO and water bubbler is one of the conventional and convenient methods for tritium recovery. In present work, influence of CuO particle size and diameter of reaction tubing on the tritium recovery was evaluated. Reaction rate constant of tritium with CuO particle has been calculated by the combination of experimental results and a simulation code. 
Then, these results were applied for exploring the dependence of reaction tubing length on tritium conversion ratio. The results showed that the surface area of CuO has a great influence on the oxidation rate constant. The frequency factor of the reaction would be approximately doubled by reducing the CuO particle size from 1.0 mm to 0.2 mm. Cross section of reaction tubing mainly affected on the duration of tritium at the temperature below 600 K. Reaction tubing with length of 1 m at temperature of 600 K would be suitable for keeping the tritium conversion ratio above 99.9%. The length of reaction tubing can be reduced by using the smaller CuO particle or increasing the CuO temperature. 13. Alpha particle induced soft errors in NMOS RAMs: a review International Nuclear Information System (INIS) Carter, P.M.; Wilkins, B.R. 1987-01-01 The paper aims to explain the alpha particle induced soft error phenomenon using the NMOS dynamic random access memory (RAM) as a model. It discusses some of the many techniques experimented with by manufacturers to overcome the problem, and gives a review of the literature covering most aspects of soft errors in dynamic RAMs. Finally, the soft error performance of current dynamic RAM and static RAM products from several manufacturers are compared. (author) 14. Development of charged particle nuclear reaction data retrieval system on IntelligentPad International Nuclear Information System (INIS) Ohbayashi, Yosihide; Masui, Hiroshi; Aoyama, Shigeyoshi; Kato, Kiyoshi; Chiba, Masaki 1999-01-01 An newly designed database retrieval system of charged particle nuclear reaction database system is developed with IntelligentPad architecture. We designed the network-based (server-client) data retrieval system, and a client system constructs on Windows95, 98/NT with IntelligentPad. We set the future aim of our database system toward the 'effective' use of nuclear reaction data: I. 'Re-produce, Re-edit, Re-use', II. 'Circulation, Evolution', III. 'Knowledge discovery'. Thus, further developments are under way. (author) 15. ACT-XN: Revised version of an activation calculation code for fusion reactor analysis. Supplement of the function for the sequential reaction activation by charged particles International Nuclear Information System (INIS) Yamauchi, Michinori; Sato, Satoshi; Nishitani, Takeo; Konno, Chikara; Hori, Jun-ichi; Kawasaki, Hiromitsu 2007-09-01 The ACT-XN is a revised version of the ACT4 code, which was developed in the Japan Atomic Energy Research Institute (JAERI) to calculate the transmutation, induced activity, decay heat, delayed gamma-ray source etc. for fusion devices. The ACT4 code cannot deal with the sequential reactions of charged particles generated by primary neutron reactions. In the design of present experimental reactors, the activation due to sequential reactions may not be of great concern as it is usually buried under the activity by primary neutron reactions. However, low activation material is one of the important factors for constructing high power fusion reactors in future, and unexpected activation may be produced through sequential reactions. Therefore, in the present work, the ACT4 code was newly supplemented with the calculation functions for the sequential reactions and renamed the ACT-XN. The ACT-XN code is equipped with functions to calculate effective cross sections for sequential reactions and input them in transmutation matrix. 
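The ACT-XN entry above describes folding effective cross sections for sequential charged-particle reactions into a transmutation matrix. As a generic illustration of what such a matrix does (not the ACT-XN implementation), the snippet below propagates a small nuclide vector through a constant burn-and-decay matrix with a matrix exponential; the three-nuclide chain, the flux, the cross section and the half-life are all invented.

```python
import numpy as np
from scipy.linalg import expm

# Toy chain: target --(flux*sigma)--> activation product --(lambda)--> stable daughter.
phi = 1.0e14 * 1.0e4            # neutron flux, n m^-2 s^-1 (assumed)
sigma = 1.0e-28                 # effective cross section, m^2 (assumed, ~1 barn)
lam = np.log(2) / (5 * 3600)    # decay constant for an assumed 5 h half-life

A = np.array([
    [-phi * sigma,  0.0,  0.0],   # loss of target by transmutation
    [ phi * sigma, -lam,  0.0],   # build-up and decay of the product
    [ 0.0,          lam,  0.0],   # accumulation of the stable daughter
])

N0 = np.array([1.0e24, 0.0, 0.0])   # initial atom numbers
t_irr = 24 * 3600.0                 # one day of irradiation
N = expm(A * t_irr) @ N0
print(dict(zip(["target", "product", "daughter"], N)))
```

Adding sequential (x,n) channels amounts to extra off-diagonal production terms in A, with effective cross sections averaged over the charged-particle spectra and stopping powers, which is the bookkeeping the entry describes.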
The FISPACT data were adopted for (x,n) reaction cross sections, charged particle emission spectra and stopping powers. The nuclear reaction chain data library was revised to cope with the (x,n) reactions. The charged particles are specified as p, d, t, 3 He (h) and α. The code was applied to the analysis of the FNS experiment for LiF and to a demo-reactor design with FLiBe, and it was confirmed that it reproduces the experimental values within 15-30% discrepancies. In addition, it was noted that the dose rate due to sequential reactions cannot always be neglected after a certain cooling period for some of the low activation materials. (author) 16. Reaction mechanism and spectroscopy of transfer reactions induced by heavy ions International Nuclear Information System (INIS) Lemaire, M.-C. 1977-01-01 The specific features displayed by data on heavy ion elastic and inelastic angular distributions are discussed, and their physical origin is pointed out from semi-classical calculations; in counterpart, ambiguities in the phenomenological description of the optical potential appear. Two nucleon transfer reactions induced by heavy ions successfully point out important contributions of a two-step process where the transfer proceeds via target and residual nucleus inelastic excitation. At incident energies not too high above the Coulomb barrier, such a process produces clear shape changes between different final state angular distributions. At higher incident energy, the angular distributions are forward peaked and display oscillations for both mechanisms. As for four-nucleon transfer reactions, the existing data suggest that the nucleons are well transferred into a 0s relative 17. Study of charged current reactions induced by muon antineutrinos International Nuclear Information System (INIS) Huss, D. 1979-07-01 We present in this work a study of antineutrino reactions on light targets. We have used the Gargamelle bubble chamber with a propane-freon mix. In the first 2 chapters we give a brief description of the experimental setting and we present the selection criteria for the events. In the third chapter we analyse the data for the reaction anti-ν + p → μ + + n, which preserves strangeness. We have deduced the values of the axial (M A ) and vector (M V ) form factors: M A = (0.92 ± 0.08) GeV and M V = (0.86 ± 0.04) GeV. In the fourth chapter we study reactions in which strange particles appear (ΔS = 1) and we have determined their production cross-sections. The elastic reaction anti-ν + p → μ + + Λ is studied in a more accurate manner thanks to a 3-constraint adjustment that enables the selection of events occurring on free protons. We have deduced from our data the longitudinal, orthogonal and transverse polarizations of Λ, obtaining respectively P l = -0.06 ± 0.44, P p = 0.29 ± 0.41 and P t = 1.05 ± 0.30. We have also deduced the values of the total cross-section as a function of the incident antineutrino energy E: σ = (0.27 ± 0.02)·E·10^-38 cm^2. E has been assessed from the energy deposited in the chamber and we have adjusted the cross-section with a straight line, as expected under the assumption of scale invariance. (A.C.) 18. Phenomenological model for non-equilibrium deuteron emission in nucleon induced reactions International Nuclear Information System (INIS) Broeders, C.H.M.; Konobeyev, A.Yu. 2005-01-01 A new approach is proposed for the calculation of non-equilibrium deuteron energy distributions in nuclear reactions induced by nucleons of intermediate energies.
It combines the models of nucleon pick-up, coalescence and deuteron knock-out. Emission and absorption rates for excited particles are described by the pre-equilibrium hybrid model. The model of Sato, Iwamoto and Harada is used to describe the nucleon pick-up and the coalescence of nucleons from the exciton configurations starting from (2p, 1h). The model of deuteron knock-out is formulated taking into account the Pauli principle for the nucleon-deuteron interaction inside a nucleus. The contribution of the direct nucleon pick-up is described phenomenologically. The multiple pre-equilibrium emission of particles is taken into account. The calculated deuteron energy distributions are compared with experimental data from 12 C to 209 Bi. (orig.) 19. Probing α-particle wave functions using (rvec d,α) reactions International Nuclear Information System (INIS) Crosson, E.R.; Lemieux, S.K.; Ludwig, E.J.; Thompson, W.J.; Bisenberger, M.; Hertenberger, R.; Hofer, D.; Kader, H.; Schiemenz, P.; Graw, G.; Eiro, A.M.; Santos, F.D. 1993-01-01 Wave functions of the α particle corresponding to different S- and D-state deuteron-deuteron overlaps, ⟨dd|α⟩, were investigated using exact finite-range distorted-wave Born-approximation (DWBA) analyses of (rvec d,α) reactions. Cross sections, vector, and tensor-analyzing powers were measured for (rvec d,α) reactions populating the lowest J π =7 + state in 56 Co at bombarding energies E d of 16 and 22 MeV, the lowest 7 + state in 48 Sc at E d =16 MeV, and the lowest 7 + state in 46 Sc at E d =22 MeV. We find that DWBA analyses of tensor-analyzing powers produce satisfactory agreement with the data and that A xx is especially sensitive to the D-state component of α-particle wave functions generated by different realistic nucleon-nucleon interactions. 20. Heterogeneous kinetics, products, and mechanisms of ferulic acid particles in the reaction with NO3 radicals Science.gov (United States) Liu, Changgeng; Zhang, Peng; Wen, Xiaoying; Wu, Bin 2017-03-01 Methoxyphenols, as an important component of wood burning, are produced by lignin pyrolysis and considered to be potential tracers for wood smoke emissions. In this work, the heterogeneous reaction between ferulic acid particles and NO3 radicals was investigated. Six products including oxalic acid, 4-vinylguaiacol, vanillin, 5-nitrovanillin, 5-nitroferulic acid, and caffeic acid were confirmed by gas chromatography-mass spectrometry (GC-MS). In addition, the reaction mechanisms were proposed and the main pathways were NO3 electrophilic addition to the olefin and to the position meta to the hydroxyl group. The uptake coefficient of NO3 radicals on ferulic acid particles was 0.17 ± 0.02 and the effective rate constant under experimental conditions was (1.71 ± 0.08) × 10-12 cm3 molecule-1 s-1. The results indicate that ferulic acid degradation by NO3 can be an important sink at night. 1. Light particle emission as a probe of reaction mechanism and nuclear excitation International Nuclear Information System (INIS) Guerreau, D. 1989-01-01 The central part of these lectures deals with the problem of energy dissipation. A good understanding of the mechanisms of dissipation requires studying both peripheral and central collisions or, in other words, looking at the impact parameter dependence. This should also provide valuable information on the time scale.
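The ferulic acid + NO3 entry above reports an uptake coefficient of about 0.17. In the usual kinetic (free-molecular) limit, an uptake coefficient can be converted into a first-order loss rate of the gas on a given particle surface-area concentration via k = γ·c̄·S/4. The temperature and surface-area density below are assumptions chosen only to show the arithmetic; they are not taken from the study, whose effective rate constant is defined per ferulic acid molecule rather than per unit surface area.

```python
import math

# First-order loss rate of a gas on particles in the kinetic limit: k = gamma * c_bar * S / 4.
gamma = 0.17                  # uptake coefficient from the entry above
T = 298.0                     # K (assumed)
M = 62.0e-3                   # kg/mol, molar mass of NO3
R = 8.314
c_bar = math.sqrt(8 * R * T / (math.pi * M))   # mean molecular speed, m/s
S = 1.0e-3                    # particle surface area per volume, m^2/m^3 (assumed)

k = gamma * c_bar * S / 4.0   # s^-1
print(f"mean speed = {c_bar:.0f} m/s, first-order loss rate k = {k:.2e} s^-1")
```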
In order to probe the reaction mechanism and nuclear excitation, one of the most powerful tools is unquestionably the observation of light particle emission, including neutrons and charged particles. Several examples will be discussed related to peripheral collisions (the fate of transfer reactions, the generation of excitation energy, the production of projectile-like fragments) as well as more central collisions, for which extensive studies have demonstrated the strength of intermediate energy heavy ions for the production of very hot nuclei and the detailed study of their decay properties. 2. Fragment formation in light-ion induced reactions International Nuclear Information System (INIS) Hirata, Yuichi 2001-01-01 The intermediate mass fragment (IMF) formation in the 12 GeV proton induced reaction on an Au target is analyzed by the quantum molecular dynamics model combined with the JAM hadronic cascade model and the non-equilibrated percolation model. We show that the sideward-peaked angular distribution of IMFs occurs in multifragmentation at a very short time scale around 20 fm/c, where non-equilibrated features of the residual nucleus cause the nucleon density to fluctuate and fragments are pushed in the sideward direction by the repulsive Coulomb force. (author) 3. Electro-induced reactions of biologically important molecules International Nuclear Information System (INIS) Kocisek, J. 2010-01-01 The thesis presents the results of research activities in the field of electron interactions with biologically relevant molecules which were carried out during my PhD studies at the Department of Experimental Physics, Comenius University in Bratislava. Electron induced interactions with biologically relevant molecules were experimentally studied using a crossed electron-molecule beam experiment. The obtained results were presented in four publications in international scientific journals. The first study deals with electron impact ionisation of furanose alcohols [see 1. in list of author publications on page 22]. It was motivated by the most important works in the field of electron-induced damage to DNA bases [4]. The studied 3-hydroxytetrahydrofuran and tetrahydrofurfuryl alcohol are important model molecules for more complex biological systems (e.g. deoxyribose). The influence of the hydroxyl group on the stabilisation of the positive ions of the molecules, together with the stability of the furan ring in ionized form, are the main themes of the study. The studies of small amides and amino acids are connected to scientific studies in the field of the formation of amino acids and other biologically relevant molecules in space and to works trying to explain electron induced processes in more complex molecules [12, 13, 24]. The most important results were obtained for the amino acid serine [see 2. in list of author publications on page 22]. We have shown that the additional OH group of serine considerably lowers the reaction enthalpy limit of reactions resulting in the formation of neutral water molecules, in comparison to other amino acids. Also the study of the (M-H)- reaction channel using an electron beam with FWHM under 100 meV is of high importance in the field. The last part of the thesis is focused on the electron interactions with organosilane compounds. Materials prepared from organosilane molecules in plasmas have a wide range of applications in both biology and medicine. We have studied electron 4. Light particle and gamma ray emission measurements in heavy ion reactions. Progress report International Nuclear Information System (INIS) Petitt, G.A.
1983-01-01 Studies of neutron and charged particle emission in heavy ion reactions using the facilities at the HHIRF and the new computer facilities at Georgia State are briefly described. A progress report for 1982 to 1983 is combined with a proposal for work to be performed during 1983 to 1984. Present activities and immediate plans for a run already approved by the Program Advisory Committee of the HHIRF are discussed 5. Multiple particles production for hadron-hadron reactions with finite hadronization time International Nuclear Information System (INIS) Arbex, N. 1991-01-01 Experimental data on multiple particle production for proton-proton reaction are analysed in the context of a very simple analytical model. The model exhibits the essential features of hydrodynamical calculations as, e.g., the formation of an intermediate object, which undergoes expansion. The simultaneous analysis of different types of data allows for the conclusion that such data reflect the dynamics of this intermediate object and have a very deem connection to the elementary processes. (author) 6. Particle-hole state densities for statistical multi-step compound reactions International Nuclear Information System (INIS) Oblozinsky, P. 1986-01-01 An analytical relation is derived for the density of particle-hole bound states applying the equidistant-spacing approximation and the Darwin-Fowler statistical method. The Pauli exclusion principle as well as the finite depth of the potential well are taken into account. The set of densities needed for calculations of multi-step compound reactions is completed by deriving the densities of accessible final states for escape and damping. (orig.) 7. Japan Charged-Particle Nuclear Reaction Data Group (JCPRG). Progress report. P10 International Nuclear Information System (INIS) Executive Committee of JCPRG 2001-01-01 In 2000 the following activities were carried out: compilation of the CNDP (Charged Particle Nuclear Reaction Data); translation of NRDF data into EXFOR data; making of the retrieval systems using Internet and Intelligent Pad for the CPND in both NRDF and EXFOR; distributing the CPND and promoting utilization in Japan; making a new system to transform from NRDF to EXFOR. Preliminary version of a new editing system for compiling and inputting the NRDF data was completed 8. Morphological control of strontium oxalate particles by PSMA-mediated precipitation reaction Energy Technology Data Exchange (ETDEWEB) Yu Jiaguo [State Key Laboratory of Advanced Technology for Material Synthesis and Processing, Wuhan University of Technology, Wuhan 430070 (China)]. E-mail: [email protected]; Tang Hua [State Key Laboratory of Advanced Technology for Material Synthesis and Processing, Wuhan University of Technology, Wuhan 430070 (China); Cheng Bei [State Key Laboratory of Advanced Technology for Material Synthesis and Processing, Wuhan University of Technology, Wuhan 430070 (China) 2005-05-15 In this paper, strontium oxalate particles with different morphologies could be easily obtained by a precipitation reaction of sodium oxalate with strontium chloride in the absence and presence of poly-(styrene-alt-maleic acid) (PSMA). The as-prepared products were characterized with scanning electron microscopy and X-ray diffraction. The effects of pH, aging time and concentration of PSMA on the phase structures and morphologies of the as-prepared strontium oxalate particles were investigated and discussed. 
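Entry 6 above derives a particle-hole bound-state density in the equidistant-spacing approximation, including the Pauli principle and the finite depth of the potential well. For orientation, the standard Williams-type expression that such derivations start from can be written as below; this is the textbook form with only the Pauli correction, not necessarily the exact formula obtained in that paper.

```latex
\omega(p,h,E) \;=\; \frac{g\,\bigl[\,gE - A(p,h)\,\bigr]^{\,n-1}}{p!\,h!\,(n-1)!},
\qquad n = p + h,
\qquad A(p,h) = \frac{p^{2} + h^{2} + p - 3h}{4},
```

where g is the single-particle level density. The finite-well correction of the kind the entry discusses further removes configurations whose hole energies would exceed the well depth.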
The results showed that strontium oxalate particles with various morphologies, such as bi-pyramids, rods, peanuts, and spherical particles, could be obtained by varying the experimental conditions. PSMA promoted the formation of the strontium oxalate dihydrate (SOD) phase. Suitable pH values (pH 7 and 8) favor the formation of the peanut-shaped SrC₂O₄ particles. This research may provide new insight into the control of morphologies and phase structures of strontium oxalate particles and the biomimetic synthesis of novel inorganic materials. 9. Effect of Ti and C particle sizes on reaction behavior of thermal explosion reaction of Cu−Ti−C system under Ar and air atmospheres Energy Technology Data Exchange (ETDEWEB) Liang, Yunhong; Zhao, Qian; Li, Xiujuan; Zhang, Zhihui, E-mail: [email protected]; Ren, Luquan 2016-09-15 The thermal explosion (TE) reaction behavior of Cu−Ti−C systems with different Ti and C particle sizes was investigated under air and Ar atmospheres. It was found that increasing the Ti and C particle sizes leads to higher ignition temperatures under both atmospheres and that the maximum combustion temperature decreases with increasing C particle size. The TE reaction is much easier to activate (i.e., it has a lower ignition temperature) in air because of the heat released from Ti oxidation and nitridation and Cu oxidation reactions on the Cu−Ti−C compact surface. TiC ceramic particles are successfully prepared in the bulk Cu−Ti−C compacts under both air and Ar atmospheres through a dissolution-diffusion-precipitation mechanism. Differential thermal and thermodynamic analyses show that the TE reaction ignition process in air is mainly controlled by the Ti particle size. - Highlights: • Variation of Ti and C particle sizes affects thermal reaction (TE) behaviors. • Ignition temperature under air is much lower than that under Ar atmosphere. • Heat of oxidation and nitridation reactions reduces ignition temperature under air. 10. Catapult mechanism for fast particle emission in fission and heavy ion reactions International Nuclear Information System (INIS) Maedler, P. 1984-01-01 The fission process of slabs of nuclear matter is modelled in the time-dependent Hartree-Fock approximation by adding an initial collective velocity field to the static self-consistent solution. Depending on its amplitude, either large amplitude density oscillations are excited or fission occurs. The final disintegration of the slab proceeds on a time scale of 10⁻²² s and is characterized by a sharp peak in the actual velocity field in the region of the ''snatching'' inner low density tails. A characteristic time later, a low density lump correlated with a peak in the velocity field emerges in front of the fragments. These particles are called ''catapult particles''. Recent experimental results possibly provide evidence for catapult neutrons in low-energy fission. The significance of the catapult mechanism for fast particle emission in the exit channel of heavy ion reactions is discussed 11. On the theory of direct reactions with many particle final states International Nuclear Information System (INIS) Trautmann, D.; Baur, G. 1977-01-01 We study the theory of direct reactions with many particle final states. First, we concentrate on the DWBA formulation of the break-up of deuterons on heavy nuclei below the Coulomb barrier. Because there are no free parameters, this permits a clean test of the theory by comparing it to the experimental data. The agreement is very good.
The theory is applied to the break-up of antideuteronic atoms. Then the effect of virtual deuteron break-up on Rutherford scattering is studied. It is small, but it seems to be measurable. Also the deuteron break-up above the Coulomb barrier can be well explained theoretically. In this context, small effects are studied briefly. A semiclassical theory of the break-up process is given, which results in an intuitive picture and a fast computational method. Our theory lends itself in a natural way to the study of stripping reactions to unbound states. The relation of stripping into the continuum to elastic scattering of the transferred particle on the same target nucleus is explained. Then the connection of stripping to bound and unbound states is established. Finally various examples of stripping of uncharged and charged particles into the continuum are given to illustrate the theory. Resonance wave functions describing the transferred particle are discussed. In a conclusion an outlook for possible future developments of experiment and theory is given. (author) 12. Fluorine determination in human and animal bones by particle-induced gamma-ray emission International Nuclear Information System (INIS) Sastri, Chaturvedula S.; Hoffmann, Peter; Ortner, Hugo M.; Iyengar, Venkatesh; Blondiaux, Gilbert; Tessier, Yves; Petri, Hermann; Aras, Namik K.; Zaichick, Vladimir 2002-01-01 Fluorine was determined in the iliac crest bones of patients and in ribs collected from postmortem investigations by particle-induced gamma-ray emission based on the 19 F(p,pγ) 19 F reaction, using 20/2.5 MeV protons. The results indicate that for 68% of the human samples the F concentration is in the range 500-1999 μg g -1 . For comparison purposes fluorine was also determined in some animal bones; in some animal tissues lateral profiles of fluorine were measured. (abstract) 13. Charged particles produced in neutron reactions on nuclei from beryllium to gold International Nuclear Information System (INIS) Haight, R.C. 1997-01-01 Charged-particle production in reactions of neutrons with nuclei has been studied over the past several years with the spallation source of neutrons from 1 to 50 MeV at the Los Alamos Neutron Science Center (LANSCE). Target nuclides include 9Be, C, 27Al, Si, 56Fe, 59Co, 58,60Ni, 93Nb and 197Au. Proton, deuteron, triton, 3He and 4He emission spectra, angular distributions and production cross sections have been measured. Transitions from the compound nuclear reaction mechanism to precompound reactions are clearly seen in the data. The data are compared with data from the literature where available, with evaluated nuclear data libraries, and with calculations where the selection of the nuclear level density prescription is of great importance. Calculations normalized at En = 14 MeV can differ from the present data by a factor of 2 for neutron energies between 5 and 10 MeV 14. γ-Particle coincidence technique for the study of nuclear reactions Science.gov (United States) Zagatto, V. A. B.; Oliveira, J. R. B.; Allegro, P. R. P.; Chamon, L. C.; Cybulska, E. W.; Medina, N. H.; Ribas, R. V.; Seale, W. A.; Silva, C. P.; Gasques, L. R.; Zahn, G. S.; Genezini, F. A.; Shorto, J. M. B.; Lubian, J.; Linares, R.; Toufen, D. L.; Silveira, M. A. G.; Rossi, E. S.; Nobre, G. P. 2014-06-01 The Saci-Perere γ ray spectrometer (located at the Pelletron AcceleratorLaboratory - IFUSP) was employed to implement the γ-particle coincidence technique for the study of nuclear reaction mechanisms. 
For this, the 18O+110Pd reaction has been studied in the beam energy range of 45-54 MeV. Several corrections to the data due to various effects (energy and angle integrations, beam spot size, γ detector finite size and the vacuum de-alignment) are small and well controlled. The aim of this work was to establish a proper method to analyze the data and identify the reaction mechanisms involved. To achieve this goal the inelastic scattering to the first excited state of 110Pd has been extracted and compared to coupled channel calculations using the São Paulo Potential (PSP), being reasonably well described by it. 15. γ-Particle coincidence technique for the study of nuclear reactions Energy Technology Data Exchange (ETDEWEB) Zagatto, V.A.B., E-mail: [email protected] [Instituto de Física da Universidade de São Paulo (Brazil); Oliveira, J.R.B.; Allegro, P.R.P.; Chamon, L.C.; Cybulska, E.W.; Medina, N.H.; Ribas, R.V.; Seale, W.A.; Silva, C.P.; Gasques, L.R. [Instituto de Física da Universidade de São Paulo (Brazil); Zahn, G.S.; Genezini, F.A.; Shorto, J.M.B. [Instituto de Pesquisas Energéticas e Nucleares (Brazil); Lubian, J.; Linares, R. [Instituto de Física da Universidade Federal Fluminense (Brazil); Toufen, D.L. [Instituto Federal de Educação, Ciência e Tecnologia (Brazil); Silveira, M.A.G. [Centro Universitário da FEI (Brazil); Rossi, E.S. [Centro Universitário FIEO – UNIFIEO (Brazil); Nobre, G.P. [Lawrence Livermore National Laboratory (United States) 2014-06-01 The Saci-Perere γ ray spectrometer (located at the Pelletron AcceleratorLaboratory – IFUSP) was employed to implement the γ-particle coincidence technique for the study of nuclear reaction mechanisms. For this, the {sup 18}O+{sup 110}Pd reaction has been studied in the beam energy range of 45–54 MeV. Several corrections to the data due to various effects (energy and angle integrations, beam spot size, γ detector finite size and the vacuum de-alignment) are small and well controlled. The aim of this work was to establish a proper method to analyze the data and identify the reaction mechanisms involved. To achieve this goal the inelastic scattering to the first excited state of {sup 110}Pd has been extracted and compared to coupled channel calculations using the São Paulo Potential (PSP), being reasonably well described by it. 16. Calculations on precompound reactions with alpha particles, A(α,α')X, at incident energies around 500 MeV International Nuclear Information System (INIS) Rittershausen, W. 1987-01-01 The model of Chiang et al. (1980) for nucleon induced precompound reactions, a generalization of the Glauber theory to lower energetical processes, was extended to heavier projectiles the elementary differential cross section of which may furthermore (at fixed incident energy) depend on the momentum transfer. The so modified model was applied to reactions of the type A(α,α')X at an incident energy of about 100 MeV/nucleon, excitation energies of the nucleus in the range 6 to 60 MeV, and for scattering angles from 3 to 6 0 . Thereby the Glauber coefficients were determined by means of the optical potentials known for the treated experiments. Local nucleon momentum distributions in the target nucleus were taken from calculations of Durand et al. (1982). The momentum distributions of the alpha particles after the first α-N collision were both for normalously and for homogeneously distributed nucleon momenta calculated analytically. The distributions after the second collision were determined by folding. 
For the control of these results and for the eventual calculation of the distributions after more than two collisions a Monte Carlo routine was written. The additional deviation of the alpha particles in real-valued potentials of the target nucleus were regarded. The results in which no free parameter occurs agree quite well in the shape with measured data. In one case it is also valid for the absolute quantities. (orig.) [de 17. Alpha Particles Induce Autophagy in Multiple Myeloma Cells. Science.gov (United States) Gorin, Jean-Baptiste; Gouard, Sébastien; Ménager, Jérémie; Morgenstern, Alfred; Bruchertseifer, Frank; Faivre-Chauvet, Alain; Guilloux, Yannick; Chérel, Michel; Davodeau, François; Gaschet, Joëlle 2015-01-01 Radiation emitted by the radionuclides in radioimmunotherapy (RIT) approaches induce direct killing of the targeted cells as well as indirect killing through the bystander effect. Our research group is dedicated to the development of α-RIT, i.e., RIT using α-particles especially for the treatment of multiple myeloma (MM). γ-irradiation and β-irradiation have been shown to trigger apoptosis in tumor cells. Cell death mode induced by (213)Bi α-irradiation appears more controversial. We therefore decided to investigate the effects of (213)Bi on MM cell radiobiology, notably cell death mechanisms as well as tumor cell immunogenicity after irradiation. Murine 5T33 and human LP-1 MM cell lines were used to study the effects of such α-particles. We first examined the effects of (213)Bi on proliferation rate, double-strand DNA breaks, cell cycle, and cell death. Then, we investigated autophagy after (213)Bi irradiation. Finally, a coculture of dendritic cells (DCs) with irradiated tumor cells or their culture media was performed to test whether it would induce DC activation. We showed that (213)Bi induces DNA double-strand breaks, cell cycle arrest, and autophagy in both cell lines, but we detected only slight levels of early apoptosis within the 120 h following irradiation in 5T33 and LP-1. Inhibition of autophagy prevented (213)Bi-induced inhibition of proliferation in LP-1 suggesting that this mechanism is involved in cell death after irradiation. We then assessed the immunogenicity of irradiated cells and found that irradiated LP-1 can activate DC through the secretion of soluble factor(s); however, no increase in membrane or extracellular expression of danger-associated molecular patterns was observed after irradiation. This study demonstrates that (213)Bi induces mainly necrosis in MM cells, low levels of apoptosis, and autophagy that might be involved in tumor cell death. 18. Alpha-particles induce autophagy in multiple myeloma cells Directory of Open Access Journals (Sweden) Joelle Marcelle Gaschet 2015-10-01 Full Text Available Objectives: Radiations emitted by the radionuclides in radioimmunotherapy (RIT approaches induce direct killing of the targeted cells as well as indirect killing through bystander effect. Our research group is dedicated to the development of α-RIT, i.e RIT using α-particles especially for the treatment of multiple myeloma (MM. γ-irradiation and β-irradiation have been shown to trigger apoptosis in tumor cells. Cell death mode induced by 213Bi α-irradiation appears more controversial. 
We therefore decided to investigate the effects of 213Bi on MM cell radiobiology, notably cell death mechanisms as well as tumor cell immunogenicity after irradiation. Methods: Murine 5T33 and human LP-1 multiple myeloma (MM) cell lines were used to study the effects of such α-particles. We first examined the effects of 213Bi on proliferation rate, double strand DNA breaks, cell cycle and cell death. Then, we investigated autophagy after 213Bi irradiation. Finally, a co-culture of dendritic cells (DC) with irradiated tumour cells or their culture media was performed to test whether it would induce DC activation. Results: We showed that 213Bi induces DNA double strand breaks, cell cycle arrest and autophagy in both cell lines, but we detected only slight levels of early apoptosis within the 120 hours following irradiation in 5T33 and LP-1. Inhibition of autophagy prevented 213Bi induced inhibition of proliferation in LP-1, suggesting that this mechanism is involved in cell death after irradiation. We then assessed the immunogenicity of irradiated cells and found that irradiated LP-1 can activate DC through the secretion of soluble factor(s); however, no increase in membrane or extracellular expression of danger associated molecular patterns (DAMPs) was observed after irradiation. Conclusion: This study demonstrates that 213Bi induces mainly necrosis in MM cells, low levels of apoptosis and also autophagy that might be involved in tumor cell death. 19. Particle-induced pulmonary acute phase response may be the causal link between particle inhalation and cardiovascular disease DEFF Research Database (Denmark) Saber, Anne T.; Jacobsen, Nicklas R.; Jackson, Petra 2014-01-01 Inhalation of ambient and workplace particulate air pollution is associated with increased risk of cardiovascular disease. One proposed mechanism for this association is that pulmonary inflammation induces a hepatic acute phase response, which increases risk of cardiovascular disease. Induction...... epidemiological studies. In this review, we present and review emerging evidence that inhalation of particles (e.g., air diesel exhaust particles and nanoparticles) induces a pulmonary acute phase response, and propose that this induction constitutes the causal link between particle inhalation and risk... 20. Chemical reactions induced and probed by positive muons International Nuclear Information System (INIS) Ito, Yasuo 1990-01-01 The application of μ + science, collectively called μSR, but encompassing a variety of methods including muon spin rotation, muon spin relaxation, muon spin repolarization, muon spin resonance and level-crossing resonance, to chemistry is introduced emphasizing the special aspects of processes which are 'induced and probed' by the μ + itself. After giving a general introduction to the nature and methods of muon science and a short history of muon chemistry, selected topics are given. One concerns the usefulness of muonium as a hydrogen-like probe of chemical reactions, taking polymerization of vinyl monomers and reaction with thiosulphate as examples. Probing solitons in polyacetylene induced and probed by μ + is also an important example which shows the unique nature of muonium. Another important topic is 'lost polarization'. Although this term is particular to muon chemistry, the chemistry underlying the phenomenon of lost polarization has an importance to both radiation and hot atom chemistries. (orig.) 1.
Mass and charge distributions in Fe-induced reactions International Nuclear Information System (INIS) Madani, H.; Mignerey, A.C.; Marchetti, A.A.; Weston-Dawkes, A.P.; Kehoe, W.L.; Obenshain, F. 1995-01-01 The charge and mass of the projectile-like fragments produced in the 12-MeV/nucleon 56 Fe + 165 Ho reaction were measured at a laboratory scattering angle of 16 degrees. The mass and charge distributions of the projectile-like fragments were generated as a function of total kinetic energy loss (TKEL), and characterized by their neutron and proton centroids and variances, and correlation factors. A weak drift of the system towards mass asymmetry, opposite to the direction which minimizes the potential energy of the composite system, was observed. The increase in the variances with energy loss is consistent with a nucleon exchange mechanism as a means for energy dissipation. Predictions of two nucleon exchange models, Randrup's and Tassan-Got's models, are compared to the experimental results of the 672-MeV 56 Fe + 165 Ho reaction and to other Fe-induced reactions. The proton and neutron centroids were found to be generally better reproduced by Tassan-Got's model than by Randrup's model. The variances and correlation factor are well reproduced for asymmetric systems by both models 2. Neutron-induced particle production in the cumulative and noncumulative regions at intermediate energies International Nuclear Information System (INIS) Mashnik, S.G. 1992-01-01 The first systematic measurements of neutron-induced inclusive production of protons, deuterons, tritons and charged pions on carbon, copper, and bismuth in the bombarding energy range of 300-580 MeV and in the angular interval from 51 deg to 165 deg have been analyzed in the framework of the cascade-exciton model. The role of single-particle scattering, the effects of rescattering, the pre-equilibrium emission and the 'coalescence' mechanism in particle production in the cumulative (i.e., kinematically forbidden for quasi-free intranuclear projectile-nucleon collisions) and noncumulative regions are discussed. A weak sensitivity of the inclusive distributions to the specific reaction mechanisms and a need for correlation and polarization measurements are noted. 27 refs.; 12 figs.; 1 tab 3. Validation and upgrading of the recommended cross-section data of charged particle reactions: Gamma emitter radioisotopes International Nuclear Information System (INIS) Takacs, S.; Tarkanyi, F.; Hermanne, A. 2005-01-01 An upgrade and validation test of the recommended cross-section database for production of gamma emitter radioisotopes by charged particle induced reactions, published by the IAEA in 2001, was performed. Experimental microscopic cross-section data published earlier or measured recently and not yet included in the evaluation work were collected and added to the primary database in order to improve the quality of the recommended data. The newly compiled experimental data in general supported the previous recommended data, but in a few cases they influenced the decision and resulted in different selected cross-section data sets. A spline fitting method was used to calculate the recommended data from the selected data sets. Integral thick target yields were deduced from the newly calculated recommended cross-sections and were critically compared with the available experimental yield data 4. Direct reactions induced by 16O on 208Pb at high incident energy International Nuclear Information System (INIS) Mermaz, M.C.
1978-01-01 Direct reactions induced by 16 O mainly on 208 Pb at 20 MeV/nucleon are reviewed. The quasi-elastic transfer reaction, such as one-proton and one-neutron transfer respectively leading to 209 Bi and 209 Pb single-particle-states, is first discussed, the fragmentation of 16 O projectile on heavy targets is then envisaged. The one-nucleon transfer can be described within the framework of one-step processes using the DWBA formalism to calculate the cross sections. At high incident energy (312.6 MeV), transfer reactions involving nucleons from the deeper 1p 3/2 orbit of 16 O are kinematically favoured and well observed. At 20 MeV/A and above, a large part of the reaction cross sections seems to be due to the fragmentation of the projectile; more especially, an abrasion-ablation model have to be used in order to explain the general trend of the data (energy spectra and angular distribution) 5. Consistent interpretation of neutron-induced charged-particle emission in silicon International Nuclear Information System (INIS) Hermsdorf, D. 1982-06-01 Users requesting gas production cross sections for Silicon will be confronted with serious discrepancies taking evaluated data as well as experimental ones. To clarify the accuracies achieved at present in experiments and evaluations in this paper an intercomparison of different evaluated nuclear data files has been carried out resulting in recommendations for improvements of these files. The analysis of the experimental data base also shows contradictory measurements or in most cases a lack of data. So an interpretation of reliable measured data in terms of nuclear reaction theories has been done using statistical and direct reaction mechanism models. This study results in a consistent and comprehensive evaluated data set for neutron-induced charged-particle production in Silicon which will be incorporated in file 2015 of the SOKRATOR library. (author) 6. The measurement of cross sections of inelastic and transfer reactions with gamma-particle coincidence Energy Technology Data Exchange (ETDEWEB) Zagatto, V.A.B.; Oliveira, J.R.B.; Pereira, D.; Allegro, P.R.P.; Chamon, L.C.; Cybulska, E.W.; Medina, N.H.; Ribas, R.V.; Rossi Junior, E.S.; Seale, W.A.; Silva, C.P.; Gasques, L. [Universidade de Sao Paulo (IF/USP), SP (Brazil). Inst. de Fisica; Toufen, D.L. [Instituto Federal de Educacao, Ciencia e Tecnologia, Guarulhos, SP (Brazil); Silveira, M.A.G. [Centro Universitario da FEI, Sao Bernardo do Campo, SP (Brazil); Zahn, G.S.; Genezini, F.A.; Shorto, J.M.B. [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Lubian, J.; Linares, R. [Universidade Federal Fluminense (UFF), Niteroi, RJ (Brazil). Inst. de Fisica; Nobre, G.P. [Lawrence Livermore National Laboratory, Livermore (United States) 2012-07-01 Full text: A new method was developed in Pelletron laboratory to measure gamma-particle coincidences and the chosen experiment to test this method was the {sup 18}O +{sup 110} Pd in the 46-60 MeV range. The following work aims to obtain experimental cross sections of inelastic excitation 0{sup +} {yields} 2{sup +} of {sup 110}Pd and transfer to excited states reactions (both measured by gamma-particle coincidences). 
The measurements were made at the Pelletron accelerator laboratory of the University of Sao Paulo with the Saci-Perere spectrometer [1], which consists of 4 GeHP Compton suppressed gamma detectors and a 4π charged particle ancillary system with 11 ΔE-E plastic phoswich scintillators (further details about the experimental procedure may be found in [2]). Calculations were performed with a new model based on the Sao Paulo Potential, specifically developed for the inclusion of dissipative processes like deep-inelastic collisions (DIC) [3,4], considering the Coulomb plus nuclear potential (with the aid of the FRESCO code [5]). The experimental cross sections were obtained as described in [6], including particle-gamma angular correlations, the finite size of the gamma and particle detectors, as well as the vacuum de-alignment effects [7] (caused by the hyperfine interaction) for the 110Pd inelastic reaction and for the 110Pd 2n transfer reaction. Also the effects of the beam spot size and energy loss in the target were included in these calculations. For these purposes a new code has been developed to assist in the data analysis. The gamma-particle angular correlations are calculated using the scattering amplitudes given by FRESCO. The theoretical predictions still consider two different normalization factors in the real part: 1.0 and 0.6, as proposed in [3] for the weakly bound projectile cases. The analyses indicate that the 0.6 factor better describes the experimental data, possibly due to the large density of states in the transitional region. 7. Determination of Cross-Sections of Fast-Muon-Induced Reactions to Cosmogenic Radionuclides CERN Multimedia Hagner, T; Heisinger, B; Niedermayer, M; Nolte, E; Oberauer, L; Schonert, S; Kubik, P W 2002-01-01 We propose to measure cross-sections for fast muon-induced production of radionuclides: firstly, to study the contribution of fast-muon-induced reactions to the in-situ production of cosmogenic radionuclides in the lithosphere. Concrete is used to simulate the rock and to generate a secondary particle shower. The reaction channels to be measured are: C to 10Be, O to 10Be and 14C, Si to 26Al, S to 26Al, Ca to 36Cl, Fe to 53Mn and 205Tl to 205Pb. The energy dependent cross-section can be described by one single parameter σ0 and the energy dependence Ē^0.7 on the mean energy Ē. The irradiations of the targets are done at CERN. The produced radionuclides are measured by accelerator mass spectrometry in Munich and Zurich. Secondly, muon induced signals can be a major source of background in experiments with low event rates located deep underground. We intend to study the produced radioactivity by fast-muon-ind... 8. Heavy-ion induced multinucleon transfer reactions in the 2s-1d shell International Nuclear Information System (INIS) Olmer, C. 1975-01-01 In order to investigate whether new nuclear structure information can be obtained from studying the direct transfer of more than two nucleons using heavy-ion projectiles, we have investigated the 28 Si( 16 O, 12 C) 32 S and 12 C( 14 N,d) 24 Mg reactions as candidates for the direct transfer of four- and twelve-nucleons, respectively.
The counter telescope-position sensitive detector kinematic coincidence method--both angular distributions (22 0 less than theta/sub L/ less than 95 0 , E/sub L/ = 55.54 MeV) and excitation functions (theta/sub L/ = 26 0 , 50 less than E/sub L/ less than 63 MeV) were obtained for strongly excited states below 10 MeV in excitation in the first reaction. For the 12 C + 14 N interaction, a measurement of the angular distributions (25 0 less than theta/sub L/ less than 140 0 , E/sub L/ = 20,25 MeV) for proton, deuteron and alpha-particle emission to many low-lying states sufficed for the present purposes. Comparison of Hauser-Feshbach statistical model calculations with these data indicated that the light-particle production from the 12 C + 14 N interaction as investigated here is predominantly compound nuclear in nature. The selectively strong population of a few states in 32 S by the 28 Si-( 16 O, 12 C) 32 S reaction is primarily direct. The structure of these states was deduced from available light-ion-induced transfer reaction studies and shell model calculations; the importance of shell model configurations is indicated, and an alpha-particle transfer model can not account for the observed selectivity. Calculations of the 28 Si( 16 O, 12 C) 32 S reaction with a microscopic multinucleon transfer code indicate selectivities consistent with the present results. Moreover, the calculations suggest the presence of other, unexpected selectivities, all of which may be understood on a physical basis, and some of which appear as an extension of a similar effect seen in two-nucleon transfer reactions 9. Reactions induced by low energy electrons in cryogenic films International Nuclear Information System (INIS) Bass, A.D.; Sanche, L. 2003-01-01 We review recent research on reactions (including dissociation) initiated by low-energy electron bombardment of monolayer and multilayer molecular solids at cryogenic temperatures. With incident electrons of energies below 20 eV, dissociation is observed by the electron stimulated desorption (ESD) of anions from target films and is attributed to the processes of dissociative electron attachment (DEA) and to dipolar dissociation. It is shown that DEA to condensed molecules is sensitive to environmental factors such as the identity of co-adsorbed species and film morphology. The effects of image-charge induced polarization on cross-sections for DEA to CH3Cl are also discussed. Taking as examples, the electron-induced production of CO within multilayer films of methanol and acetone, it is shown that the detection of electronic excited states by high resolution electron energy loss spectroscopy can be used to monitor electron beam damage. In particular, the incident energy dependence of the CO indicates that below 19 eV, dissociation proceeds via the decay of transient negative ions (TNI) into electronically excited dissociative states. The electron induced dissociation of biomolecular targets is also considered, taking as examples the ribose analog tetrahydrofuran and DNA bases adenine and thymine, cytosine and guanine. The ESD of anions from such films also show dissociation via the formation of TNI. In multilayer molecular solids, fragment species resulting from dissociation, may react with neighboring molecules, as is demonstrated in anion ESD measurements from films containing O 2 and various hydrocarbon molecules. 
X-ray photoelectron spectroscopy measurements reported for electron irradiated monolayers of H 2 O and CF 4 on a Si-H passivated surface further show that DEA is an important initial step in the electron-induced chemisorption of fragment species 10. Laser induced sonofusion: A new road toward thermonuclear reactions Energy Technology Data Exchange (ETDEWEB) Sadighi-Bonabi, Rasoul, E-mail: [email protected] [Sharif University of Technology, P.O. Box 11365-91, Tehran (Iran, Islamic Republic of); Gheshlaghi, Maryam [Payame noor University, P.O. Box 19395-3697, Tehran (Iran, Islamic Republic of); Laser and optics research school, Nuclear Science and Technology Research Institute (NSTRL), P.O. Box 14155-1339, Tehran (Iran, Islamic Republic of) 2016-03-15 The possibility of laser assisted sonofusion is studied via single bubble sonoluminescence (SBSL) in deuterated acetone (C₃D₆O) using quasi-adiabatic and hydro-chemical simulations at ambient temperatures of 0 and −28.5 °C. The interior temperature of the produced bubbles in deuterated acetone is 1.6 × 10⁶ K in the hydro-chemical model and reaches up to 1.9 × 10⁶ K in the laser induced SBSL bubbles. Under these circumstances, temperatures up to 10⁷ K can be produced in the center of the bubble, at which thermonuclear D-D fusion reactions are promising under controlled conditions. 11. The lasting effect of limonene-induced particle formation on air quality in a genuine indoor environment. Science.gov (United States) Rösch, Carolin; Wissenbach, Dirk K; von Bergen, Martin; Franck, Ulrich; Wendisch, Manfred; Schlink, Uwe 2015-09-01 Atmospheric ozone-terpene reactions, which form secondary organic aerosol (SOA) particles, can affect indoor air quality when outdoor air mixes with indoor air during ventilation. This study, conducted in Leipzig, Germany, focused on limonene-induced particle formation in a genuine indoor environment (24 m³). Particle number, limonene and ozone concentrations were monitored during the whole experimental period. After manual ventilation for 30 min, during which indoor ozone levels reached up to 22.7 ppb, limonene was introduced into the room at concentrations of approximately 180 to 250 μg m⁻³. We observed strong particle formation and growth within a diameter range of 9 to 50 nm under real-room conditions. Larger particles with diameters above 100 nm were less affected by limonene introduction. The total particle number concentrations (TPNCs) after limonene introduction clearly exceeded outdoor values by a factor of 4.5 to 41, reaching maximum concentrations of up to 267,000 particles cm⁻³. The formation strength was influenced by background particles, which attenuated the formation of new SOA with increasing concentration, and by ozone levels, an increase of which by 10 ppb will result in a six times higher TPNC. This study emphasizes that indoor environments are preferred locations for particle formation and growth after ventilation events. As a consequence, SOA formation can produce significantly higher amounts of particles than are transported by ventilation into the indoor air. 12. Radiation induced chemical reaction of carbon monoxide and hydrogen mixture International Nuclear Information System (INIS) Sugimoto, Shun-ichi; Nishii, Masanobu 1985-01-01 Previous studies of radiation induced chemical reactions of the CO-H 2 mixture have revealed that the yields of oxygen containing products were larger than those of hydrocarbons.
In the present study, methane was added to CO-H 2 mixture in order to increase further the yields of the oxygen containing products. The yields of most products except a few products such as formaldehyde increased with the addition of small amount of methane. Especially, the yields of trioxane and tetraoxane gave the maximum values when CO-H 2 mixture containing 1 mol% methane was irradiated. When large amounts of methane were added to the mixture, the yields of aldehydes and carboxylic acids having more than two carbon atoms increased, whereas those of trioxane and tetraoxane decreased. From the study at reaction temperature over the range of 200 to 473 K, it was found that the yields of aldehydes and carboxylic acids showed maxima at 323 K. The studies on the effects of addition of cationic scavenger (NH 3 ) and radical scavenger (O 2 ) on the products yields were also carried out on the CO-H 2 -CH 4 mixture. (author) 13. Production cross sections of proton-induced reactions on yttrium Energy Technology Data Exchange (ETDEWEB) Yang, Sung-Chul; Song, Tae-Yung; Lee, Young-Ouk [Nuclear Data Center, Korea Atomic Energy Research Institute, Daejeon 34057 (Korea, Republic of); Kim, Guinyun, E-mail: [email protected] [Department of Physics, Kyungpook National University, Daegu 41566 (Korea, Republic of) 2017-05-01 The production cross sections of residual radionuclides such as {sup 86,88,89g}Zr, {sup 86g,87m,87g,88}Y, {sup 83g,85g}Sr, and {sup 83,84g}Rb in the {sup 89}Y(p,x) reaction were measured using a stacked-foil activation and offline γ-ray spectrometric technique with proton energies of 57 MeV and 69 MeV at the 100 MeV proton linac in the Korea Multi-purpose Accelerator Complex (KOMAC), Gyeongju, Korea. The induced activities of the activated samples were measured using a high purity germanium (HPGe) detector, and the proton flux was determined using the {sup nat}Cu(p,x){sup 62}Zn reaction. The measured data was compared with other experimental data and the data from the TENLD-2015 library based on the TALYS code. The present results are generally lower than those in literature, but are found to be in agreement with the shape of the excitation functions. The integral yields for the thick target using the measured cross sections are given. 14. Synthesis and characterization of ZnGa2O4 particles prepared by solid state reaction International Nuclear Information System (INIS) Can, Musa Mutlu; Hassnain Jaffari, G.; Aksoy, Seda; Shah, S. Ismat; Fırat, Tezer 2013-01-01 Highlights: ► Synthesis of ZnGa 2 O 4 particles produced from metallic Zn and Ga particles. ► The structural comparison of spinel and partially inverse spinel structure in ZnGa 2 O 4 . ► The Ga atoms occupied 13% of tetrahedral site in ZnGa 2 O 4 . ► The band gap, calculated from climate point of UV–visible, was found as 4.6 ± 0.1 eV. ► The optical analyses were shown defective ZnO structure in ZnGa 2 O 4 . - Abstract: We employed solid state reaction technique to synthesize ZnGa 2 O 4 particles, produced in steps of mixing/milling the ingredients in H 2 O following thermal treating under 1200 °C. We compare spinel and partially inverse spinel structure in ZnGa 2 O 4 particles using Rietveld refinement. Crystal structure of ZnGa 2 O 4 particles was identified with two structural phases; normal spinel structure and partially inverse spinel structure using Rietveld refinement. It is found that the partially inverse spinel structures occupy nearly 13% and the rest is normal spinel structure. 
The obtained X-ray diffraction data show that lattice constant and the position of Oxygen atoms remain almost constant in both structures. The characterization of the particles was also improved using X-ray photoelectron spectroscopy and Fourier transforms infrared spectroscopy measurements. The optical analyses were done with UV–visible spectroscopy. The band gap, calculated from climate point of UV–visible data, was found as 4.6 ± 0.1 eV. Despite no unexpected compound (such as ZnO and Ga 2 O 3 ) in the structure, the optical analyses were shown defective ZnO structure in ZnGa 2 O 4 . 15. Shear-induced particle migration in suspensions of rods Energy Technology Data Exchange (ETDEWEB) Mondy, L.A. (Sandia National Laboratories, Albuquerque, New Mexico 87185 (United States)); Brenner, H. (Department of Chemical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139 (United States)); Altobelli, S.A. (The Lovelace Institutes, 2425 Ridgecrest Drive, S. E., Albuquerque, New Mexico 87108 (United States)); Abbott, J.R.; Graham, A.L. (Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States)) 1994-03-01 Shear-induced migration of particles occurs in suspensions of neutrally buoyant spheres in Newtonian fluids undergoing shear in the annular space between two rotating, coaxial cylinders (a wide-gap Couette), even when the suspension is in creeping flow. Previous studies have shown that the rate of migration of spherical particles from the high-shear-rate region near the inner (rotating) cylinder to the low-shear-rate region near the outer (stationary) cylinder increases rapidly with increasing sphere size. To determine the effect of particle shape, the migration of rods suspended in Newtonian fluids was recently measured. The behavior of several suspensions was studied. Each suspension contained well-characterized, uniform rods with aspect ratios ranging from 2 to 18 at either 0.30 or 0.40 volume fraction. At the same volume fraction of solids, the steady-state, radial concentration profiles for rods were independent of aspect ratio and were indistinguishable from those obtained from suspended spheres. Only minor differences near the walls (attributable to the finite size of the rods relative to the curvature of the walls) appeared to differentiate the profiles. Data taken during the transition from a well-mixed suspension to the final steady state show that the rate of migration increased as the volume of the individual rods increased. 16. Products and kinetics of the heterogeneous reaction of suspended vinclozolin particles with ozone. Science.gov (United States) Gan, Jie; Yang, Bo; Zhang, Yang; Shu, Xi; Liu, Changgeng; Shu, Jinian 2010-11-25 Vinclozolin is a widely used fungicide that can be released into the atmosphere via application and volatilization. This paper reports an experimental investigation on the heterogeneous ozonation of vinclozolin particles. The ozonation of vinclozolin adsorbed on azelaic acid particles under pseudo-first-order conditions is investigated online with a vacuum ultraviolet photoionization aerosol time-of-flight mass spectrometer (VUV-ATOFMS). The ozonation products are analyzed with a combination of VUV-ATOFMS and GC/MS. Two main ozonation products are observed. The formation of the ozonation products results from addition of O(3) on the C-C double bond of the vinyl group. 
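For the vinclozolin ozonation entry above, the pseudo-first-order treatment it mentions ties together the bimolecular rate constant, the ozone number density, and the particle lifetime quoted in the next sentence. As a minimal worked sketch (the conversion of 100 ppbv O3 to a number density of roughly 2.5 × 10^12 molecules cm^-3 at 298 K and 1 atm is my assumption, not a figure stated in the abstract):

\[
\tau = \frac{1}{k_{\mathrm{O_3}}\,[\mathrm{O_3}]}
\approx \frac{1}{\left(2.4\times10^{-17}\ \mathrm{cm^3\,molecule^{-1}\,s^{-1}}\right)\left(2.5\times10^{12}\ \mathrm{cm^{-3}}\right)}
\approx 1.7\times10^{4}\ \mathrm{s} \approx 4.6\ \mathrm{h},
\]

which is consistent with the 4.3 ± 0.7 h lifetime reported below.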
The heterogeneous reactive rate constant of vinclozolin particles at room temperature is (2.4 ± 0.4) × 10⁻¹⁷ cm³ molecule⁻¹ s⁻¹, with a corresponding lifetime at 100 ppbv O₃ of 4.3 ± 0.7 h, which is almost comparable with the estimated lifetime due to the reaction with atmospheric OH radicals (∼1.7 h). The reactive uptake coefficient for O₃ on vinclozolin particles is (6.1 ± 1.0) × 10⁻⁴. 17. Applications of particle induced X-ray emission International Nuclear Information System (INIS) Akselsson, K. R. 1978-01-01 In Particle Induced X-ray Emission (PIXE) analysis samples are bombarded by protons or α-particles of a few MeV/u. The induced characteristic x-rays are detected with an x-ray detector, e.g. a Si(Li) detector. The energies of the x-ray peaks are characteristic for the elements in the samples and the intensities of the x-ray transitions are proportional to the abundances of the elements. The research area which first attracted those of us working with PIXE was the study of sources, transport and deposition of airborne particulates. Beyond sources, transport and wet deposition, other applications where PIXE is already known to be competitive are trace elemental analysis of water below the ppb level and analyses requiring a space resolution of 1-10 μm. However, there is still much to do for physicists in developing the full potential of low-energy accelerators as analytical tools in multidisciplinary teams. (JIW) 18. Fireworks induced particle pollution: A spatio-temporal analysis Science.gov (United States) Kumar, M.; Singh, R. K.; Murari, V.; Singh, A. K.; Singh, R. S.; Banerjee, T. 2016-11-01 Diwali-specific firework induced particle pollution was measured in terms of aerosol mass loading, type, optical properties and vertical distribution. The entire nation exhibited an increase in particulate concentrations, specifically in the Indo-Gangetic Plain (IGP). Aerosol surface mass loading at the middle IGP revealed an increase of 56-121% during festival days in comparison to background concentrations. Space-borne measurements (Aqua and Terra-MODIS) typically identified the IGP with moderate to high AOD (0.3-0.8) during pre-festive days, which transformed to very high AOD (0.4-1.8) on Diwali-day with accumulation of aerosol fine mode fractions (0.3-1.0). Most of the aerosol surface monitoring stations exhibited an increase in PM2.5 especially on Diwali-day, while PM10 exhibited an increase on subsequent days. Elemental compositions strongly support K, Ba, Sr, Cd, S and P to be considered as firework tracers. The upper and middle IGP revealed dominance of absorbing aerosols (OMI-AI: 0.80-1.40), while CALIPSO altitude-orbit-cross-section profiles established the presence of polluted dust which was eventually modified by association with smoke and polluted continental aerosol during extreme fireworks. These Diwali-specific observations have implications for associating firework-induced particle pollution with human health, while their inclusion should improve regional air quality models. 19. Single-particle states in ^112Cd probed with the ^111Cd(d,p) reaction Science.gov (United States) Garrett, P. E.; Jamieson, D.; Demand, G. A.; Finlay, P.; Green, K. L.; Leach, K. G.; Phillips, A. A.; Sumithrarachchi, C. S.; Svensson, C. E.; Triambak, S.; Wong, J.; Ball, G. C.; Hertenberger, R.; Wirth, H.-F.; Krücken, R.; Faestermann, T. 2009-10-01 As part of a program of detailed spectroscopy of the Cd isotopes, the single-particle neutron states in ^112Cd have been probed with the ^111Cd(d,p) reaction.
Beams of polarized 22 MeV deuterons, obtained from the LMU/TUM Tandem Accelerator, bombarded a target of ^111Cd. The protons from the reaction, corresponding to excitation energies up to 3 MeV in ^112Cd, were momentum analyzed with the Q3D spectrograph. Cross sections and analyzing powers were fit to results of DWBA calculations, and spectroscopic factors were determined. The results from the experiment, and implications for the structure of ^112Cd, will be presented. 20. Light particle emission measurements in heavy ion reactions: Progress report, June 1, 1986-May 31, 1987 International Nuclear Information System (INIS) Petitt, G.A. 1987-01-01 During the past year we have completed our work on neutron emission in coincidence with fission fragments from the 158 Er system. In addition to this we have completed preliminary analysis of our results on neutron emission from products of damped reactions between 58 Ni and 165 Ho at 930 MeV. Two experiments were planned for the present contract period as discussed in our proposal for 1986-87. One of these, to measure the mass and charge distributions from projectile-like fragments (PLF) in the reactions 58 Ni + 165 Ho and 58 Ni + 58 Ni using the time-of-flight facility at the HHIRF has been successfully completed. The other, to measure momentum correlations between neutrons and charged particles produced in central collisions between 32 S + 197 Au is scheduled to be run in mid-February. 14 refs., 4 figs 1. Theory of nuclear reactions with participation of slow charged particles in solids International Nuclear Information System (INIS) Barts, B.I.; Barts, D.B.; Grinenko, A.A. 1992-01-01 In the last two years, there has been a sharp increase of interest in various aspects of the interaction of nuclear particles in solids. This is due, above all, to the sensational reports of the possibility that deuteron fusion reactions take place at normal temperatures. At the present time, it is clear that, among the various factors, an important role for the understanding of this remarkable phenomenon is played by crystal fields that significantly change the tail of the Coulomb barrier and, thus, its penetrability. Here, in connection with the problem of the cold fusion of deuterons, an analysis is made of the influence of screening of the deuteron charges by electrons of the crystal on the penetrability of the Coulomb barrier. A study is made of the reaction-enhancement method in the case when the deuterons move in the general crystal potential well near one of the minima of the crystal potential 2. Multiscale simulations of anisotropic particles combining molecular dynamics and Green's function reaction dynamics Science.gov (United States) Vijaykumar, Adithya; Ouldridge, Thomas E.; ten Wolde, Pieter Rein; Bolhuis, Peter G. 2017-03-01 The modeling of complex reaction-diffusion processes in, for instance, cellular biochemical networks or self-assembling soft matter can be tremendously sped up by employing a multiscale algorithm which combines the mesoscopic Green's Function Reaction Dynamics (GFRD) method with explicit stochastic Brownian, Langevin, or deterministic molecular dynamics to treat reactants at the microscopic scale [A. Vijaykumar, P. G. Bolhuis, and P. R. ten Wolde, J. Chem. Phys. 143, 214102 (2015)]. Here we extend this multiscale MD-GFRD approach to include the orientational dynamics that is crucial to describe the anisotropic interactions often prevalent in biomolecular systems. 
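To make the multiscale MD-GFRD entry above more concrete at the microscopic (particle-based) level only, the following is a minimal, hypothetical Python sketch of one overdamped Brownian-dynamics step that includes the orientational dynamics of a single "patch" vector. The time step, diffusion constants, and the absence of any GFRD coupling or interparticle interactions are assumptions made purely for illustration, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (assumed) parameters -- not values from the paper
dt = 1e-4    # time step
kT = 1.0     # thermal energy
D_t = 1.0    # translational diffusion constant
D_r = 3.0    # rotational diffusion constant


def brownian_step(pos, patch, force=np.zeros(3), torque=np.zeros(3)):
    """One overdamped Brownian-dynamics step for a single patchy particle.

    pos   : centre-of-mass position (3,)
    patch : unit vector along the particle's patch, i.e. its orientation (3,)
    """
    # Translation: deterministic drift from the force plus Gaussian noise
    pos = pos + (D_t / kT) * force * dt + np.sqrt(2.0 * D_t * dt) * rng.normal(size=3)

    # Rotation: random angular displacement (plus torque drift), applied to the
    # patch vector with Rodrigues' rotation formula
    omega = (D_r / kT) * torque * dt + np.sqrt(2.0 * D_r * dt) * rng.normal(size=3)
    angle = np.linalg.norm(omega)
    if angle > 0.0:
        axis = omega / angle
        patch = (patch * np.cos(angle)
                 + np.cross(axis, patch) * np.sin(angle)
                 + axis * np.dot(axis, patch) * (1.0 - np.cos(angle)))
        patch /= np.linalg.norm(patch)
    return pos, patch


# Example: free translational and rotational diffusion of one particle
position, orientation = np.zeros(3), np.array([0.0, 0.0, 1.0])
for _ in range(1000):
    position, orientation = brownian_step(position, orientation)
print(position, orientation)
```

In the multiscale scheme summarized above, explicit steps of this kind would be reserved for particles that are close together, while isolated particles and pairs are propagated over much larger jumps using analytical Green's functions.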
We present the novel algorithm focusing on Brownian dynamics only, although the methodology is generic. We illustrate the novel algorithm using a simple patchy particle model. After validation of the algorithm, we discuss its performance. The rotational Brownian dynamics MD-GFRD multiscale method will open up the possibility for large scale simulations of protein signalling networks. 3. The measurement of cross sections of inelastic and transfer reactions with gamma-particle coincidence International Nuclear Information System (INIS) Zagatto, V.A.B.; Oliveira, J.R.B.; Pereira, D.; Allegro, P.R.P.; Chamon, L.C.; Cybulska, E.W.; Medina, N.H.; Ribas, R.V.; Rossi Junior, E.S.; Seale, W.A.; Silva, C.P.; Gasques, L.; Toufen, D.L.; Silveira, M.A.G.; Zahn, G.S.; Genezini, F.A.; Shorto, J.M.B.; Lubian, J.; Linares, R. 2011-01-01 Full text: The following work aims to obtain experimental reaction cross sections of inelastic excitation and transfer to excited states reactions (both measured by gamma-particle coincidences) and its comparison with theoretical predictions based in a new model based on the Sao Paulo Potential. The measurements were made at the Pelletron accelerator laboratory of the University of Sao Paulo with the Saci-Perere spectrometer, which consists of 4 a GeHP Compton suppressed gamma detectors and a 4 π charged particle ancillary system with 11ΔΕ - Ε plastic phoswich scintillators (further details about the experimental procedure may be found in: J.R.B. Oliveira et al., XVIII International School on Nuclear Physics, Neutron Physics and Applications (2009). Theoretical angular distribution calculations (using code GOSIA) were performed with a new model based on the Sao Paulo Potential, specifically developed for the inclusion of dissipative processes like deep-inelastic collisions (DIC) considering the Coulomb plus nuclear potential (with the aid of code FRESCO). The experimental cross sections were obtained such as described in J.R.B. Oliveira et al however, in this work, the particle-gamma angular correlations and the vacuum de-alignment effects (caused by hyperfine interaction) were finally added for the 110 Pd inelastic reaction and for the 112 Pd transfer reaction. For these purposes a new code has been developed to assist in the data analysis. We take into account the particle-gamma angular correlations using the scattering amplitudes given by FRESCO, considering the vacuum de-alignment effects as proposed by A. Abragam and R. V. Pound, Phys. Rev. 92, 943 (1953). The theoretical predictions still consider 2 different types of Sao Paulo Potential, the first one has a multiplying factor equals to 1.0 in the real part of the potential and the second considers this factor equals to 0.6, as proposed in D. Pereira, J. Lubian, J.R.B. Oliveira, D.P. de Sousa and L 4. Two reactions method for accurate analysis by irradiation with charged particles International Nuclear Information System (INIS) Ishii, K.; Sastri, C.S.; Valladon, M.; Borderie, B.; Debrun, J.L. 1978-01-01 In the average stopping power method the formula error itself was negligible but systematic errors could be introduced by the stopping power data used in this formula. A method directly derived from the average stopping power method, but based on the use of two nuclear reactions, is described here. This method has a negligible formula error and does not require the use of any stopping power or range data: accurate and 'self-consistent' analysis by irradiation with charged particles is then possible. (Auth.) 5. 
The TDF System for Thermonuclear Plasma Reaction Rates, Mean Energies and Two-Body Final State Particle Spectra International Nuclear Information System (INIS) Warshaw, S I 2001-01-01 The rate of thermonuclear reactions in hot plasmas as a function of local plasma temperature determines the way in which thermonuclear ignition and burning proceeds in the plasma. The conventional model approach to calculating these rates is to assume that the reacting nuclei in the plasma are in Maxwellian equilibrium at some well-defined plasma temperature, over which the statistical average of the reaction rate quantity σv is calculated, where σ is the cross-section for the reaction to proceed at the relative velocity v between the reacting particles. This approach is well-understood and is the basis for much nuclear fusion and astrophysical nuclear reaction rate data. The Thermonuclear Data File (TDF) system developed at the Lawrence Livermore National Laboratory (Warshaw 1991), which is the topic of this report, contains data on the Maxwellian-averaged thermonuclear reaction rates for various light nuclear reactions and the correspondingly Maxwellian-averaged energy spectra of the particles in the final state of those reactions as well. This spectral information closely models the output particle and energy distributions in a burning plasma, and therefore leads to more accurate computational treatments of thermonuclear burn, output particle energy deposition and diagnostics, in various contexts. In this report we review and derive the theoretical basis for calculating Maxwellian-averaged thermonuclear reaction rates, mean particle energies, and output particle spectral energy distributions for these reactions in the TDF system. The treatment of the kinematics is non-relativistic. The current version of the TDF system provides exit particle energy spectrum distributions for two-body final state reactions only. In a future report we will discuss and describe how output particle energy spectra for three- and four-body final states can be developed for the TDF system. We also include in this report a description of the algorithmic implementation of the TDF 6. Alpha particles induce expression of immunogenic markers on tumour cells International Nuclear Information System (INIS) Gorin, J.B.; Gouard, S.; Cherel, M.; Davodeau, F.; Gaschet, J.; Morgenstern, A.; Bruchertseifer, F. 2013-01-01 The full text of the publication follows. Radioimmunotherapy (RIT) is an approach aiming at targeting the radioelements to tumours, usually through the use of antibodies specific for tumour antigens. The radiations emitted by the radioelements then induce direct killing of the targeted cells as well as indirect killing through bystander effect. Interestingly, it has been shown that ionizing radiations, in some settings of external radiotherapy, can foster an immune response directed against tumour cells. Our research team is dedicated to the development of alpha RIT, i.e RIT using alpha particle emitters, we therefore decided to study the effects of such particles on tumour cells in regards to their immunogenicity. First, we studied the effects of bismuth 213, an alpha emitter, on cellular death and autophagy in six different tumour cell lines. Then, we measured the expression of 'danger' signals and MHC molecules at the cell surface to determine whether irradiation with 213 Bi could cause the tumour cells to be recognized by the immune system. 
Finally a co-culture of dendritic cells with irradiated tumour cells was performed to test whether it would induce dendritic cells to mature. No apoptosis was detected within 48 hours after irradiation in any cell line, however half of them exhibited signs of autophagy. No increase in membrane expression of 'danger' signals was observed after treatment with 213 Bi, but we showed an increase in expression of MHC class I and II for some cell lines. Moreover, the co-culture experiment indicated that the immunogenicity of a human adenocarcinoma cell line (LS 174T) was enhanced in vitro after irradiation with alpha rays. These preliminary data suggest that alpha particles could be of interest in raising an immune response associated to RIT. (authors) 7. Particle transport model sensitivity on wave-induced processes Science.gov (United States) Staneva, Joanna; Ricker, Marcel; Krüger, Oliver; Breivik, Oyvind; Stanev, Emil; Schrum, Corinna 2017-04-01 Different effects of wind waves on the hydrodynamics in the North Sea are investigated using a coupled wave (WAM) and circulation (NEMO) model system. The terms accounting for the wave-current interaction are: the Stokes-Coriolis force, the sea-state dependent momentum and energy flux. The role of the different Stokes drift parameterizations is investigated using a particle-drift model. Those particles can be considered as simple representations of either oil fractions, or fish larvae. In the ocean circulation models the momentum flux from the atmosphere, which is related to the wind speed, is passed directly to the ocean and this is controlled by the drag coefficient. However, in the real ocean, the waves play also the role of a reservoir for momentum and energy because different amounts of the momentum flux from the atmosphere is taken up by the waves. In the coupled model system the momentum transferred into the ocean model is estimated as the fraction of the total flux that goes directly to the currents plus the momentum lost from wave dissipation. Additionally, we demonstrate that the wave-induced Stokes-Coriolis force leads to a deflection of the current. During the extreme events the Stokes velocity is comparable in magnitude to the current velocity. The resulting wave-induced drift is crucial for the transport of particles in the upper ocean. The performed sensitivity analyses demonstrate that the model skill depends on the chosen processes. The results are validated using surface drifters, ADCP, HF radar data and other in-situ measurements in different regions of the North Sea with a focus on the coastal areas. The using of a coupled model system reveals that the newly introduced wave effects are important for the drift-model performance, especially during extremes. Those effects cannot be neglected by search and rescue, oil-spill, transport of biological material, or larva drift modelling. 8. Changes in the surface electronic states of semiconductor fine particles induced by high energy ion irradiation Energy Technology Data Exchange (ETDEWEB) Yamaki, Tetsuya; Asai, Keisuke; Ishigure, Kenkichi [Tokyo Univ. (Japan); Shibata, Hiromi 1997-03-01 The changes in the surface electronic states of Q-sized semiconductor particles in Langmuir-Blodgett (LB) films, induced by high energy ion irradiation, were examined by observation of ion induced emission and photoluminescence (PL). Various emission bands attributed to different defect sites in the band gap were observed at the initial irradiation stage. 
As the dose increased, the emissions via the trapping sites decreased in intensity while the band-edge emission developed. This suggests that the ion irradiation would remove almost all the trapping sites in the band gap. The low energy emissions, which show a multiexponential decay, were due to a donor-acceptor recombination between the deeply trapped carriers. It was found that the processes of formation, reaction, and stabilization of the trapping sites would predominantly occur under the photooxidizing conditions. (author) 9. Evaluation of Neutron Induced Reactions for 32 Fission Products Energy Technology Data Exchange (ETDEWEB) Kim, Hyeong Il 2007-02-15 sections for almost all reaction channels including photon production, energy spectra of emitted particles and their angular distributions, as well as recoils. The evaluated data were converted into ENDF-6 formatted files, checked by a set of the CSEWG checking codes, processed with the code NJOY-99.161, and subject to test runs with the code MCNP5 to ensure that the files can be used in transport calculations. All evaluations were adopted by the new U.S. evaluated library, ENDF/B-VII.0, released in December 2006. 10. Evaluation of Neutron Induced Reactions for 32 Fission Products International Nuclear Information System (INIS) Kim, Hyeong Il 2007-02-01 Neutron cross sections for 32 fission products were evaluated in the neutron-incident energy range from 10 -5 eV to 20 MeV. The list of fission products consists of the priority materials for several applications, extended to cover complete isotopic chains for three elements. The full list includes 8 individual isotopes, 95 Mo, 101 Ru, 103 Rh, 105 Pd, 109 Ag, 131 Xe, 133 Cs, 141 Pr, and 24 isotopes in complete isotopic chains for Nd (8), Sm (9) and Dy (7). Our evaluation methodology covers both the low energy region and the fast neutron region. In the low energy region, our evaluations are based on the latest data published in the Atlas of Neutron Resonances. This resource was used to infer both the thermal values and the resolved resonance parameters that were validated against the capture resonance integrals. In the unresolved resonance region we performed the additional evaluation by using the averages of the resolved resonances and adjusting them to the experimental data. In the fast neutron region our evaluations are based on the nuclear reaction model code EMPIRE-2.19 validated against the experimental data. EMPIRE is the modular system of codes consisting of many nuclear reaction models, including the spherical and deformed Optical Model, Hauser-Feshbach theory with the width fluctuation correction and complete gamma-ray emission cascade, DWBA, Multi-step Direct and Multi-step Compound models, and several versions of the phenomenological preequilibrium models. The code is equipped with a powerful GUI, allowing easy access to support libraries such as RIPL and CSISRS, the graphical package, as well as the utility codes for formatting and checking. In general, in our calculations we used the Reference Input Parameter Library, RIPL, for the initial set of model parameters. These parameters were properly adjusted to reproduce the available experimental data taken from the CSISRS library. Our evaluations cover cross sections for almost all reaction channels 11. Quasifree (p, 2p) Reactions on Oxygen Isotopes: Observation of Isospin Independence of the Reduced Single-Particle Strength Science.gov (United States) Atar, L.; Paschalis, S.; Barbieri, C.; Bertulani, C.
A.; Díaz Fernández, P.; Holl, M.; Najafi, M. A.; Panin, V.; Alvarez-Pol, H.; Aumann, T.; Avdeichikov, V.; Beceiro-Novo, S.; Bemmerer, D.; Benlliure, J.; Boillos, J. M.; Boretzky, K.; Borge, M. J. G.; Caamaño, M.; Caesar, C.; Casarejos, E.; Catford, W.; Cederkall, J.; Chartier, M.; Chulkov, L.; Cortina-Gil, D.; Cravo, E.; Crespo, R.; Dillmann, I.; Elekes, Z.; Enders, J.; Ershova, O.; Estrade, A.; Farinon, F.; Fraile, L. M.; Freer, M.; Galaviz Redondo, D.; Geissel, H.; Gernhäuser, R.; Golubev, P.; Göbel, K.; Hagdahl, J.; Heftrich, T.; Heil, M.; Heine, M.; Heinz, A.; Henriques, A.; Hufnagel, A.; Ignatov, A.; Johansson, H. T.; Jonson, B.; Kahlbow, J.; Kalantar-Nayestanaki, N.; Kanungo, R.; Kelic-Heil, A.; Knyazev, A.; Kröll, T.; Kurz, N.; Labiche, M.; Langer, C.; Le Bleis, T.; Lemmon, R.; Lindberg, S.; Machado, J.; Marganiec-Gałązka, J.; Movsesyan, A.; Nacher, E.; Nikolskii, E. Y.; Nilsson, T.; Nociforo, C.; Perea, A.; Petri, M.; Pietri, S.; Plag, R.; Reifarth, R.; Ribeiro, G.; Rigollet, C.; Rossi, D. M.; Röder, M.; Savran, D.; Scheit, H.; Simon, H.; Sorlin, O.; Syndikus, I.; Taylor, J. T.; Tengblad, O.; Thies, R.; Togano, Y.; Vandebrouck, M.; Velho, P.; Volkov, V.; Wagner, A.; Wamers, F.; Weick, H.; Wheldon, C.; Wilson, G. L.; Winfield, J. S.; Woods, P.; Yakorev, D.; Zhukov, M.; Zilges, A.; Zuber, K.; R3B Collaboration 2018-01-01 Quasifree one-proton knockout reactions have been employed in inverse kinematics for a systematic study of the structure of stable and exotic oxygen isotopes at the R3B /LAND setup with incident beam energies in the range of 300 - 450 MeV /u . The oxygen isotopic chain offers a large variation of separation energies that allows for a quantitative understanding of single-particle strength with changing isospin asymmetry. Quasifree knockout reactions provide a complementary approach to intermediate-energy one-nucleon removal reactions. Inclusive cross sections for quasifree knockout reactions of the type O A (p ,2 p )N-1A have been determined and compared to calculations based on the eikonal reaction theory. The reduction factors for the single-particle strength with respect to the independent-particle model were obtained and compared to state-of-the-art ab initio predictions. The results do not show any significant dependence on proton-neutron asymmetry. 12. Subway particles are more genotoxic than street particles and induce oxidative stress in cultured human lung cells. Science.gov (United States) Karlsson, Hanna L; Nilsson, Lennart; Möller, Lennart 2005-01-01 Epidemiological studies have shown an association between airborne particles and a wide range of adverse health effects. The mechanisms behind these effects include oxidative stress and inflammation. Even though traffic gives rise to high levels of particles in the urban air, people are exposed to even higher levels in the subway. However, there is a lack of knowledge regarding how particles from different urban subenvironments differ in toxicity. The main aim of the present study was to compare the ability of particles from a subway station and a nearby very busy urban street, respectively, to damage DNA and to induce oxidative stress. Cultured human lung cells (A549) were exposed to particles, DNA damage was analyzed using single cell gel electrophoresis (the comet assay), and the ability to induce oxidative stress was measured as 8-oxo-7,8-dihydro-2'-deoxyguanosine (8-oxodG) formation in lung cell DNA. 
We found that the subway particles were approximately eight times more genotoxic and four times more likely to cause oxidative stress in the lung cells. When the particles, water extracts from the particles, or particles treated with the metal chelator deferoxamine mesylate were incubated with 2'-deoxyguanosine (dG) and 8-oxodG was analyzed, we found that the oxidative capacity of the subway particles was due to redox active solid metals. Furthermore, analysis of the atomic composition showed that the subway particles to a dominating degree (atomic %) consisted of iron, mainly in the form of magnetite (Fe3O4). By using electron microscopy, the interaction between the particles and the lung cells was shown. The in vitro reactivity of the subway particles in combination with the high particle levels in subway systems gives cause for concern due to the high number of people that are exposed to subway particles on a daily basis. To what extent the subway particles cause health effects in humans needs to be further evaluated. 13. Atomic collisions by neutrons-induced charged particles in water, protein and nucleic acid International Nuclear Information System (INIS) Bergman, R. 1976-01-01 The action of slow charged particles is peculiar in that atomic collisions are commonly involved. In atomic collisions, which are rare events when fast particles interact with matter, displacement of atoms and chemical bond-breakage is possible. Sufficiently energetic neutrons generate charged recoil particles in matter. Some of these are slow as compared to orbital electrons, but the energy transferred to such slow particles is generally relatively small. Yet, it contributes significantly to the dose absorbed from 0.1-30 keV neutrons. In tissue all recoils induced by neutrons of less than 30 keV are slow, and above 0.1 keV the absorbed dose due to collision dominates over that due to capture reactions. The aim of the present paper is to identify those intervals of neutron energy in which atomic collision damage is most probable in living matter. The results of calculations presented here indicate that atomic collisions should be most significant for 0.5-3 keV neutrons. (author) 14. Reaction pathways of producing and losing particles in atmospheric pressure methane nanosecond pulsed needle-plane discharge plasma Science.gov (United States) Zhao, Yuefeng; Wang, Chao; Li, Li; Wang, Lijuan; Pan, Jie 2018-03-01 In this work, a two-dimensional fluid model is built up to numerically investigate the reaction pathways of producing and losing particles in atmospheric pressure methane nanosecond pulsed needle-plane discharge plasma. The calculation results indicate that the electron collisions with CH4 are the key pathways to produce the neutral particles CH2 and CH as well as the charged particles e and CH3+. CH3, H2, H, C2H2, and C2H4 primarily result from the reactions between the neutral particles and CH4. The charge transfer reactions are the significant pathways to produce CH4+, C2H2+, and C2H4+. As to the neutral species CH and H and the charged species CH3+, the reactions between themselves and CH4 contribute to substantial losses of these particles. The ways responsible for losing CH3, H2, C2H2, and C2H4 are CH3 + H → CH4, H2 + CH → CH2 + H, CH4+ + C2H2 → C2H2+ + CH4, and CH4+ + C2H4 → C2H4+ + CH4, respectively. Both electrons and C2H4+ are consumed by the dissociative electron-ion recombination reactions. The essential reaction pathways of losing CH4+ and C2H2+ are the charge transfer reactions. 15.
Reactions of charged and neutral recoil particles following nuclear transformations. Progress report No. 11, September 1976--August 1977 International Nuclear Information System (INIS) Ache, H.J. 1977-09-01 The status of the following programs is reported: study of the stereochemistry of halogen atom reactions produced via (n,γ) nuclear reactions with diastereomeric molecules in the condensed phase; decay-induced labelling of compounds of biochemical interest; reactions of energetic tritium species in graphite; and positron lifetime measurements in γ-irradiated organic solids 16. Ionizing radiation induced attachment reactions of nucleic acids and their components International Nuclear Information System (INIS) Myers, L.S. Jr. 1975-01-01 An extensive bibliographic review is given of experimental and theoretical data on radiation-induced attachment reactions of nucleic acids and their components. Mechanisms of these reactions are reviewed. The reactions with water, formate, and alcohols, with amines and other small molecules, and with radiation sensitizers and nucleic acid-nucleic acid reactions are discussed. Studies of the reaction mechanisms show that many of the reactions occur by radical-molecule reactions, but radical-radical reactions also occur. Radiation modifiers become attached to nucleic acids in vitro and in vivo and there are indications that attachment may be necessary for the action of some sensitizers. (U.S.) 17. Light particle emission measurements in heavy ion reactions: Progress report, June 1, 1988--May 31, 1989 International Nuclear Information System (INIS) Petitt, G.A. 1989-01-01 We have completed another successful year of experimental work at the Heavy Ion Research Facility (HHIRF) and at Georgia State University (GSU). Since submitting our previous progress report we have completed our paper on neutron emission from products of the reaction 58 Ni + 165 Ho and it has been submitted to Physical Review C. Some of the details of these results are discussed below. We have installed the Vaxstation computer system for which we received supplemental funding from DOE during 1988-89 and it is being used to analyze the Ni + Ho data using the codes Pace and a modified version of Lilita, both of which we have been able to transfer to our Vaxstation systems from the Vax at ORNL with very minimal modification. The Exabyte tape drive which we ordered with the computer system was finally delivered at the end of January after months of delays. It is now being used to scan data tapes from our experiment to study neutron-neutron and neutron-charged-particle momentum correlations using the reaction 32 S + 197 Au at 25 MeV/nucleon. This data analysis can now proceed at a fast pace. Finally, we have continued our developmental work on the Hili detector system at ORNL, and have participated in experiments to study the predictions of the Dyabatic Dynamics model of particle emission using the Ni + Ni system and the HILI detector system 18. Development of automatic nuclear emulsion plate analysis system and its application to elementary particle reactions, 2 International Nuclear Information System (INIS) Ushida, Noriyuki; Otani, Masashi; Kumazaki, Noriyasu 1984-01-01 This system is composed of precise coordinate measuring apparatuses, a stage controller and various peripherals, employing NOVA 4/C as the host computer. The analyzed results are given as the output to a printer or an XY plotter. 
The data required for the experiment, sent from Nagoya University and others, are received by the host computer through an acoustic coupler, and stored on floppy disks. This paper contains a simple explanation of the monitor for the events which occur immediately after the on-line measurement ''MTF 1'', the XY plotter and the acoustic coupler, which hold an important position in the system in spite of low cost, due to the development of useful programs, as those were not described in the previous paper. The three-dimensional reconstruction of tracks and various errors, corrective processing and analytical processing after corrective processing as off-line processing are also described. In addition, the application of the system was made to the E-531 neutrino experiment in Fermi National Accelerator Laboratory, which attempted to measure the life of the charm particles generated in neutrino reactions with composite equipment composed of nuclear plates and various counters. First, the outline of the equipment, next, the location of neutrino reaction and the surveillance of charm particle decay using the MTF program as the analyzing method at the target, and thirdly, the emulsion-counter data fitting are explained, respectively. (Wakatsuki, Y.) 19. Light charged particle production induced by fast neutrons (En=25-65 MeV) on 209Bi International Nuclear Information System (INIS) Raeymackers, Erwin; Slypen, Isabelle; Benck, Sylvie; Meulders, Jean-Pierre; Nica, Ninel; Corcalciuc, Valentin 2002-01-01 This paper presents the experimental set-up and data reduction procedures regarding the measurement of double-differential cross sections for light charged particle production in fast neutron induced reactions (n, px), (n, dx), (n, tx) and (n, αx) on bismuth in the incident neutron energy range 25-65 MeV and at laboratory angles from 20deg to 160deg. Preliminary double-differential and energy-differential cross sections for hydrogen isotopes are presented. (author) 20. Development of a system of measuring double-differential cross sections for proton-induced reactions Energy Technology Data Exchange (ETDEWEB) Harada, M.; Watanabe, Y.; Sato, K. [Kyushu Univ., Fukuoka (Japan); Meigo, S. 1997-03-01 We report the present status of a counter telescope and a data acquisition system which are being developed for the measurement of double-differential cross sections of all light-charged particles emitted from proton-induced reactions on 12 C at incident energies less than 90 MeV. The counter telescope consists of an active collimator made of a plastic scintillator, two thin silicon ΔE-detectors and a CsI(Tl) E-detector with photo-diode readout. Signals from each detector are processed using the data acquisition system consisting of the front-end electronics (CAMAC) and two computers connected with the Ethernet LAN: a personal computer as the data collector and server, and a UNIX workstation as the monitor and analyzer. (author) 1. Improved Simulation of the Pre-equilibrium Triton Emission in Nuclear Reactions Induced by Nucleons Science.gov (United States) Konobeyev, A. Yu.; Fischer, U.; Pereslavtsev, P. E.; Blann, M. 2014-04-01 A new approach is proposed for the calculation of non-equilibrium triton energy distributions in nuclear reactions induced by nucleons of intermediate energies. It combines models describing the nucleon pick-up, the coalescence and the triton knock-out processes. Emission and absorption rates for excited particles are represented by the pre-equilibrium hybrid model.
The model of Sato, Iwamoto, Harada is used to describe the nucleon pick-up and the coalescence of nucleons from exciton configurations starting from (2p,1h) states. The contribution of the direct nucleon pick-up is described phenomenologically. Multiple pre-equilibrium emission of tritons is accounted for. The calculated triton energy distributions are compared with available experimental data. 2. Energetic particle induced desorption of water vapor cryo-condensate International Nuclear Information System (INIS) Menon, M.M.; Owen, L.W.; Simpkins, J.E.; Uckan, T.; Mioduszewski, P.K. 1990-01-01 An in-vessel cryo-condensation pump is being designed for the Advanced Divertor configuration of the DIII-D tokamak. To assess the importance of possible desorption of water vapor from the cryogenic surfaces of the pump due to impingement of energetic particles from the plasma, a 77 K surface on which a thin layer of water vapor was condensed was exposed to a tenuous plasma (density = 2 x 10 10 cm -3 , electron temperature = 3 eV). Significant desorption of the condensate occurred, suggesting that impingement of energetic particles (10 eV) at flux levels of ∼10 16 cm 2 s -1 on cryogenic surfaces could potentially induce impurity problems in the tokamak plasma. A pumping configuration is presented in which this problem is minimized without sacrificing the pumping speed. 3. Thermodynamics of gravitationally induced particle creation scenario in DGP braneworld Energy Technology Data Exchange (ETDEWEB) Jawad, Abdul; Rani, Shamaila; Rafique, Salman [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan) 2018-01-15 In this paper, we discuss the thermodynamical analysis for gravitationally induced particle creation scenario in the framework of DGP braneworld model. For this purpose, we consider apparent horizon as the boundary of the universe. We take three types of entropy such as Bekenstein entropy, logarithmic corrected entropy and power law corrected entropy with ordinary creation rate Γ. We analyze the first law and generalized second law of thermodynamics analytically for these entropies which hold under some constraints. The behavior of total entropy in each case is also discussed which implies the validity of generalized second law of thermodynamics. Also, we check the thermodynamical equilibrium condition for two phases of creation rate, that is constant and variable Γ and found its vitality in all cases of entropy. (orig.) 4. Thermodynamics of gravitationally induced particle creation scenario in DGP braneworld International Nuclear Information System (INIS) Jawad, Abdul; Rani, Shamaila; Rafique, Salman 2018-01-01 In this paper, we discuss the thermodynamical analysis for gravitationally induced particle creation scenario in the framework of DGP braneworld model. For this purpose, we consider apparent horizon as the boundary of the universe. We take three types of entropy such as Bekenstein entropy, logarithmic corrected entropy and power law corrected entropy with ordinary creation rate Γ. We analyze the first law and generalized second law of thermodynamics analytically for these entropies which hold under some constraints. The behavior of total entropy in each case is also discussed which implies the validity of generalized second law of thermodynamics. Also, we check the thermodynamical equilibrium condition for two phases of creation rate, that is constant and variable Γ and found its vitality in all cases of entropy. (orig.) 5.
Antitumor bystander effect induced by radiation-inducible target gene therapy combined with α particle irradiation International Nuclear Information System (INIS) Liu Hui; Jin Chufeng; Wu Yican; Ge Shenfang; Wu Lijun; FDS Team 2012-01-01 In this work, we investigated the bystander effect of the tumor and normal cells surrounding the target region caused by radiation-inducible target gene therapy combined with α-particle irradiation. The receptor tumor cell A549 and normal cell MRC-5 were co-cultured with the donor cells irradiated to 0.5 Gy or the non-irradiated donor cells, and their survival and apoptosis fractions were evaluated. The results showed that the combined treatment of Ad-ET and particle irradiation could induce a synergistic antitumor effect on the A549 tumor cell, and the survival fraction of receptor cells co-cultured with the irradiated cells decreased by 6%, compared with receptor cells co-cultured with non-irradiated cells, and the apoptosis fraction increased in the same circumstance, but no difference was observed with the normal cells. This study demonstrates that Ad-ET combined with α-particle irradiation can significantly cause the bystander effect on neighboring tumor cells by inhibiting cell growth and inducing apoptosis, without obvious toxicity to normal cells. This suggests that combining radiation-inducible TRAIL gene therapy and irradiation may improve tumor treatment efficacy by specifically targeting tumor cells and even involving the neighboring tumor cells. (authors) 6. [Reaction mechanism studies of heavy ion induced nuclear reactions]: Annual progress report, October 1987 International Nuclear Information System (INIS) Mignerey, A.C. 1987-10-01 The experiments which this group has been working on seek to define the reaction mechanisms responsible for complex fragment emission in heavy ion reactions. The reactions studied are La + La, La + Al, and La + Cu at 46.8 MeV/u; and Ne + Ag and Ne + Au reactions at 250 MeV/u. Another experimental program at the Oak Ridge Holifield Heavy Ion Research Facility (HHIRF) is designed to measure the excitation energy division between reaction products in asymmetric deep inelastic reactions. A brief description is given of progress to date, the scientific goals of this experiment and the plastic phoswich detectors developed for this experiment. 7. Thermodynamic implications of the gravitationally induced particle creation scenario Energy Technology Data Exchange (ETDEWEB) Saha, Subhajit [Indian Institute of Science Education and Research Kolkata, Department of Physical Sciences, Mohanpur, West Bengal (India); Mondal, Anindita [S N Bose National Centre for Basic Sciences, Department of Astrophysics and Cosmology, Kolkata, West Bengal (India) 2017-03-15 A rigorous thermodynamic analysis has been done as regards the apparent horizon of a spatially flat Friedmann-Lemaitre-Robertson-Walker universe for the gravitationally induced particle creation scenario with constant specific entropy and an arbitrary particle creation rate Γ. Assuming a perfect fluid equation of state p = (γ - 1)ρ with (2)/(3) ≤ γ ≤ 2, the first law, the generalized second law (GSL), and thermodynamic equilibrium have been studied, and an expression for the total entropy (i.e., horizon entropy plus fluid entropy) has been obtained which does not contain Γ explicitly. Moreover, a lower bound for the fluid temperature T f has also been found which is given by T f ≥ 8 (((3γ)/(2)-1)/((2)/(γ)-1)) H 2.
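For readability, the fluid-temperature bound quoted at the end of the abstract above can be transcribed into display form (T_f is the fluid temperature, H the Hubble parameter and γ the equation-of-state parameter; this is only a rendering of the stated result, not an independent derivation):

\[
T_f \;\ge\; 8\,\frac{\tfrac{3\gamma}{2}-1}{\tfrac{2}{\gamma}-1}\,H^{2}.
\]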
It has been shown that the GSL is satisfied for (Γ)/(3H) ≤ 1. Further, when Γ is constant, thermodynamic equilibrium is always possible for (1)/(2) < (Γ)/(3H) < 1, while for (Γ)/(3H) ≤ min {(1)/(2), (2γ-2)/(3γ-2)} and (Γ)/(3H) ≥ 1, equilibrium can never be attained. Thermodynamic arguments also lead us to believe that during the radiation phase, Γ ≤ H. When Γ is not a constant, thermodynamic equilibrium holds if H ≥ (27)/(4) γ{sup 2}H{sup 3} (1-(Γ)/(3H)){sup 2}, however, such a condition is by no means necessary for the attainment of equilibrium. (orig.) 8. Light-induced electronic non-equilibrium in plasmonic particles. Science.gov (United States) Kornbluth, Mordechai; Nitzan, Abraham; Seideman, Tamar 2013-05-07 We consider the transient non-equilibrium electronic distribution that is created in a metal nanoparticle upon plasmon excitation. Following light absorption, the created plasmons decohere within a few femtoseconds, producing uncorrelated electron-hole pairs. The corresponding non-thermal electronic distribution evolves in response to the photo-exciting pulse and to subsequent relaxation processes. First, on the femtosecond timescale, the electronic subsystem relaxes to a Fermi-Dirac distribution characterized by an electronic temperature. Next, within picoseconds, thermalization with the underlying lattice phonons leads to a hot particle in internal equilibrium that subsequently equilibrates with the environment. Here we focus on the early stage of this multistep relaxation process, and on the properties of the ensuing non-equilibrium electronic distribution. We consider the form of this distribution as derived from the balance between the optical absorption and the subsequent relaxation processes, and discuss its implication for (a) heating of illuminated plasmonic particles, (b) the possibility to optically induce current in junctions, and (c) the prospect for experimental observation of such light-driven transport phenomena. 9. Thermodynamic implications of the gravitationally induced particle creation scenario International Nuclear Information System (INIS) Saha, Subhajit; Mondal, Anindita 2017-01-01 A rigorous thermodynamic analysis has been done as regards the apparent horizon of a spatially flat Friedmann-Lemaitre-Robertson-Walker universe for the gravitationally induced particle creation scenario with constant specific entropy and an arbitrary particle creation rate Γ. Assuming a perfect fluid equation of state p = (γ - 1)ρ with (2)/(3) ≤ γ ≤ 2, the first law, the generalized second law (GSL), and thermodynamic equilibrium have been studied, and an expression for the total entropy (i.e., horizon entropy plus fluid entropy) has been obtained which does not contain Γ explicitly. Moreover, a lower bound for the fluid temperature T f has also been found which is given by T f ≥ 8 (((3γ)/(2)-1)/((2)/(γ)-1)) H 2 . It has been shown that the GSL is satisfied for (Γ)/(3H) ≤ 1. Further, when Γ is constant, thermodynamic equilibrium is always possible for (1)/(2) < (Γ)/(3H) < 1, while for (Γ)/(3H) ≤ min {(1)/(2), (2γ-2)/(3γ-2)} and (Γ)/(3H) ≥ 1, equilibrium can never be attained. Thermodynamic arguments also lead us to believe that during the radiation phase, Γ ≤ H. When Γ is not a constant, thermodynamic equilibrium holds if H ≥ (27)/(4) γ 2 H 3 (1-(Γ)/(3H)) 2 , however, such a condition is by no means necessary for the attainment of equilibrium. (orig.) 10. 
Baryon resonances in pion- and photon-induced hadronic reactions International Nuclear Information System (INIS) Roenchen, Deborah 2014-01-01 The aim of the present work is the analysis of the baryon spectrum in the medium-energy regime. At those energies, a perturbative treatment of Quantum Chromodynamics, that is feasible in the high-energy regime, is not possible. Chiral perturbation theory, the low-energy effective theory of the strong interaction, is limited to the lowest excited states and does not allow to analyze the complete resonance region. For the latter purpose, dynamical coupled-channel approaches provide an especially suited framework. In the present study, we apply the Juelich model, a dynamical coupled-channel model developed over the years, to analyze pion- and photon-induced hadronic reactions in a combined approach. In the Juelich model, the interaction of the mesons and baryons is built of t- and u-channel exchange diagrams based on an effective Lagrangian. Genuine resonances are included as s-channel states. The scattering potential is unitarized in a Lippmann-Schwinger-type equation. Analyticity is preserved, which is a prerequisite for a reliable extraction of resonance parameters in terms of pole positions and residues in the complex energy plane. Upon giving an introduction to the subject in Chap. 1 and showing selected results in Chap. 2, we will describe the simultaneous analysis of elastic πN scattering and the reactions π - p → ηn, K 0 Λ, K + Σ - , K 0 Σ 0 and π + p→K + Σ + within the Juelich framework in Chap. 3. The free parameters of the model are adjusted to the GWU/SAID analysis of elastic πN scattering and, in case of the inelastic reactions, to experimental data. Partial waves up to J=9/2 are included and we consider the world data set from threshold up to E∝2.3 GeV. We show our fit results compared to differential and total cross sections, to polarizations and to measurements of the spin-rotation parameter. Finally, we present the results of a pole search in the complex energy plane of the scattering amplitude and discuss the extracted resonance 11. Nucleon-induced reactions at intermediate energies: new data at 96 MeV and theoretical status Energy Technology Data Exchange (ETDEWEB) Blideanu, V.; Lecolley, F.R.; Lecolley, J.F.; Lefort, T.; Marie, N.; Ban, G.; Louvel, M. [Caen Univ., Lab. de Physique Corpusculaire, ENSICAEN, IN2P3-CNRS ISMRA, 14 (France); Atac, A.; Bergenwall, B.; Blomgren, J.; Dangtip, S.; Hildebrand, A.; Hohansson, C.; Klug, J.; Nilsson, L.; Ollson, N.; Pomp, S.; Tippawan, U.; Osterlund, M. [Uppsala Univ., Nykoeping (Sweden). Dept. of Neutron Research; Tippawan, U. [Chiang Mai University, Fast Neutron Research Facility (Thailand); Elmgren, K.; Olsson, N. [Swedish Defense Research Agency, Stokholm (Sweden); Eudes, Ph.; Guertin, A.; Haddad, F.; Kirchner, T.; Lebrun, C.; Riviere, G. [Nantes Univ., Subatech, 44 (France); Foucher, Y. [CEA Saclay, Dept. d' Astrophysique, de Physique des Particules de Physique Nucleaire et de l' Instrumentation Associee, 91- Gif sur Yvette (France); Jonsson, O.; Prokofiev, A.V.; Renberg, P.U. [Uppsala Univ., Svedberg Laboratory (Sweden); Kerveno, M.; Stuttge, L. [IReS, Strasbourg (France); Le Brun, Ch. [Laboratoire de Physique Subatomique et de Cosmologie, 38 - Grenoble (France); Nadel-Turonski, P. [Uppsala Univ. (Sweden). Dept. of Radiation Sciences; Slypen, I. 
[Universite Catholique de Louvain (UCL), Institut de Physique Nucleaire, Louvain-la-Neuve (Belgium) 2004-04-01 Double-differential cross sections for light charged particle production (up to A = 4) were measured in 96 MeV neutron-induced reactions, at TSL laboratory cyclotron in Uppsala (Sweden). Measurements for three targets, Fe, Pb, and U, were performed using two independent devices, SCANDAL and MEDLEY. The data were recorded with low energy thresholds and for a wide annular range (20 - 160 degrees). The normalization procedure used to extract the cross sections is based on the np elastic scattering reaction that we measured and for which we present experimental results. A good control of the systematic uncertainties affecting the results is achieved. Calculations using the exciton model are reported. Two different theoretical approaches proposed to improve its predictive power regarding the complex particle emission are tested. The capabilities of each approach is illustrated by comparison with the 96 MeV data that we measured, and with other experimental results available in the literature. (authors) 12. Pre-recombination quenching of the radiation induced fluorescence as the approach to study kinetics of ion-molecular reactions International Nuclear Information System (INIS) Borovkov, V.I.; Ivanishko, I.S. 2011-01-01 This study deals with the geminate ion recombination in the presence of bulk scavengers, that is the so-called scavenger problem, as well as with the effect of the scavenging reaction on the radiation-induced recombination fluorescence. have proposed a method to determine the rate constant of the bulk reaction between neutral scavengers and one of the geminate ions if the ion-molecular reaction prevented the formation of electronically excited states upon recombination involving a newly formed ion. If such pre-recombination quenching of the radiation-induced fluorescence took place, it manifested itself as a progressive decrease in the decay of the fluorescence intensity. The relative change in the fluorescence decay as caused by the scavengers was believed to be closely related to the kinetics of the scavenging reaction. The goal of the present study is to support this method, both computationally and experimentally because there are two factors, which cast doubt on the intuitively obvious approach to the scavenger problem: spatial correlations between the particles involved and the drift of the charged reagent in the electric field of its geminate partner. Computer simulation of geminate ions recombination with an explicit modeling of the motion trajectories of scavengers has been performed for media of low dielectric permittivity, i.e. for the maximal Coulomb interaction between the ions. The simulation has shown that upon continuous diffusion of the particles involved, the joint effect of the two above factors can be considered as insignificant with a high accuracy. Besides, it is concluded then that the method of pre-recombination quenching could be applied to study parallel and consecutive reactions where the yields of excited states in the reaction pathways are different with the use of very simple analytical relations of the formal chemical kinetics. The conclusion has been confirmed experimentally by the example of the reactions of electron transfer from 13. Developments in the phenomenology of two-to-three particle reactions International Nuclear Information System (INIS) Berger, E.L. 
1975-07-01 Recent progress in understanding data on two-to-three particle hadron reactions is described. The use of an s-channel azimuthal angle selection is advocated to identify and separate different exchange mechanisms which contribute to the same final state at low subenergy. Solutions to the neutral Q cross-over problem in the Deck model are discussed, and experimental tests are proposed. Methods are offered for enhancing resonance signals in the presence of a large Deck exchange background. The need for absorptive corrections to the usual Deck model is stressed in the light of new FNAL data on diffractive neutron dissociation and ISR data on proton dissociation. Results of an explicit absorbed pion exchange Deck model calculation are compared with data. (U.S.) 14. Diffusion-limited reactions of hard-core particles in one dimension Science.gov (United States) Bares, P.-A.; Mobilia, M. 1999-02-01 We investigate three different methods to tackle the problem of diffusion-limited reactions (annihilation) of hard-core classical particles in one dimension. We first extend an approach devised by Lushnikov [Sov. Phys. JETP 64, 811 (1986)] and calculate for a single species the asymptotic long-time and/or large-distance behavior of the two-point correlation function. Based on a work by Grynberg and Stinchcombe [Phys. Rev. E 50, 957 (1994); Phys. Rev. Lett. 74, 1242 (1995); 76, 851 (1996)], which was developed to treat stochastic adsorption-desorption models, we provide in a second step the exact two-point (one- and two-time) correlation functions of Lushnikov's model. We then propose a formulation of the problem in terms of path integrals for pseudo-fermions. This formalism can be used to advantage in the multispecies case, especially when applying perturbative renormalization group techniques. 15. Hadron correlations in nuclear reactions with production of cumulative protons induced by π- mesons with momentum of 6.0 GeV/c International Nuclear Information System (INIS) Bayukov, Yu.D.; Vlasov, A.V.; Gavrilov, V.B. 1981-01-01 Hadronic correlations were investigated in reactions with the proton backward production induced by 6.0-GeV/c π - mesons on nuclei Be, C, Al, Cu, Cd, Pb, U. The studied correlations indicate an essential role of multiple interactions of the incident particle in production of cumulative protons. 16. The factor that determines photo-induced crystalline-state reaction International Nuclear Information System (INIS) Takenaka, Y. 1995-01-01 The photo-induced crystalline-state reaction of cobaloxime complexes was investigated by the X-ray diffraction method. The reactivity or the reaction rate is dependent only on the volume of the reaction cavity. The hydrogen bond formation of the reactive group and the difference of the base ligand have no effect. (author) 17. Small angle particle-particle correlation measurements in the reactions 280 MeV 40Ar+27Al and 670 MeV 55Mn+12C International Nuclear Information System (INIS) Milosevich, Zoran; Vardaci, Emanuele; DeYoung, Paul A.; Brown, Craig M.; Kaplan, Morton; Whitfield, James P.; Peterson, Donald; Dykstra, Christopher; Barton, Matthew; Karol, Paul J.; McMahan, Margaret A. 2001-01-01 Small-angle particle-particle correlations were measured in the two matching reactions 280 MeV 40 Ar+ 27 Al and 670 MeV 55 Mn+ 12 C. These two reactions were used to produce the composite nucleus, 67 Ga*, at the same initial excitation energy of 127 MeV, but with different entrance channel angular momentum distributions.
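As a rough cross-check of the "matching reactions" statement above, the compound-nucleus excitation energy can be estimated non-relativistically as E* = E_lab·A_target/(A_proj + A_target) + Q_fusion. The sketch below (with approximate mass excesses recalled from standard mass tables rather than taken from the abstract) reproduces the quoted ≈127 MeV for both entrance channels:

```python
# Non-relativistic estimate of the excitation energy of the 67Ga* composite
# formed in 280 MeV 40Ar+27Al and 670 MeV 55Mn+12C.
# Mass excesses (MeV) are approximate literature values, quoted for illustration only.
MASS_EXCESS = {"40Ar": -35.0, "27Al": -17.2, "55Mn": -57.7, "12C": 0.0, "67Ga": -66.9}

def excitation_energy(e_lab, proj, a_proj, targ, a_targ, compound="67Ga"):
    e_cm = e_lab * a_targ / (a_proj + a_targ)  # beam energy available in the c.m. frame
    q_fus = MASS_EXCESS[proj] + MASS_EXCESS[targ] - MASS_EXCESS[compound]  # fusion Q-value
    return e_cm + q_fus

print(excitation_energy(280.0, "40Ar", 40, "27Al", 27))  # ~127 MeV
print(excitation_energy(670.0, "55Mn", 55, "12C", 12))   # ~129 MeV
```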
A simple trajectory model was used to compute the average emission times between various particle pairs, and comparisons with the data show that there is a significant difference in the deexcitation of the composite nucleus formed from the two reactions. Statistical model calculations were compared to the experimental observations with the added constraint that the model input parameters were consistent with those derived from observed charged-particle energy spectra and angular distributions. It was found that the calculated correlation functions were insensitive to the input spin distributions, but agreed fairly well with the data from the lower-spin system. The higher-spin reaction data were poorly reproduced by the calculations 18. Theoretical approach to study the light particles induced production routes of 22Na International Nuclear Information System (INIS) Eslami, M.; Kakavand, T.; Mirzaii, M. 2015-01-01 Highlights: • Excitation function of 22 Na via thirty-three various reactions. • Various theoretical frameworks along with adjustments are employed in the calculations. • The results are given at energy range from the threshold up to 100 MeV. • The results are compared with each other and corresponding experimental data. - Abstract: To create a roadmap for the industrial-scale production of sodium-22, various production routes of this radioisotope involving light charged-particle-induced reactions at the bombarding energy range of threshold to a maximum of 100 MeV have been calculated. The excitation functions are calculated by using various nuclear models. Reaction pre-equilibrium process calculations have been made in the framework of the hybrid and geometry dependent hybrid models using ALICE/ASH code, and in the framework of the exciton model using TALYS-1.4 code. To calculate the compound nucleus evaporation process, both Weisskopf–Ewing and Hauser–Feshbach theories have been employed. The cross sections have also separately been estimated with five different level density models at the whole projectile energies. A comparison with calculations based on the codes, on one hand, and experimental data, on the other hand, is arranged and discussed 19. Persistent Skin Reactions and Aluminium Hypersensitivity Induced by Childhood Vaccines. Science.gov (United States) Salik, Elaha; Løvik, Ida; Andersen, Klaus E; Bygum, Anette 2016-11-02 There is increasing awareness of reactions to vaccination that include persistent skin reactions. We present here a retrospective investigation of long-lasting skin reactions and aluminium hypersensitivity in children, based on medical records and questionnaires sent to the parents. In the 10-year period 2003 to 2013 we identified 47 children with persistent skin reactions caused by childhood vaccinations. Most patients had a typical presentation of persisting pruritic subcutaneous nodules. Five children had a complex diagnostic process involving paediatricians, orthopaedics and plastic surgeons. Two patients had skin biopsies performed from their skin lesions, and 2 patients had the nodules surgically removed. Forty-two children had a patch-test performed with 2% aluminium chloride hexahydrate in petrolatum and 39 of them (92%) had a positive reaction. The persistent skin reactions were treated with potent topical corticosteroids and disappeared slowly. Although we advised families to continue vaccination of their children, one-third of parents omitted or postponed further vaccinations. 20. 
Experimental and Theoretical Investigation of Shock-Induced Reactions in Energetic Materials Energy Technology Data Exchange (ETDEWEB) Kay, Jeffrey J. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Park, Samuel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kohl, Ian Thomas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Knepper, Robert [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Farrow, Darcie [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Tappan, Alexander S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States) 2017-09-01 In this work, shock-induced reactions in high explosives and their chemical mechanisms were investigated using state-of-the-art experimental and theoretical techniques. Experimentally, ultrafast shock interrogation (USI, an ultrafast interferometry technique) and ultrafast absorption spectroscopy were used to interrogate shock compression and initiation of reaction on the picosecond timescale. The experiments yielded important new data that appear to indicate reaction of high explosives on the timescale of tens of picoseconds in response to shock compression, potentially setting new upper limits on the timescale of reaction. Theoretically, chemical mechanisms of shock-induced reactions were investigated using density functional theory. The calculations generated important insights regarding the ability of several hypothesized mechanisms to account for shock-induced reactions in explosive materials. The results of this work constitute significant advances in our understanding of the fundamental chemical reaction mechanisms that control explosive sensitivity and initiation of detonation. 1. A disposable electrochemical immunosensor for prolactin involving affinity reaction on streptavidin-functionalized magnetic particles International Nuclear Information System (INIS) Moreno-Guzman, Maria; Gonzalez-Cortes, Araceli; Yanez-Sedeno, Paloma; Pingarron, Jose M. 2011-01-01 A novel electrochemical immunosensor was developed for the determination of the hormone prolactin. The design involved the use of screen-printed carbon electrodes and streptavidin-functionalized magnetic particles. Biotinylated anti-prolactin antibodies were immobilized onto the functionalized magnetic particles and a sandwich-type immunoassay involving prolactin and anti-prolactin antibody labelled with alkaline phosphatase was employed. The resulting bio-conjugate was trapped on the surface of the screen-printed electrode with a small magnet and prolactin quantification was accomplished by differential pulse voltammetry of 1-naphtol formed in the enzyme reaction using 1-naphtyl phosphate as alkaline phosphatase substrate. All variables involved in the preparation of the immunosensor and in the electrochemical detection step were optimized. The calibration plot for prolactin exhibited a linear range between 10 and 2000 ng mL -1 with a slope value of 7.0 nA mL ng -1 . The limit of detection was 3.74 ng mL -1 . Furthermore, the modified magnetic beads-antiprolactin conjugates showed an excellent stability. The immunosensor exhibited also a high selectivity with respect to other hormones. The analytical usefulness of the immnunosensor was demonstrated by analyzing human sera spiked with prolactin at three different concentration levels. 2. 
A disposable electrochemical immunosensor for prolactin involving affinity reaction on streptavidin-functionalized magnetic particles Energy Technology Data Exchange (ETDEWEB) Moreno-Guzman, Maria; Gonzalez-Cortes, Araceli [Department of Analytical Chemistry, Faculty of Chemistry, University Computense of Madrid, 28040 Madrid (Spain); Yanez-Sedeno, Paloma, E-mail: [email protected] [Department of Analytical Chemistry, Faculty of Chemistry, University Computense of Madrid, 28040 Madrid (Spain); Pingarron, Jose M. [Department of Analytical Chemistry, Faculty of Chemistry, University Computense of Madrid, 28040 Madrid (Spain) 2011-04-29 A novel electrochemical immunosensor was developed for the determination of the hormone prolactin. The design involved the use of screen-printed carbon electrodes and streptavidin-functionalized magnetic particles. Biotinylated anti-prolactin antibodies were immobilized onto the functionalized magnetic particles and a sandwich-type immunoassay involving prolactin and anti-prolactin antibody labelled with alkaline phosphatase was employed. The resulting bio-conjugate was trapped on the surface of the screen-printed electrode with a small magnet and prolactin quantification was accomplished by differential pulse voltammetry of 1-naphtol formed in the enzyme reaction using 1-naphtyl phosphate as alkaline phosphatase substrate. All variables involved in the preparation of the immunosensor and in the electrochemical detection step were optimized. The calibration plot for prolactin exhibited a linear range between 10 and 2000 ng mL -1 with a slope value of 7.0 nA mL ng -1 . The limit of detection was 3.74 ng mL -1 . Furthermore, the modified magnetic beads-antiprolactin conjugates showed an excellent stability. The immunosensor also exhibited a high selectivity with respect to other hormones. The analytical usefulness of the immunosensor was demonstrated by analyzing human sera spiked with prolactin at three different concentration levels. 3. Single particle transfer reactions: what can they tell us about vibrational states International Nuclear Information System (INIS) Hering, W.R. 1975-01-01 The topic discussed concerns single particle transfer reactions (SPTR) which are, in general, used to study SP states. However, good SP states are rare objects in nature and people who try to look for them often have to settle for something less than ideal. Indeed the picture of a pure SP state is physically not even reasonable. It means that a nucleon is moving around a core nucleus which stays in its ground state: a process which one could call equivalent to elastic scattering of a nucleon which is not free but rather in a bound state. However it is shown that inelastic scattering is a very strong competitor to elastic scattering if the nucleus possesses states of high collectivity. Thus one would expect inelastic scattering to happen also while the nucleon is bound. This is a very intuitive picture of what is called the fragmentation of SP states. A final state ψ_B is populated by the transfer reaction A + a → B + b, where ψ_B = α_1 φ_1 φ_A(0) + α_2 φ_2 φ_A(λ). Hence the population of ψ_B automatically involves the collective state φ_A(λ). A discussion of how one can get information about φ_A(λ) out of the experimental data is given. (Auth.) 4. Reactions and single-particle structure of nuclei near the drip lines International Nuclear Information System (INIS) Hansen, P.G.; Sherrill, B.M.
2001-01-01 The techniques that have allowed the study of reactions of nuclei situated at or near the neutron or proton drip line are described. Nuclei situated just inside the drip line have low nucleon separation energies and, at most, a few bound states. If the angular momentum in addition is small, large halo states are formed where the wave function of the valency nucleon extends far beyond the nuclear radius. We begin with examples of the properties of nuclear halos and of their study in radioactive-beam experiments. We then turn to the continuum states existing above the particle threshold and also discuss the possibility of exciting them from the halo states in processes that may be thought of as 'collateral damage'. Finally, we show that the experience from studies of halo states has pointed to knockout reactions as a new way to perform spectroscopic studies of more deeply bound non-halo states. Examples are given of measurements of l values and spectroscopic factors 5. R-Matrix Codes for Charged-particle Induced Reactionsin the Resolved Resonance Region Energy Technology Data Exchange (ETDEWEB) Leeb, Helmut [Technical Univ. of Wien, Vienna (Austria); Dimitriou, Paraskevi [Intl Atomic Energy Agency (IAEA), Vienna (Austria); Thompson, Ian J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States) 2017-01-01 A Consultant’s Meeting was held at the IAEA Headquarters, from 5 to 7 December 2016, to discuss the status of R-matrix codes currently used in calculations of charged-particle induced reaction cross sections at low energies. The meeting was a follow-up to the R-matrix Codes meeting held in December 2015, and served the purpose of monitoring progress in: the development of a translation code to enable exchange of input/output parameters between the various codes in different formats, fitting procedures and treatment of uncertainties, the evaluation methodology, and finally dissemination. The details of the presentations and technical discussions, as well as additional actions that were proposed to achieve all the goals of the meeting are summarized in this report. 6. Persistent Skin Reactions and Aluminium Hypersensitivity Induced by Childhood Vaccines DEFF Research Database (Denmark) Salik, Elaha; Løvik, Ida; Andersen, Klaus E 2016-01-01 There is increasing awareness of reactions to vaccination that include persistent skin reactions. We present here a retrospective investigation of long-lasting skin reactions and aluminium hypersensitivity in children, based on medical records and questionnaires sent to the parents. In the 10-year...... period 2003 to 2013 we identified 47 children with persistent skin reactions caused by childhood vaccinations. Most patients had a typical presentation of persisting pruritic subcutaneous nodules. Five children had a complex diagnostic process involving paediatricians, orthopaedics and plastic surgeons...... treated with potent topical corticosteroids and disappeared slowly. Although we advised families to continue vaccination of their children, one-third of parents omitted or postponed further vaccinations.... 7. 
Thermodynamics inducing massive particles' tunneling and cosmic censorship International Nuclear Information System (INIS) Zhang, Baocheng; Cai, Qing-yu; Zhan, Ming-sheng 2010-01-01 By calculating the change of entropy, we prove that the first law of black hole thermodynamics leads to the tunneling probability of massive particles through the horizon, including the tunneling probability of massive charged particles from the Reissner-Nordstroem black hole and the Kerr-Newman black hole. Novelly, we find the trajectories of massive particles are close to that of massless particles near the horizon, although the trajectories of massive charged particles may be affected by electromagnetic forces. We show that Hawking radiation as massive particles tunneling does not lead to violation of the weak cosmic-censorship conjecture. (orig.) 8. Thermodynamics inducing massive particles' tunneling and cosmic censorship Energy Technology Data Exchange (ETDEWEB) Zhang, Baocheng [Chinese Academy of Sciences, State Key Laboratory of Magnetic Resonances and Atomic and Molecular Physics, Wuhan Institute of Physics and Mathematics, Wuhan (China); Graduate University of Chinese Academy of Sciences, Beijing (China); Cai, Qing-yu [Chinese Academy of Sciences, State Key Laboratory of Magnetic Resonances and Atomic and Molecular Physics, Wuhan Institute of Physics and Mathematics, Wuhan (China); Zhan, Ming-sheng [Chinese Academy of Sciences, State Key Laboratory of Magnetic Resonances and Atomic and Molecular Physics, Wuhan Institute of Physics and Mathematics, Wuhan (China); Chinese Academy of Sciences, Center for Cold Atom Physics, Wuhan (China) 2010-08-15 By calculating the change of entropy, we prove that the first law of black hole thermodynamics leads to the tunneling probability of massive particles through the horizon, including the tunneling probability of massive charged particles from the Reissner-Nordstroem black hole and the Kerr-Newman black hole. Novelly, we find the trajectories of massive particles are close to that of massless particles near the horizon, although the trajectories of massive charged particles may be affected by electromagnetic forces. We show that Hawking radiation as massive particles tunneling does not lead to violation of the weak cosmic-censorship conjecture. (orig.) 9. Abstract ID: 240 A probabilistic-based nuclear reaction model for Monte Carlo ion transport in particle therapy. Science.gov (United States) Maria Jose, Gonzalez Torres; Jürgen, Henniger 2018-01-01 In order to expand the Monte Carlo transport program AMOS to particle therapy applications, the ion module is being developed in the radiation physics group (ASP) at the TU Dresden. This module simulates the three main interactions of ions in matter for the therapy energy range: elastic scattering, inelastic collisions and nuclear reactions. The simulation of the elastic scattering is based on the Binary Collision Approximation and the inelastic collisions on the Bethe-Bloch theory. The nuclear reactions, which are the focus of the module, are implemented according to a probabilistic-based model developed in the group. The developed model uses probability density functions to sample the occurrence of a nuclear reaction given the initial energy of the projectile particle as well as the energy at which this reaction will take place. The particle is transported until the reaction energy is reached and then the nuclear reaction is simulated. This approach allows a fast evaluation of the nuclear reactions. 
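A minimal sketch of the sampling scheme described in the abstract above (the function names, the tabulated inverse CDF and the stopping-power callback are illustrative assumptions, not the actual AMOS ion-module interface):

```python
import random

def sample_reaction_energy(e_initial, inverse_cdf):
    """Pre-sample the projectile energy at which a nuclear reaction will occur,
    using an inverse CDF built from the reaction probability density for this
    initial energy. Returns None if no reaction occurs before the ion stops."""
    return inverse_cdf(e_initial, random.random())

def transport_ion(e_initial, stopping_power, inverse_cdf, step_length=0.01):
    """Slow the ion down continuously and hand over to the nuclear-reaction
    generator once the pre-sampled reaction energy is reached."""
    e_reaction = sample_reaction_energy(e_initial, inverse_cdf)
    energy, path = e_initial, 0.0
    while energy > 0.0:
        if e_reaction is not None and energy <= e_reaction:
            return "reaction", energy, path   # simulate the nuclear reaction at this point
        energy -= stopping_power(energy) * step_length  # continuous energy loss per step
        path += step_length
    return "stopped", 0.0, path
```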
The theory and application of the proposed model will be addressed in this presentation. The results of the simulation of a proton beam colliding with tissue will also be presented. Copyright © 2017. 10. Connections between lepton-induced and hadron-induced multiparticle reactions International Nuclear Information System (INIS) Brodsky, S.J.; Gunion, J.F. 1976-01-01 Jet production is studied as a link between hadron- and lepton-induced reactions and interpreted in terms of models of underlying quark dynamics. We discuss how fragmentation distributions, quantum number flow, the charged-momentum vector, and quark counting rules can discriminate among various possible jet structures. We also review recent work on the possible relationship of the rising hadron multiplicity to quark confinement and color gauge theories. A number of new tests of quark models in hadron, photon, and lepton collisions are discussed. (orig.) [de 11. ReaDDy--a software for particle-based reaction-diffusion dynamics in crowded cellular environments. Directory of Open Access Journals (Sweden) Johannes Schöneberg Full Text Available We introduce the software package ReaDDy for simulation of detailed spatiotemporal mechanisms of dynamical processes in the cell, based on reaction-diffusion dynamics with particle resolution. In contrast to other particle-based reaction kinetics programs, ReaDDy supports particle interaction potentials. This permits effects such as space exclusion, molecular crowding and aggregation to be modeled. The biomolecules simulated can be represented as a sphere, or as a more complex geometry such as a domain structure or polymer chain. ReaDDy bridges the gap between small-scale but highly detailed molecular dynamics or Brownian dynamics simulations and large-scale but little-detailed reaction kinetics simulations. ReaDDy has a modular design that enables the exchange of the computing core by efficient platform-specific implementations or dynamical models that are different from Brownian dynamics. 12. Table of nuclear reactions and subsequent radioactive dacays induced by 14-MeV neutrons International Nuclear Information System (INIS) Tsukada, Kineo 1977-09-01 Compilation of the data on nuclear reactions and subsequent radioactive decays induced by 14-MeV neutrons is presented in tabular form for most of the isotopes available in nature and for some of the artificially-produced isotopes, including the following items: Nuclide (isotopic abundance), type of nuclear reaction, reaction Q-value, reaction product, type of decay, decay Q-value, half-life of reaction product, decay product, maximum reaction cross section, neutron energy for maximum cross section, reaction cross section for 14 MeV neutrons, saturated radioactivity induced by irradiation of a neutron flux of 1 n/cm 2 sec for a mol of atoms, and reference for the cross section. The mass number dependence of (n, γ), (n, 2n), (n, p), (n, d), (n, t), (n, 3 He) and (n, α) reaction cross sections for 14-MeV neutrons is given in figures to show general trends of the cross sections 13. Optically induced rotation of Rayleigh particles by vortex beams with different states of polarization International Nuclear Information System (INIS) Li, Manman; Yan, Shaohui; Yao, Baoli; Liang, Yansheng; Lei, Ming; Yang, Yanlong 2016-01-01 Optical vortex beams carry optical orbital angular momentum (OAM) and can induce an orbital motion of trapped particles in optical trapping. 
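As standard background (not stated in the abstract itself): a vortex beam of topological charge ℓ carries an orbital angular momentum of

\[
L_z = \ell\hbar
\]

per photon, which is the quantity transferred to the trapped particle and responsible for the orbital motion discussed here.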
We show that the state of polarization (SOP) of vortex beams will affect the details of this optically induced orbital motion to some extent. Numerical results demonstrate that focusing the vortex beams with circular, radial or azimuthal polarizations can induce a uniform orbital motion on a trapped Rayleigh particle, while in the focal field of the vortex beam with linear polarization the particle experiences a non-uniform orbital motion. Among the formers, the vortex beam with circular polarization induces a maximum optical torque on the particle. Furthermore, by varying the topological charge of the vortex beams, the vortex beam with circular polarization gives rise to an optimum torque superior to those given by the other three vortex beams. These facts suggest that the circularly polarized vortex beam is more suitable for rotating particles. - Highlights: • States of polarization of vortex beams affect the optically induced orbital motion of particles. • The dependences of the force and orbital torque on the topological charge, the size and the absorptivity of particles were calculated. • Focused vortex beams with circular, radial or azimuthal polarizations induce a uniform orbital motion on particles. • Particles experience a non-uniform orbital motion in the focused linearly polarized vortex beam. • The circularly polarized vortex beam is a superior candidate for rotating particles. 14. Kinetics of the radiation-induced exchange reactions of H2, D2, and T2: a review International Nuclear Information System (INIS) Pyper, J.W.; Briggs, C.K. 1978-01-01 Mixtures of H 2 --T 2 or D 2 --T 2 will exchange to produce HT or DT due to catalysis by the tritium β particle. The kinetics of the reaction D 2 + T 2 = 2DT may play an important role in designing liquid or solid targets of D 2 --DT--T 2 for implosion fusion, and distillation schemes for tritium cleanup systems in fusion reactors. Accordingly, we have critically reviewed the literature for information on the kinetics and mechanism of radiation-induced self-exchange reactions among the hydrogens. We found data for the reaction H 2 + T 2 = 2HT in the gas phase and developed a scheme based on these data to predict the halftime to equilibrium for any gaseous H 2 + T 2 mixture at ambient temperature with an accuracy of +-10 percent. The overall order of the H 2 + T 2 = 2HT reaction is 1.6 based on an initial rate treatment of the data. The most probable mechanism for radiation-induced self-exchange reaction is an ion-molecule chain mechanism 15. Spectral Induced Polarization of Disseminated Pyrite Particles in Soil Science.gov (United States) Slater, L. D.; Kessouri, P.; Seleznev, N. V. 2017-12-01 Disseminated metallic particles in soil, particularly pyrite, occur naturally or are enhanced by anthropogenic activities. Detecting their presence and quantifying their concentration and location is of interest for numerous applications such as remediation of hydrocarbon contamination, mine tailings assessment, detection of oil traps, and archaeological studies. Because pyrite is a semiconductor, spectral induced polarization (SIP) is a promising geophysical method for sensing it in porous media. Previous studies have identified relations between pyrite properties (e.g., volumetric content, grain size) and SIP parameters (e.g., chargeability, relaxation time). However, the effect of pyrite grains in porous media on the SIP response is not fully understood over the entire low-frequency range. 
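For readers unfamiliar with the SIP parameters named in the preceding abstract (chargeability and relaxation time), spectra of this kind are commonly summarized with a Cole-Cole type relaxation model; the specific Pelton form quoted below is a standard choice in the induced-polarization literature and is our assumption here, not a model named by the authors:

\[
\rho(\omega) = \rho_0\left[\,1 - m\left(1 - \frac{1}{1 + (\mathrm{i}\omega\tau)^{c}}\right)\right],
\]

where \(\rho_0\) is the low-frequency resistivity, \(m\) the chargeability, \(\tau\) the relaxation time (the quantity related to grain size) and \(c\) the Cole-Cole exponent.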
We tested the relationship between the presence of pyrite grains and the change in electrical properties of the medium through an extended series of laboratory measurements: (1) variation of grain size, (2) variation of grain concentration, (3) variation of electrolyte conductivity, (4) change in the diffusion properties of the host medium. For the fourth set of measurements, we compared sand columns to agar gel columns. Our experimental design included more than 20 different samples with multiple repeats to ensure representative results. We confirm the strong relation between grain size and relaxation time and that between grain concentration and chargeability in both the sand and agar gel samples. Furthermore, our results shed light on the significance of the diffusion coefficient and the recently hypothesized role of pyrite grains as resistors at frequencies lower than the relaxation frequency. 16. Experimental particle acceleration by water evaporation induced by shock waves Science.gov (United States) Scolamacchia, T.; Alatorre Ibarguengoitia, M.; Scheu, B.; Dingwell, D. B.; Cimarelli, C. 2010-12-01 Shock waves are commonly generated during volcanic eruptions. They induce sudden changes in pressure and temperature causing phase changes. Nevertheless, their effects on flowfield properties are not well understood. Here we investigate the role of gas expansion generated by shock wave propagation in the acceleration of ash particles. We used a shock tube facility consisting of a high-pressure (HP) steel autoclave (450 mm long, 28 mm in internal diameter), pressurized with Ar gas, and a low-pressure tank at atmospheric conditions (LP). A copper diaphragm separated the HP autoclave from a 180 mm tube (PVC or acrylic glass) at ambient P, with the same internal diameter of the HP reservoir. Around the tube, a 30 cm-high acrylic glass cylinder, with the same section of the LP tank (40 cm), allowed the observation of the processes occurring downstream from the nozzle throat, and was large enough to act as an unconfined volume in which the initial diffracting shock and gas jet expand. All experiments were performed at Pres/Pamb ratios of 150:1. Two ambient conditions were used: dry air and air saturated with steam. Carbon fibers and glass spheres in a size range between 150 and 210 μm, were placed on a metal wire at the exit of the PVC tube. The sudden decompression of the Ar gas, due to the failure of the diaphragm, generated an initial air shock wave. A high-speed camera recorded the processes between the first 100 μsec and several ms after the diaphragm failure at frame rates ranging between 30,000 and 50,000 fps. In the experiments with ambient air saturated with steam, the high-speed camera allowed to visualize the condensation front associated with the initial air shock; a maximum velocity of 788 m/s was recorded, which decreases to 524 m/s at distance of 0.5 ±0.2 cm, 1.1 ms after the diaphragm rupture. The condensation front preceded the Ar jet front exhausting from the reservoir, by 0.2-0.5 ms. In all experiments particles velocities following the initial 17. Study of deep inelastic reactions on sd-shell nuclei with 100 MeV α-particles International Nuclear Information System (INIS) Seniwongse, G. 1985-04-01 Energy spectra and angular distributions of light particles (p, d, t, 3 He, α) were measured. As projectiles α-particles with the incident energy of 100 MeV were used. The measurement data result from an inclusive measurement of the reactions on 24 Mg, 25 Mg, 26 Mg, 27 Al, 28 Si. 
The double differential cross sections and the angular distributions were analyzed in the framework of the exciton-coalescence model. In this way model parameters such as the initial exciton number n0, the one-particle state density, and the coalescence radii were determined. From the model analysis it can be concluded that n0 = 5 describes the data best, in contrast to earlier results. The proton spectra can be explained by different one-particle state densities with pairing effects. The probability for the formation of complex particles seems to be independent of the structure of the target nuclei studied here. The calculated cross sections agree well with the measured values. This is valid both for the angle-integrated spectra and for the angular distributions. The agreement was especially good for the angle-integrated cross sections of the (α, p) reaction over the whole spectrum. For the complex particles the agreement was good for particle energies up to about 60 MeV, i.e. before contributions from breakup and direct reactions set in. These reactions are not included in the model. The measured and calculated angular distributions agree well for all types of particles at measurement angles below about 60°. At larger angles the calculated values are too large. The reasons for this are not yet clear. (orig.) [de 18. Experimental investigations concerning the three-particle reaction 10B(d,3α) at 360 keV International Nuclear Information System (INIS) Nocken, U. 1976-01-01 In this paper a complete energy angular correlation measurement of the three-particle reaction 10B(d,3α) with an incident energy of E_d = 360 keV is reported. By the measurement of coincidence events in two detectors coplanar with the incident beam under 24 different angles, the 'Dalitz plane' is covered in a wide region. An exact theory of three-particle reactions with compound nuclei in the final state does not exist; therefore three well-known model theories are used for comparison. (orig./WL) [de 19. Particle dispersion and mixing induced by breaking internal gravity waves Science.gov (United States) Bouruet-Aubertot, Pascale; Koudella, C.; Staquet, C.; Winters, K. B. 2001-01-01 The purpose of this paper is to analyze diapycnal mixing induced by the breaking of an internal gravity wave — the primary wave — either standing or propagating. To achieve this aim we apply two different methods. The first method consists of a direct estimate of vertical eddy diffusion from particle dispersion while the second method relies upon potential energy budgets [Winters, K.B., Lombard, P.N., Riley, J.J., D'Asaro, E.A., 1995. J. Fluid Mech. 289, 115-128; Winters, K.B., D'Asaro, E.A., 1996. J. Fluid Mech. 317, 179-193]. The primary wave we consider is of small amplitude and is statically stable, a case for which the breaking process involves two-dimensional instabilities. The dynamics of the waves have been previously analyzed by means of two-dimensional direct numerical simulations [Bouruet-Aubertot, P., Sommeria, J., Staquet, C., 1995. J. Fluid Mech. 285, 265-301; Bouruet-Aubertot, P., Sommeria, J., Staquet, C., 1996. Dyn. Atmos. Oceans 29, 41-63; Koudella, C., Staquet, C., 1998. In: Davis, P. (Ed.), Proceedings of the IMA Conference on Mixing and Dispersion on Stably-stratified Flows, Dundee, September 1996. IMA Publication]. High resolution three-dimensional calculations of the same wave are also reported here [Koudella, C., 1999].
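A minimal sketch of the first of the two methods mentioned above, estimating a vertical eddy diffusion coefficient from particle dispersion, can be written as follows. In the diffusive regime the mean squared vertical displacement of the particles grows as <z'^2> ≈ 2Kt, so K follows from the slope of the dispersion curve. The synthetic random-walk trajectories and all numerical values below are placeholders, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic random-walk trajectories standing in for tracked particle positions.
n_particles, n_steps, dt = 500, 200, 1.0     # placeholder values
true_K = 1.0e-4                              # m^2/s, used only to generate fake data

dz = rng.normal(0.0, np.sqrt(2.0 * true_K * dt), size=(n_particles, n_steps))
z = np.cumsum(dz, axis=1)                    # vertical displacement from release depth

t = dt * np.arange(1, n_steps + 1)
dispersion = np.mean(z**2, axis=0)           # <z'^2>(t), averaged over particles

# Fit only the late, diffusive part of the curve (the early part is shear dominated).
slope = np.polyfit(t[n_steps // 2:], dispersion[n_steps // 2:], 1)[0]
print(f"estimated K = {slope / 2.0:.2e} m^2/s (value used to generate the data: {true_K:.1e})")
```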
A local estimate of mixing is first inferred from the time evolution of sets of particles released in the flow during the breaking regime. We show that, after an early evolution dominated by shear effects, a diffusion law is reached and the dispersion coefficient is fairly independent of the initial seeding location of the particles in the flow. The eddy diffusion coefficient, K, is then estimated from the diapycnal diffusive flux. A good agreement with the value inferred from particle dispersion is obtained. This finding is of particular interest regarding the interpretation of in situ estimates of K inferred either from tracer dispersion or from microstructure measurements. Computation of the Cox number, equal to the 20. Measurement of double differential cross sections of charged particle emission reactions by incident DT neutrons. Correction for energy loss of charged particle in sample materials International Nuclear Information System (INIS) Takagi, Hiroyuki; Terada, Yasuaki; Murata, Isao; Takahashi, Akito 2000-01-01 In the measurement of charged particle emission spectrum induced by neutrons, correcting the energy loss of charged particle in sample materials becomes a very important inverse problem. To deal with this inverse problem, we have applied the Bayesian unfolding method to correct the energy loss, and tested the performance of the method. Although this method is very simple, it was confirmed from the test that the performance was not inferior to other methods at all, and therefore the method could be a powerful tool for charged particle spectrum measurement. (author) 1. Influence of radiation induced defect clusters on silicon particle detectors Energy Technology Data Exchange (ETDEWEB) Junkes, Alexandra 2011-10-15 The Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN) addresses some of today's most fundamental questions of particle physics, like the existence of the Higgs boson and supersymmetry. Two large general-purpose experiments (ATLAS, CMS) are installed to detect the products of high energy proton-proton and nucleon-nucleon collisions. Silicon detectors are largely employed in the innermost region, the tracking area of the experiments. The proven technology and large scale availability make them the favorite choice. Within the framework of the LHC upgrade to the high-luminosity LHC, the luminosity will be increased to L = 10^35 cm^-2 s^-1. In particular the pixel sensors in the innermost layers of the silicon trackers will be exposed to an extremely intense radiation field of mainly hadronic particles with fluences of up to Φ_eq = 10^16 cm^-2. The radiation induced bulk damage in silicon sensors will lead to a severe degradation of the performance during their operational time. This work focusses on the improvement of the radiation tolerance of silicon materials (Float Zone, Magnetic Czochralski, epitaxial silicon) based on the evaluation of radiation induced defects in the silicon lattice using the Deep Level Transient Spectroscopy and the Thermally Stimulated Current methods. It reveals the outstanding role of extended defects (clusters) on the degradation of sensor properties after hadron irradiation in contrast to previous works that treated effects as caused by point defects.
It has been found that two cluster-related defects are responsible for the main generation of leakage current: the E5 defect with a level in the band gap at E_C - 0.460 eV and E205a at E_C - 0.395 eV, where E_C is the energy of the edge of the conduction band. The E5 defect can be assigned to the tri-vacancy (V_3) defect. Furthermore, isochronal annealing experiments have shown that the V_3 defect 2. Influence of radiation induced defect clusters on silicon particle detectors International Nuclear Information System (INIS) Junkes, Alexandra 2011-10-01 The Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN) addresses some of today's most fundamental questions of particle physics, like the existence of the Higgs boson and supersymmetry. Two large general-purpose experiments (ATLAS, CMS) are installed to detect the products of high energy proton-proton and nucleon-nucleon collisions. Silicon detectors are largely employed in the innermost region, the tracking area of the experiments. The proven technology and large scale availability make them the favorite choice. Within the framework of the LHC upgrade to the high-luminosity LHC, the luminosity will be increased to L = 10^35 cm^-2 s^-1. In particular the pixel sensors in the innermost layers of the silicon trackers will be exposed to an extremely intense radiation field of mainly hadronic particles with fluences of up to Φ_eq = 10^16 cm^-2. The radiation induced bulk damage in silicon sensors will lead to a severe degradation of the performance during their operational time. This work focusses on the improvement of the radiation tolerance of silicon materials (Float Zone, Magnetic Czochralski, epitaxial silicon) based on the evaluation of radiation induced defects in the silicon lattice using the Deep Level Transient Spectroscopy and the Thermally Stimulated Current methods. It reveals the outstanding role of extended defects (clusters) on the degradation of sensor properties after hadron irradiation in contrast to previous works that treated effects as caused by point defects. It has been found that two cluster-related defects are responsible for the main generation of leakage current: the E5 defect with a level in the band gap at E_C - 0.460 eV and E205a at E_C - 0.395 eV, where E_C is the energy of the edge of the conduction band. The E5 defect can be assigned to the tri-vacancy (V_3) defect. Furthermore, isochronal annealing experiments have shown that the V_3 defect exhibits a bistability, as does the leakage current. In oxygen 3. Toxicogenomic analysis of the particle dose- and size-response relationship of silica particles-induced toxicity in mice International Nuclear Information System (INIS) Lu Xiaoyan; Jin Tingting; Jin Yachao; Wu Leihong; Hu Bin; Tian Yu; Fan Xiaohui 2013-01-01 This study investigated the relationship between particle size and toxicity of silica particles (SP) with diameters of 30, 70, and 300 nm, which is essential to the safe design and application of SP. Data obtained from histopathological examinations suggested that SP of these sizes can all induce acute inflammation in the liver. In vivo imaging showed that intravenously administered SP are mainly present in the liver, spleen and intestinal tract.
Interestingly, in gene expression analysis, the cellular response pathways activated in the liver are predominantly conserved independently of particle dose when the same size SP are administered or are conserved independently of particle size, surface area and particle number when nano- or submicro-sized SP are administered at their toxic doses. Meanwhile, integrated analysis of transcriptomics, previous metabonomics and conventional toxicological results support the view that SP can result in inflammatory and oxidative stress, generate mitochondrial dysfunction, and eventually cause hepatocyte necrosis by neutrophil-mediated liver injury. (paper) 4. Influence of Magnolol on the bystander effect induced by alpha-particle irradiation Energy Technology Data Exchange (ETDEWEB) Wong, T.P.W.; Law, Y.L. [Department of Physics and Materials Science, City University of Hong Kong, Tat Chee Avenue, Kowloon Tong (Hong Kong); Tse, A.K.W.; Fong, W.F. [Research and Development Division, School of Chinese Medicine, Hong Kong Baptist University, Baptist University Road, Kowloon Tong (Hong Kong); Yu, K.N. [Department of Physics and Materials Science, City University of Hong Kong, Tat Chee Avenue, Kowloon Tong (Hong Kong)], E-mail: [email protected] 2010-04-15 In this work, the influence of Magnolol on the bystander effect in alpha-particle irradiated Chinese hamster ovary (CHO) cells was examined. The bystander effect was studied through medium transfer experiments. Cytokinesis-block micronucleus (CBMN) assay was performed to quantify the chromosome damage induced by alpha-particle irradiation. Our results showed that the alpha-particle induced micronuclei (MN) frequencies were suppressed with the presence of Magnolol. 5. Cement analysis by particle-induced prompt photon spectrometry: comparison of the effect of different charged particle beams International Nuclear Information System (INIS) Gihwala, D.; Peisach, M. 1985-01-01 Standard cements were analysed by particle-induced prompt photon spectrometry (PIPPS) using 4,75-MeV protons, 5-MeV 4 He+ ions, and 2-MeV deuterons. Precision and sensitivity attainable were compared. Protons and alpha-particles were comparable for the determination of F, Na, Mg and Si. Protons were preferred for P and Ca, and alpha-particles for the direct determination of O. Sources of interference are discussed with particular reference to delayed gamma-ray emission from deuteron bombardment 6. Quantitative surface analysis using deuteron-induced nuclear reactions International Nuclear Information System (INIS) Afarideh, Hossein 1991-01-01 The nuclear reaction analysis (NRA) technique consists of looking at the energies of the reaction products which uniquely define the particular elements present in the sample and it analysis the yield/energy distribution to reveal depth profiles. A summary of the basic features of the nuclear reaction analysis technique is given, in particular emphasis is placed on quantitative light element determination using (d,p) and (d,alpha) reactions. The experimental apparatus is also described. Finally a set of (d,p) spectra for the elements Z=3 to Z=17 using 2 MeV incident deutrons is included together with example of more applications of the (d,alpha) spectra. (author) 7. Quid-Induced Lichenoid Reactions: A Prevalence Study OpenAIRE Vishal Dang; Madhav Nagpal 2011-01-01 White lesions of the oral mucosa are of concern to the dental surgeon in view of the fact that some of these may be potentially malignant. 
Oral lichen planus (OLP) and oral lichenoid reactions (OLR) share similar clinical appearances but need to be carefully distinguished because of their different etiologies and clinical behaviour. This study screened a population of 5,017 in a house-to-house field survey for tobacco use and investigated the prevalence of oral lichenoid reactions in the 98 quid... 8. Multifragmentation in intermediate energy 129Xe-induced heavy-ion reactions International Nuclear Information System (INIS) Tso, Kin 1996-05-01 The 129Xe-induced reactions on natCu, 89Y, 165Ho, and 197Au at bombarding energies of E/A = 40 and 60 MeV have been studied theoretically and experimentally in order to establish the underlying mechanism of multifragmentation at intermediate energy heavy-ion collisions. Nuclear disks formed in central heavy-ion collisions, as simulated by means of Boltzmann-like kinetic equations, break up into several fragments due to a new kind of Rayleigh-like surface instability. A sheet of liquid, stable in the limit of non-interacting surfaces, is shown to become unstable due to surface-surface interactions. The onset of this instability is determined analytically. A thin bubble behaves like a sheet and is susceptible to the surface instability through the crispation mode. The Coulomb effects associated with the depletion of charges in the central cavity of nuclear bubbles are investigated. The onset of Coulomb instability is demonstrated for perturbations of the radial mode. Experimental intermediate-mass-fragment multiplicity distributions for the 129Xe-induced reactions are shown to be binomial at each transverse energy. From these distributions, independent of the specific target, an elementary binary decay probability p can be extracted that has a thermal dependence. Thus it is inferred that multifragmentation is reducible to a combination of nearly independent emission processes. If sequential decay is assumed, the increase of p with transverse energy implies a contraction of the emission time scale. The sensitivity of p to the lower Z threshold in the definition of intermediate-mass-fragments points to a physical Poisson simulations of the particle multiplicities show that the weak auto-correlation between the fragment multiplicity and the transverse energy does not distort a Poisson distribution into a binomial distribution. The effect of device efficiency on the experimental results has also been studied. 9. Multifragmentation in intermediate energy 129Xe-induced heavy-ion reactions Energy Technology Data Exchange (ETDEWEB) Tso, Kin [Univ. of California, Berkeley, CA (United States) 1996-05-01 The 129Xe-induced reactions on natCu, 89Y, 165Ho, and 197Au at bombarding energies of E/A = 40 & 60 MeV have been studied theoretically and experimentally in order to establish the underlying mechanism of multifragmentation at intermediate energy heavy-ion collisions. Nuclear disks formed in central heavy-ion collisions, as simulated by means of Boltzmann-like kinetic equations, break up into several fragments due to a new kind of Rayleigh-like surface instability. A sheet of liquid, stable in the limit of non-interacting surfaces, is shown to become unstable due to surface-surface interactions. The onset of this instability is determined analytically. A thin bubble behaves like a sheet and is susceptible to the surface instability through the crispation mode. The Coulomb effects associated with the depletion of charges in the central cavity of nuclear bubbles are investigated.
The onset of Coulomb instability is demonstrated for perturbations of the radial mode. Experimental intermediate-mass-fragment multiplicity distributions for the 129Xe-induced reactions are shown to be binomial at each transverse energy. From these distributions, independent of the specific target, an elementary binary decay probability p can be extracted that has a thermal dependence. Thus it is inferred that multifragmentation is reducible to a combination of nearly independent emission processes. If sequential decay is assumed, the increase of p with transverse energy implies a contraction of the emission time scale. The sensitivity of p to the lower Z threshold in the definition of intermediate-mass-fragments points to a physical Poisson simulations of the particle multiplicities show that the weak auto-correlation between the fragment multiplicity and the transverse energy does not distort a Poisson distribution into a binomial distribution. The effect of device efficiency on the experimental results has also been studied. 10. Physiological environment induce quick response - slow exhaustion reactions Directory of Open Access Journals (Sweden) Noriko eHiroi 2011-09-01 Full Text Available In vivo environments are highly crowded and inhomogeneous, which may affect reaction processes in cells. In this study we examined the effects of intracellular crowding and an inhomogeneity on the behavior of in vivo reactions by calculating the spectral dimension (ds, which can be translated into the reaction rate function. We compared estimates of anomaly parameters obtained from Fluorescence Correlation Spectroscopy (FCS data with fractal dimensions derived from Transmission Electron Microscopy (TEM image analysis. FCS analysis indicated that the anomalous property was linked to physiological structure. Subsequent TEM analysis provided an in vivo illustration; soluble molecules likely percolate between intracellular clusters, which are constructed in a self-organizing manner. We estimated a cytoplasmic spectral dimension ds to be 1.39 ± 0.084. This result suggests that in vivo reactions initially run faster than the same reactions in a homogeneous space; this conclusion is consistent with the anomalous character indicated by FCS analysis. We further showed that these results were compatible with our Monte-Carlo simulation in which the anomalous behavior of mobile molecules correlates with the intracellular environment, leading to description as a percolation cluster, as demonstrated using TEM analysis. We confirmed by the simulation that the above-mentioned in vivo like properties are different from those of homogeneously concentrated environments. Additionally, simulation results indicated that crowding level of an environment might affect diffusion rate of reactant. Such knowledge of the spatial information enables us to construct realistic models for in vivo diffusion and reaction systems. 11. Motorcycle exhaust particles induce airway inflammation and airway hyperresponsiveness in BALB/C mice. Science.gov (United States) Lee, Chen-Chen; Liao, Jiunn-Wang; Kang, Jaw-Jou 2004-06-01 A number of large studies have reported that environmental pollutants from fossil fuel combustion can cause deleterious effects to the immune system, resulting in an allergic reaction leading to respiratory tract damage. In this study, we investigated the effect of motorcycle exhaust particles (MEP), a major pollutant in the Taiwan urban area, on airway inflammation and airway hyperresponsiveness in laboratory animals. 
BALB/c mice were instilled intratracheally (i.t.) with 1.2 mg/kg and 12 mg/kg of MEP, which was collected from two-stroke motorcycle engines. The mice were exposed 3 times i.t. with MEP, and various parameters for airway inflammation and hyperresponsiveness were sequentially analyzed. We found that MEP would induce airway and pulmonary inflammation characterized by infiltration of eosinophils, neutrophils, lymphocytes, and macrophages in bronchoalveolar lavage fluid (BALF) and inflammatory cell infiltration in lung. In addition, MEP treatment enhanced BALF interleukin-4 (IL-4), IL-5, and interferon-gamma (IFN-gamma) cytokine levels and serum IgE production. Bronchial response measured by unrestrained plethysmography with methacholine challenge showed that MEP treatment induced airway hyperresponsiveness (AHR) in BALB/c mice. The chemical components in MEP were further fractionated with organic solvents, and we found that the benzene-extracted fraction exerts a similar biological effect as seen with MEP, including airway inflammation, increased BALF IL-4, serum IgE production, and induction of AHR. In conclusion, we present evidence showing that the filter-trapped particles emitted from the unleaded-gasoline-fueled two-stroke motorcycle engine may induce proinflammatory and proallergic response profiles in the absence of exposure to allergen. 12. Selective population of high-j states via heavy-ion-induced transfer reactions International Nuclear Information System (INIS) Bond, P.D. 1982-01-01 One of the early hopes of heavy-ion-induced transfer reactions was to populate states not seen easily or at all by other means. To date, however, I believe it is fair to say that spectroscopic studies of previously unknown states have had, at best, limited success. Despite the early demonstration of selectivity with cluster transfer to high-lying states in light nuclei, the study of heavy-ion-induced transfer reactions has emphasized the reaction mechanism. The value of using two of these reactions for spectroscopy of high spin states is demonstrated: 143 Nd( 16 O, 15 O) 144 Nd and 170 Er( 16 O, 15 Oγ) 171 Er 13. Flow field induced particle accumulation inside droplets in rectangular channels. Science.gov (United States) Hein, Michael; Moskopp, Michael; Seemann, Ralf 2015-07-07 Particle concentration is a basic operation needed to perform washing steps or to improve subsequent analysis in many (bio)-chemical assays. In this article we present field free, hydrodynamic accumulation of particles and cells in droplets flowing within rectangular micro-channels. Depending on droplet velocity, particles either accumulate at the rear of the droplet or are dispersed over the entire droplet cross-section. We show that the observed particle accumulation behavior can be understood by a coupling of particle sedimentation to the internal flow field of the droplet. The changing accumulation patterns are explained by a qualitative change of the internal flow field. The topological change of the internal flow field, however, is explained by the evolution of the droplet shape with increasing droplet velocity altering the friction with the channel walls. In addition, we demonstrate that accumulated particles can be concentrated, removing excess dispersed phase by splitting the droplet at a simple channel junction. 14. Coincidence measurements of intermediate mass fragments produced in /sup 32/S-induced reactions on Ag at E/A = 22.5 MeV International Nuclear Information System (INIS) Fields, D.J.; Lynch, W.G.; Nayak, T.K. 
1986-01-01 Single- and two-particle inclusive cross sections for the production of light nuclei and intermediate mass fragments, 3 ≤ Z ≤ 24, were measured at angles well beyond the grazing angle for 32S-induced reactions on Ag at 720 MeV. Information about fragment multiplicities and reaction dynamics was extracted from measurements of light particles, intermediate mass fragments, and target-like residues in coincidence with intermediate mass fragments. Incomplete linear momentum transfer and non-compound-particle emission are important features of collisions producing intermediate mass fragments. About half of the incident kinetic energy in these collisions is converted into internal excitation. The mean multiplicity of intermediate mass fragments is of the order of 1. Particle correlations are strongly enhanced in the plane which contains the intermediate mass fragment and the beam axis. 15. Breakup-fusion analyses of light ion induced stripping reactions to both bound and unbound regions International Nuclear Information System (INIS) Lee, Y.J. 1987-01-01 The breakup-fusion theory developed recently by our group at the University of Texas has been very successful in explaining observed continuum spectra of particles emitted from breakup-type reactions, such as (d,p), (h,p), (h,d), (α,p), and (α,t) reactions. The aim of the present work is to extend the breakup-fusion formalism to calculate the usual stripping reaction, in which a nucleon or a nucleon cluster is transferred into a bound orbit in the target nucleus. With this extension, it is now possible to calculate the spectra of particles emitted from stripping-type reactions. We particularly explore the possibility of using the breakup-fusion theory as a spectroscopic tool to obtain information about single-particle states in both bound and unbound regions. For this purpose, we extend the theory so as to include the spin-orbit interaction between the transferred particle and the target, which has been neglected in all the breakup-fusion studies made in the past. We then apply the extended breakup-fusion theory to analyze data of (d,p) and (α,t) reactions. The results of the calculations fit the observed spectra very well, and the BF method is shown indeed to be useful for extracting information about the single-particle states observed as bumps in both the continuum and discrete regions. 16. Radiotherapy-Induced Skin Reactions Induce Fibrosis Mediated by TGF-β1 Cytokine Directory of Open Access Journals (Sweden) Cherley Borba Vieira de Andrade 2017-04-01 Full Text Available Purpose: This study aimed to investigate radiation-induced lesions on the skin in an experimental animal model. Methods and Materials: Cutaneous wounds were induced in Wistar rats by 4 MeV energy electron beam irradiation, using a dose rate of 240 cGy/min, for 3 different doses (10 Gy, 40 Gy, and 60 Gy). The skin was observed 5, 10, and 25 days (D) after ionizing radiation exposure. Results: An inflammatory infiltrate was observed at D5 and D10 for the 40 Gy and 60 Gy groups, and a progressive increase of transforming growth factor β1 is associated with this process. An alteration of collagen fibers could also be noted in the high-dose groups. Conclusion: It was observed that the lesions caused by ionizing radiation in rats were very similar to radiodermatitis in patients under radiotherapy treatment. Advances in Knowledge: This study is important to develop strategies to prevent radiation-induced skin reactions. 17.
Light-particle emission as a probe of the rotational degrees of freedom in deep-inelastic reactions International Nuclear Information System (INIS) Sobotka, L.G. 1982-05-01 The emission of alpha particles in coincidence with the most deeply inelastic heavy-ion reactions has been studied for 181Ta + 165Ho at 1354 MeV laboratory energy and natAg + 84Kr at 664 MeV. Alpha particle energy spectra and angular distributions, in coincidence with a projectile-like fragment, were acquired both in the reaction plane and out of the reaction plane at a fixed in-plane angle. The in-plane data for both systems are employed to show that the bulk of the alpha particles in coincidence with the deep-inelastic exit channel can be explained by evaporation from the fully accelerated fragments. Average velocity diagrams, α-particle energy spectra as a function of angle in several rest frames, and α-particle angular distributions are presented. The out-of-plane alpha particle angular distributions and the gamma-ray multiplicities are used to study the transfer and partitioning of angular momentum between the two fragments. For the natAg + 84Kr system, individual fragment spins are extracted from the alpha particle angular distributions as a function of mass asymmetry, while the sum of the fragment spins is derived from the gamma-ray multiplicities. These data, together with the fragment kinetic energies, are consistent with rigid rotation of an intermediate complex consisting of two substantially deformed spheroids in near proximity. These data also indicate that some angular momentum fractionation exists at the largest asymmetries examined. Out-of-plane alpha particle distributions, gamma-ray multiplicities, fragment spins as well as the formalism for the spin evaluation at various levels of sophistication are presented. 18. Light-particle emission as a probe of the rotational degrees of freedom in deep-inelastic reactions Energy Technology Data Exchange (ETDEWEB) Sobotka, L.G. 1982-05-01 The emission of alpha particles in coincidence with the most deeply inelastic heavy-ion reactions has been studied for 181Ta + 165Ho at 1354 MeV laboratory energy and natAg + 84Kr at 664 MeV. Alpha particle energy spectra and angular distributions, in coincidence with a projectile-like fragment, were acquired both in the reaction plane and out of the reaction plane at a fixed in-plane angle. The in-plane data for both systems are employed to show that the bulk of the alpha particles in coincidence with the deep-inelastic exit channel can be explained by evaporation from the fully accelerated fragments. Average velocity diagrams, α-particle energy spectra as a function of angle in several rest frames, and α-particle angular distributions are presented. The out-of-plane alpha particle angular distributions and the gamma-ray multiplicities are used to study the transfer and partitioning of angular momentum between the two fragments. For the natAg + 84Kr system, individual fragment spins are extracted from the alpha particle angular distributions as a function of mass asymmetry, while the sum of the fragment spins is derived from the gamma-ray multiplicities. These data, together with the fragment kinetic energies, are consistent with rigid rotation of an intermediate complex consisting of two substantially deformed spheroids in near proximity. These data also indicate that some angular momentum fractionation exists at the largest asymmetries examined.
Out-of-plane alpha particle distributions, gamma-ray multiplicities, fragment spins as well as the formalism for the spin evaluation at various levels of sophistication are presented. 19. Microtubule self-organisation by reaction-diffusion processes causes collective transport and organisation of cellular particles Directory of Open Access Journals (Sweden) Demongeot Jacques 2004-06-01 Full Text Available Abstract Background The transport of intra-cellular particles by microtubules is a major biological function. Under appropriate in vitro conditions, microtubule preparations behave as a 'complex' system and show 'emergent' phenomena. In particular, they form dissipative structures that self-organise over macroscopic distances by a combination of reaction and diffusion. Results Here, we show that self-organisation also gives rise to a collective transport of colloidal particles along a specific direction. Particles, such as polystyrene beads, chromosomes, nuclei, and vesicles are carried at speeds of several microns per minute. The process also results in the macroscopic self-organisation of these particles. After self-organisation is completed, they show the same pattern of organisation as the microtubules. Numerical simulations of a population of growing and shrinking microtubules, incorporating experimentally realistic reaction dynamics, predict self-organisation. They forecast that during self-organisation, macroscopic parallel arrays of oriented microtubules form which cross the reaction space in successive waves. Such travelling waves are capable of transporting colloidal particles. The fact that in the simulations, the aligned arrays move along the same direction and at the same speed as the particles move, suggest that this process forms the underlying mechanism for the observed transport properties. Conclusions This process constitutes a novel physical chemical mechanism by which chemical energy is converted into collective transport of colloidal particles along a given direction. Self-organisation of this type provides a new mechanism by which intra cellular particles such as chromosomes and vesicles can be displaced and simultaneously organised by microtubules. It is plausible that processes of this type occur in vivo. 20. Microtubule self-organisation by reaction-diffusion processes causes collective transport and organisation of cellular particles Science.gov (United States) Glade, Nicolas; Demongeot, Jacques; Tabony, James 2004-01-01 Background The transport of intra-cellular particles by microtubules is a major biological function. Under appropriate in vitro conditions, microtubule preparations behave as a 'complex' system and show 'emergent' phenomena. In particular, they form dissipative structures that self-organise over macroscopic distances by a combination of reaction and diffusion. Results Here, we show that self-organisation also gives rise to a collective transport of colloidal particles along a specific direction. Particles, such as polystyrene beads, chromosomes, nuclei, and vesicles are carried at speeds of several microns per minute. The process also results in the macroscopic self-organisation of these particles. After self-organisation is completed, they show the same pattern of organisation as the microtubules. Numerical simulations of a population of growing and shrinking microtubules, incorporating experimentally realistic reaction dynamics, predict self-organisation. 
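As a rough illustration of what a simulated population of growing and shrinking microtubules means numerically, the sketch below implements only the textbook two-state dynamic-instability update (growth and shrinkage interrupted by stochastic catastrophe and rescue events). All rates are invented for the example, and the spatial coupling, reaction-diffusion chemistry and self-organisation physics of the model described in the abstract are deliberately left out; this is not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)

n_mt, n_steps, dt = 1000, 5000, 0.1          # filaments, time steps, step size in s
v_grow, v_shrink = 0.05, 0.20                # growth / shrinkage speeds, um/s (invented)
f_cat, f_res = 0.005, 0.02                   # catastrophe / rescue frequencies, 1/s (invented)

length = np.zeros(n_mt)                      # filament lengths in um
growing = np.ones(n_mt, dtype=bool)          # True = growing, False = shrinking

for _ in range(n_steps):
    # Stochastic switching between the growing and shrinking states.
    to_shrink = growing & (rng.random(n_mt) < f_cat * dt)
    to_grow = ~growing & (rng.random(n_mt) < f_res * dt)
    growing = (growing & ~to_shrink) | to_grow
    # Elongation or shortening during this time step.
    length += np.where(growing, v_grow, -v_shrink) * dt
    # Filaments that shrink to zero length renucleate and start growing again.
    gone = length <= 0.0
    length[gone] = 0.0
    growing[gone] = True

print(f"mean filament length after {n_steps * dt:.0f} s: {length.mean():.2f} um")
```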
They forecast that during self-organisation, macroscopic parallel arrays of oriented microtubules form which cross the reaction space in successive waves. Such travelling waves are capable of transporting colloidal particles. The fact that in the simulations, the aligned arrays move along the same direction and at the same speed as the particles move, suggest that this process forms the underlying mechanism for the observed transport properties. Conclusions This process constitutes a novel physical chemical mechanism by which chemical energy is converted into collective transport of colloidal particles along a given direction. Self-organisation of this type provides a new mechanism by which intra cellular particles such as chromosomes and vesicles can be displaced and simultaneously organised by microtubules. It is plausible that processes of this type occur in vivo. PMID:15176973 1. A characterisation of the magnetically induced movement of NdFeB-particles in magnetorheological elastomers Science.gov (United States) Schümann, M.; Borin, D. Y.; Huang, S.; Auernhammer, G. K.; Müller, R.; Odenbach, S. 2017-09-01 Magnetorheological elastomers are a type of smart hybrid material where elastic properties of a soft elastomer matrix are combined with magnetic properties of magnetic micro particles. This combination leads to a complex interplay of magnetic and elastic phenomena, of which the magnetorheological effect is the best described. In this paper, magnetically hard NdFeB-particles were used to obtain remanent magnetic properties. X-ray microtomography has been utilised to analyse the particle movement induced by magnetic fields. A particle tracking was performed; thus, it was possible to characterise the movement of individual particles. Beyond that, a comprehensive analysis of the orientation of all particles was performed at different states of magnetisation and global particle arrangements. For the first time, this method was successfully applied to a magnetorheological material with a technically relevant amount of magnetic NdFeB-particles. A significant impact of the magnetic field on the rotation and translation of the particles was shown. 2. Encouragement of Enzyme Reaction Utilizing Heat Generation from Ferromagnetic Particles Subjected to an AC Magnetic Field. Science.gov (United States) Suzuki, Masashi; Aki, Atsushi; Mizuki, Toru; Maekawa, Toru; Usami, Ron; Morimoto, Hisao 2015-01-01 We propose a method of activating an enzyme utilizing heat generation from ferromagnetic particles under an ac magnetic field. We immobilize α-amylase on the surface of ferromagnetic particles and analyze its activity. We find that when α-amylase/ferromagnetic particle hybrids, that is, ferromagnetic particles, on which α-amylase molecules are immobilized, are subjected to an ac magnetic field, the particles generate heat and as a result, α-amylase on the particles is heated up and activated. We next prepare a solution, in which α-amylase/ferromagnetic particle hybrids and free, nonimmobilized chitinase are dispersed, and analyze their activities. We find that when the solution is subjected to an ac magnetic field, the activity of α-amylase immobilized on the particles increases, whereas that of free chitinase hardly changes; in other words, only α-amylase immobilized on the particles is selectively activated due to heat generation from the particles. 3. Encouragement of Enzyme Reaction Utilizing Heat Generation from Ferromagnetic Particles Subjected to an AC Magnetic Field. 
Directory of Open Access Journals (Sweden) Masashi Suzuki Full Text Available We propose a method of activating an enzyme utilizing heat generation from ferromagnetic particles under an ac magnetic field. We immobilize α-amylase on the surface of ferromagnetic particles and analyze its activity. We find that when α-amylase/ferromagnetic particle hybrids, that is, ferromagnetic particles, on which α-amylase molecules are immobilized, are subjected to an ac magnetic field, the particles generate heat and as a result, α-amylase on the particles is heated up and activated. We next prepare a solution, in which α-amylase/ferromagnetic particle hybrids and free, nonimmobilized chitinase are dispersed, and analyze their activities. We find that when the solution is subjected to an ac magnetic field, the activity of α-amylase immobilized on the particles increases, whereas that of free chitinase hardly changes; in other words, only α-amylase immobilized on the particles is selectively activated due to heat generation from the particles. 4. Particle size dependence on oxygen reduction reaction activity of electrodeposited TaOx catalysts in acidic media KAUST Repository Seo, J. 2013-11-13 The size dependence of the oxygen reduction reaction activity was studied for TaOx nanoparticles electrodeposited on carbon black for application to polymer electrolyte fuel cells (PEFCs). Compared with a commercial Ta2O5 material, the ultrafine oxide nanoparticles exhibited a distinctively high onset potential different from that of the bulky oxide particles. 5. Ultrafine and fine particle formation in a naturally ventilated office as a result of reactions between ozone and scented products DEFF Research Database (Denmark) Toftum, Jørn; Dijken, F. v. 2003-01-01 Ultrafine and fine particle formation as a result of chemical reactions between ozone and four different air fresheners and a typical lemon-scented domestic cleaner was studied in a fully furnished, naturally ventilated office. The study showed that under conditions representative of those... 6. Numerical simulation by a random particle method of Deuterium-Tritium fusion reactions in a plasma* Directory of Open Access Journals (Sweden) Charles Fréderique 2013-01-01 Full Text Available We propose and we justify a Monte-Carlo algorithm which solves a spatially homogeneous kinetic equation of Boltzmann type that models the fusion reaction between a deuterium ion and a tritium ion, and giving an α particle and a neutron. The proposed algorithm is validated with the use of explicit solutions of the kinetic model obtained by replacing the fusion cross-section by a Maxwellian cross section. On propose et on justifie un algorithme de type Monte-Carlo permettant de résoudre un modèle cinétique homogène en espace de type Boltzmann modélisant la réaction de fusion entre un ion deutérium et un ion tritium, et donnant une particule α et un neutron. L’algorithme proposé est par ailleurs validé via des solutions explicites du modèle cinétique obtenues en remplaçant la section efficace de fusion par une section efficace maxwellienne. 7. Method and device for thermal control of biological and chemical reactions using magnetic particles or magnetic beads and variable magnetic fields OpenAIRE Zilch, C.; Gerdes, W.; Bauer, J.; Holschuh, K. 
2009-01-01 The invention relates to a method for the thermal control of at least one temperature-dependent enzymatic reaction in the presence of magnetic particles, particularly nanoparticles, or magnetic beads, in vitro by heating the magnetic beads or magnetic particles to at least one defined target temperature using alternating magnetic fields. The thermally controllable enzymatic reaction carried out with the method according to the invention is preferably a PCR reaction or another reaction for elo... 8. Population of Nuclei Via 7Li-Induced Binary Reactions International Nuclear Information System (INIS) Clark, Rodney M.; Phair, Larry W.; Descovich, M.; Cromaz, Mario; Deleplanque, M.A.; Fall on, Paul; Lee, I-Yang; Macchiavelli, A.O.; McMahan, Margaret A.; Moretto, Luciano G.; Rodriguez-Vieitez, E.; Sinha, Shrabani; Stephens, Frank S.; Ward, David; Wiedeking, Mathis 2005-01-01 The authors have investigated the population of nuclei formed in binary reactions involving 7 Li beams on targets of 160 Gd and 184 W. The 7 Li + 184 W data were taken in the first experiment using the LIBERACE Ge-array in combination with the STARS Si ΔE-E telescope system at the 88-Inch Cyclotron of the Lawrence Berkeley National Laboratory. By using the Wilczynski binary transfer model, in combination with a standard evaporation model, they are able to reproduce the experimental results. This is a useful method for predicting the population of neutron-rich heavy nuclei formed in binary reactions involving beams of weakly bound nuclei formed in binary reactions involving beams of weakly bound nuclei and will be of use in future spectroscopic studies 9. Geraniin suppresses RANKL-induced osteoclastogenesis in vitro and ameliorates wear particle-induced osteolysis in mouse model Energy Technology Data Exchange (ETDEWEB) Xiao, Fei; Zhai, Zanjing; Jiang, Chuan; Liu, Xuqiang; Li, Haowei; Qu, Xinhua [Department of Orthopedics, Shanghai Key Laboratory of Orthopedic Implant, Shanghai Ninth People' s Hospital, Shanghai Jiaotong University School of Medicine, Shanghai (China); Ouyang, Zhengxiao [Department of Orthopedics, Shanghai Key Laboratory of Orthopedic Implant, Shanghai Ninth People' s Hospital, Shanghai Jiaotong University School of Medicine, Shanghai (China); Department of Orthopaedics, Hunan Provincial Tumor Hospital and Tumor Hospital of Xiangya School of Medicine, Central South University, Changsha, Hunan 410013 (China); Fan, Qiming; Tang, Tingting [Department of Orthopedics, Shanghai Key Laboratory of Orthopedic Implant, Shanghai Ninth People' s Hospital, Shanghai Jiaotong University School of Medicine, Shanghai (China); Qin, An, E-mail: [email protected] [Department of Orthopedics, Shanghai Key Laboratory of Orthopedic Implant, Shanghai Ninth People' s Hospital, Shanghai Jiaotong University School of Medicine, Shanghai (China); Gu, Dongyun, E-mail: [email protected] [Department of Orthopedics, Shanghai Key Laboratory of Orthopedic Implant, Shanghai Ninth People' s Hospital, Shanghai Jiaotong University School of Medicine, Shanghai (China); Engineering Research Center of Digital Medicine and Clinical Translation, Ministry of Education of PR China (China); School of Biomedical Engineering, Shanghai Jiao Tong University, 1954 Huashan Road, Shanghai 200030 (China) 2015-01-01 Wear particle-induced osteolysis and subsequent aseptic loosening remains the most common complication that limits the longevity of prostheses. 
Wear particle-induced osteoclastogenesis is known to be responsible for extensive bone erosion that leads to prosthesis failure. Thus, inhibition of osteoclastic bone resorption may serve as a therapeutic strategy for the treatment of wear particle induced osteolysis. In this study, we demonstrated for the first time that geraniin, an active natural compound derived from Geranium thunbergii, ameliorated particle-induced osteolysis in a Ti particle-induced mouse calvaria model in vivo. We also investigated the mechanism by which geraniin exerts inhibitory effects on osteoclasts. Geraniin inhibited RANKL-induced osteoclastogenesis in a dose-dependent manner, evidenced by reduced osteoclast formation and suppressed osteoclast specific gene expression. Specifically, geraniin inhibited actin ring formation and bone resorption in vitro. Further molecular investigation demonstrated geraniin impaired osteoclast differentiation via the inhibition of the RANKL-induced NF-κB and ERK signaling pathways, as well as suppressed the expression of the key osteoclast transcription factors NFATc1 and c-Fos. Collectively, our data suggested that geraniin exerts inhibitory effects on osteoclast differentiation in vitro and suppresses Ti particle-induced osteolysis in vivo. Geraniin is therefore a potential natural compound for the treatment of wear particle induced osteolysis in prosthesis failure. - Highlights: • Geraniin suppresses osteoclasts formation and function in vitro. • Geraniin impairs RANKL-induced nuclear factor-κB and ERK signaling pathway. • Geraniin suppresses osteolysis in vivo. • Geraniin may be used for treating osteoclast related diseases. 10. Cross sections and thermonuclear reaction rates of proton-induced reactions on 37Cl International Nuclear Information System (INIS) Weber, R.O.; Tingwell, C.I.W.; Mitchell, L.W.; Sevior, M.E.; Sargood, D.G. 1984-01-01 The yields of γ-rays from the reactions 37Cl(p,γ)38Ar and 37Cl(p,αγ)34S have been measured as a function of bombarding energy over the ranges 0.65-2.15 MeV and 1.25-2.15 MeV respectively, and the yield of neutrons from 37Cl(p,n)37Ar from threshold to 2.50 MeV. The results are compared with global statistical-model calculations and thermonuclear reaction rates are calculated for the temperature range 5 × 10^8 - 10^10 K. The significance of these thermonuclear reaction rates for stellar nucleosynthesis calculations is discussed. 11. Synergistic effects in radiation-induced particle ejection from solid surfaces International Nuclear Information System (INIS) Itoh, Noriaki 1990-01-01 A description is given of radiation-induced particle ejection from solid surfaces, emphasizing synergistic effects arising from multi-species particle irradiation and from irradiation under complex environments. First, it is pointed out that synergisms can be treated by introducing the effects of material modification on radiation-induced particle ejection. As examples of the effects of surface modification on the sputtering induced by elastic encounters, sputtering of alloys and chemical sputtering of graphite are briefly discussed. Then the particle ejection induced by electronic encounters is explained, emphasizing the differences in behavior from material to material. The possible synergistic effects of electronic and elastic encounters are also described. Lastly, we point out the importance of understanding the elementary processes of material-particle interaction and of developing computer codes describing material behaviors under irradiation.
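The thermonuclear reaction rates quoted in the 37Cl entry above are obtained, in the classical treatment, by folding the measured cross section with a Maxwell-Boltzmann energy distribution, <σv> = (8/πμ)^(1/2) (kT)^(-3/2) ∫ σ(E) E exp(-E/kT) dE. The sketch below evaluates this integral numerically as an illustration only; the Gaussian "cross section" and the mass-number estimate of the reduced mass are placeholders, not the actual 37Cl(p,γ) data or the authors' code.

```python
import numpy as np

K_BOLTZ_MEV = 8.617e-11      # Boltzmann constant, MeV/K
AMU_MEV = 931.494            # atomic mass unit, MeV/c^2
C_CM_S = 2.998e10            # speed of light, cm/s

def rate_per_pair(temperature, sigma, mu_amu, e_grid):
    """Maxwell-Boltzmann averaged <sigma v> in cm^3/s, for sigma(E) in cm^2 and E in MeV."""
    kT = K_BOLTZ_MEV * temperature
    integrand = sigma(e_grid) * e_grid * np.exp(-e_grid / kT)
    # Trapezoidal integration over the energy grid, units cm^2 MeV^2.
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(e_grid))
    mu = mu_amu * AMU_MEV / C_CM_S**2            # reduced mass in MeV s^2/cm^2
    return np.sqrt(8.0 / (np.pi * mu)) * kT**-1.5 * integral

# Placeholder resonance-like cross section peaked near 1 MeV (purely illustrative).
sigma = lambda e: 1.0e-27 * np.exp(-((e - 1.0) / 0.1) ** 2)   # cm^2
energies = np.linspace(0.01, 3.0, 3000)                       # MeV
mu_est = 1.0 * 37.0 / 38.0   # mass-number estimate of the p + 37Cl reduced mass, amu
for T in (5.0e8, 1.0e9, 5.0e9):                               # K
    print(f"T = {T:.1e} K: <sigma v> = {rate_per_pair(T, sigma, mu_est, energies):.3e} cm^3/s")
```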
(author) 12. Quasifree (p, 2p) Reactions on Oxygen Isotopes: Observation of Isospin Independence of the Reduced Single-Particle Strength. Science.gov (United States) Atar, L; Paschalis, S; Barbieri, C; Bertulani, C A; Díaz Fernández, P; Holl, M; Najafi, M A; Panin, V; Alvarez-Pol, H; Aumann, T; Avdeichikov, V; Beceiro-Novo, S; Bemmerer, D; Benlliure, J; Boillos, J M; Boretzky, K; Borge, M J G; Caamaño, M; Caesar, C; Casarejos, E; Catford, W; Cederkall, J; Chartier, M; Chulkov, L; Cortina-Gil, D; Cravo, E; Crespo, R; Dillmann, I; Elekes, Z; Enders, J; Ershova, O; Estrade, A; Farinon, F; Fraile, L M; Freer, M; Galaviz Redondo, D; Geissel, H; Gernhäuser, R; Golubev, P; Göbel, K; Hagdahl, J; Heftrich, T; Heil, M; Heine, M; Heinz, A; Henriques, A; Hufnagel, A; Ignatov, A; Johansson, H T; Jonson, B; Kahlbow, J; Kalantar-Nayestanaki, N; Kanungo, R; Kelic-Heil, A; Knyazev, A; Kröll, T; Kurz, N; Labiche, M; Langer, C; Le Bleis, T; Lemmon, R; Lindberg, S; Machado, J; Marganiec-Gałązka, J; Movsesyan, A; Nacher, E; Nikolskii, E Y; Nilsson, T; Nociforo, C; Perea, A; Petri, M; Pietri, S; Plag, R; Reifarth, R; Ribeiro, G; Rigollet, C; Rossi, D M; Röder, M; Savran, D; Scheit, H; Simon, H; Sorlin, O; Syndikus, I; Taylor, J T; Tengblad, O; Thies, R; Togano, Y; Vandebrouck, M; Velho, P; Volkov, V; Wagner, A; Wamers, F; Weick, H; Wheldon, C; Wilson, G L; Winfield, J S; Woods, P; Yakorev, D; Zhukov, M; Zilges, A; Zuber, K 2018-02-02 Quasifree one-proton knockout reactions have been employed in inverse kinematics for a systematic study of the structure of stable and exotic oxygen isotopes at the R^{3}B/LAND setup with incident beam energies in the range of 300-450 MeV/u. The oxygen isotopic chain offers a large variation of separation energies that allows for a quantitative understanding of single-particle strength with changing isospin asymmetry. Quasifree knockout reactions provide a complementary approach to intermediate-energy one-nucleon removal reactions. Inclusive cross sections for quasifree knockout reactions of the type ^{A}O(p,2p)^{A-1}N have been determined and compared to calculations based on the eikonal reaction theory. The reduction factors for the single-particle strength with respect to the independent-particle model were obtained and compared to state-of-the-art ab initio predictions. The results do not show any significant dependence on proton-neutron asymmetry. 13. Evaluation of isomeric excitation functions in neutron induced reactions International Nuclear Information System (INIS) Grudzevich, O.; Ignatyuk, A.; Zolotarev, K. 1992-01-01 The possibilities of isomer levels experimental excitation functions description with theoretical models are discussed. It is shown that the experimental data in many cases can be described by theoretical models quite well without parameter fitting. However, large discrepancies are observed for some reactions. In our opinion, these discrepancies are due to uncertainties of discrete level schemes, schemes of gamma-transitions between levels and spin dependence of level density. Small values of isomeric ratios (< 0.1) have been described with the largest errors. The simple formulae for energy dependence of isomeric ratio for (n,g) reaction has been proposed. (author). 53 refs, 10 figs, 8 tabs 14. Light induced electron transfer reactions of metal complexes International Nuclear Information System (INIS) Sutin, N.; Creutz, C. 
1980-01-01 Properties of the excited states of tris(2,2'-bipyridine) and tris(1,10-phenanthroline) complexes of chromium(III), iron(II), ruthenium(II), osmium(II), rhodium(III), and iridium(III) are described. The electron transfer reactions of the ground and excited states are discussed and interpreted in terms of the driving force for the reaction and the distortions of the excited states relative to the corresponding ground states. General considerations relevant to the conversion of light into chemical energy are presented and progress in the use of polypyridine complexes to effect the light decomposition of water into hydrogen and oxygen is reviewed 15. Passing particle toroidal precession induced by electric field in a tokamak International Nuclear Information System (INIS) Andreev, V. V.; Ilgisonis, V. I.; Sorokina, E. A. 2013-01-01 Characteristics of a rotation of passing particles in a tokamak with radial electric field are calculated. The expression for time-averaged toroidal velocity of the passing particle induced by the electric field is derived. The electric-field-induced additive to the toroidal velocity of the passing particle appears to be much smaller than the velocity of the electric drift calculated for the poloidal magnetic field typical for the trapped particle. This quantity can even have the different sign depending on the azimuthal position of the particle starting point. The unified approach for the calculation of the bounce period and of the time-averaged toroidal velocity of both trapped and passing particles in the whole volume of plasma column is presented. The results are obtained analytically and are confirmed by 3D numerical calculations of the trajectories of charged particles 16. Passing particle toroidal precession induced by electric field in a tokamak Energy Technology Data Exchange (ETDEWEB) Andreev, V. V. [Peoples' Friendship University of Russia, Ordzhonikidze St. 3, Moscow 117198 (Russian Federation); Ilgisonis, V. I.; Sorokina, E. A. [Peoples' Friendship University of Russia, Ordzhonikidze St. 3, Moscow 117198 (Russian Federation); NRC “Kurchatov Institute”, Kurchatov Sq. 1, Moscow 123182 (Russian Federation) 2013-12-15 Characteristics of a rotation of passing particles in a tokamak with radial electric field are calculated. The expression for time-averaged toroidal velocity of the passing particle induced by the electric field is derived. The electric-field-induced additive to the toroidal velocity of the passing particle appears to be much smaller than the velocity of the electric drift calculated for the poloidal magnetic field typical for the trapped particle. This quantity can even have the different sign depending on the azimuthal position of the particle starting point. The unified approach for the calculation of the bounce period and of the time-averaged toroidal velocity of both trapped and passing particles in the whole volume of plasma column is presented. The results are obtained analytically and are confirmed by 3D numerical calculations of the trajectories of charged particles. 17. Charged-particle thermonuclear reaction rates: IV. Comparison to previous work International Nuclear Information System (INIS) Iliadis, C.; Longland, R.; Champagne, A.E.; Coc, A. 2010-01-01 We compare our Monte Carlo reaction rates (see Paper II of this issue) to previous results that were obtained by using the classical method of computing thermonuclear reaction rates. 
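For reference, the "classical" rate referred to here is the Maxwellian-averaged quantity given by the standard textbook expression below, in which sigma(E) is the reaction cross section, mu the reduced mass of the reacting pair and T the stellar temperature; this generic form is quoted only for orientation and is not taken from the cited evaluation:

\[
N_A \langle \sigma v \rangle \;=\; N_A \,\sqrt{\frac{8}{\pi \mu \,(kT)^{3}}}\;\int_{0}^{\infty} \sigma(E)\, E \, e^{-E/kT}\, \mathrm{d}E .
\]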
For each reaction, the comparison is presented using two types of graphs: the first shows the change in reaction rate uncertainties, while the second displays our new results normalized to the previously recommended reaction rate. We find that the rates have changed significantly for almost all reactions considered here. The changes are caused by (i) our new Monte Carlo method of computing reaction rates (see Paper I of this issue), and (ii) newly available nuclear physics information (see Paper III of this issue). 18. Pre-equilibrium emission of nucleons from reactions induced by medium-energy heavy ions International Nuclear Information System (INIS) Korolija, M.; Holuh, E.; Cindro, N.; Hilscher, D. 1984-01-01 Recent data on fast-nucleon emission in heavy-ion-induced reactions are analysed successfully in terms of pre-equilibrium models; it is shown that the relevant parameters of those models preserve the physical meaning they have in light-ion-induced reactions. The initial exciton number obtained from a Griffin-plot analysis and the initial number of degrees of freedom, which is the relevant parameter of the modified HMB model, appear to be approximately equal for a given reaction at a given energy. It is inferred that, for heavy-ion reactions, the determination of such a parameter is substantially dominated by the centre-of-mass energy per nucleon above the Coulomb barrier, in contrast with the results of nucleon-induced reactions 19. Agrobacterium -induced hypersensitive necrotic reaction in plant cells African Journals Online (AJOL) High necrosis and poor survival rate of target plant tissues are some of the major factors that affect the efficiency of Agrobacterium-mediated T-DNA transfer into plant cells. These factors may be the result of, or linked to, hypersensitive defense reaction in plants to Agrobacterium infection, which may involve the recognition ... 20. Two-pion production in photon-induced reactions Indian Academy of Sciences (India) photoproduction from nuclei is also used to investigate the in-medium modification of meson–meson interactions. ... the observation of an in-medium modification of the vector meson masses can pro- vide a unique .... similar behavior is found in (γ,π+π0) reactions, shown in the right panel of figure 3. Additionally, the peak in ... 1. Two-pion production in photon-induced reactions Indian Academy of Sciences (India) A deeper understanding of the situation is anticipated from a detailed experimental study of meson photoproduction from nuclei in exclusive reactions. In the energy regime above the (1232) resonance, the dominant double pion production channels are of particular interest. Double pion photoproduction from nuclei is ... 2. Distortion effects in pion-induced knock-out reactions International Nuclear Information System (INIS) Jain, B.K. The cross-section for (π + ,π + p) reaction on 12 C is calculated in DWIA at 100 and 180 MeV incident energy. The effect of pion distortion is found to be strong. Around 180 MeV the effect is strongly absorptive while around 10O MeV it is mainly dispersive. (auth.) 3. A primer for electroweak induced low-energy nuclear reactions Indian Academy of Sciences (India) paper is devoted to delineating the unifying features and to an overall synthesis of .... reaction e− + p → n + νe for electrons and protons of very low kinetic energy ..... of Z protons and N = (A − Z) neutrons where A is the total number of nucleons. 4. 
Determination of neutral current couplings from neutrino-induced semi-inclusive pion and inclusive reactions International Nuclear Information System (INIS) Hung, P.Q. 1977-01-01 It is shown that by looking at data from neutrino-induced semi-inclusive pion and inclusive reactions on isoscalar targets along, one can determine completely the neutral current couplings. Predictions for various models are also presented. (Auth.) 5. Neutron-induced capture cross sections via the surrogate reaction method International Nuclear Information System (INIS) Boutoux, G.; Jurado, B.; Aiche, M.; Barreau, G.; Capellan, N.; Companis, I.; Czajkowski, S.; Dassie, D.; Haas, B.; Mathieu, L.; Meot, V.; Bail, A.; Bauge, E.; Daugas, J. M.; Faul, T.; Gaudefroy, L.; Morel, P.; Pillet, N.; Roig, O.; Romain, P.; Taieb, J.; Theroine, C.; Burke, J.T.; Companis, I.; Derkx, X.; Gunsing, F.; Matea, I.; Tassan-Got, L.; Porquet, M.G.; Serot, O. 2011-01-01 The surrogate reaction method is an indirect way of determining cross sections for nuclear reactions that proceed through a compound nucleus. This technique enables neutron-induced cross sections to be extracted for nuclear reactions on short-lived unstable nuclei that otherwise can not be measured. This technique has been successfully applied to determine the neutron-induced fission cross sections of several short-lived nuclei. In this work, we investigate whether this powerful technique can also be used to determine of neutron-induced capture cross sections. For this purpose we use the surrogate reaction 174 Yb( 3 He, pγ) 176 Lu to infer the well known 175 Lu(n, γ) cross section and compare the results with the directly measured neutron-induced data. This surrogate experiment has been performed in March 2010. The experimental technique used and the first preliminary results will be presented. (authors) 6. Integral excitation functions for proton and alpha induced reactions on target elements 22 <= Z <= 28 International Nuclear Information System (INIS) Brinkmann, G. 1979-01-01 In the framework of a systematic study which is also important for certain cosmological questions a series of integral excitation functions of p- and α-induced nuclear reactions on target elements 22 [de 7. Aloe vera Induced Biomimetic Assemblage of Nucleobase into Nanosized Particles Science.gov (United States) Chauhan, Arun; Zubair, Swaleha; Sherwani, Asif; Owais, Mohammad 2012-01-01 Aim Biomimetic nano-assembly formation offers a convenient and bio friendly approach to fabricate complex structures from simple components with sub-nanometer precision. Recently, biomimetic (employing microorganism/plants) synthesis of metal and inorganic materials nano-particles has emerged as a simple and viable strategy. In the present study, we have extended biological synthesis of nano-particles to organic molecules, namely the anticancer agent 5-fluorouracil (5-FU), using Aloe vera leaf extract. Methodology The 5-FU nano- particles synthesized by using Aloe vera leaf extract were characterized by UV, FT-IR and fluorescence spectroscopic techniques. The size and shape of the synthesized nanoparticles were determined by TEM, while crystalline nature of 5-FU particles was established by X-ray diffraction study. The cytotoxic effects of 5-FU nanoparticles were assessed against HT-29 and Caco-2 (human adenocarcinoma colorectal) cell lines. Results Transmission electron microscopy and atomic force microscopic techniques confirmed nano-size of the synthesized particles. 
Importantly, the nano-assembled 5-FU retained its anticancer action against various cancerous cell lines. Conclusion In the present study, we have explored the potential of biomimetic synthesis of nanoparticles employing organic molecules with the hope that such developments will be helpful to introduce novel nano-particle formulations that will not only be more effective but would also be devoid of nano-particle associated putative toxicity constraints. PMID:22403622 8. Fragment mass distribution of proton-induced spallation reaction with intermediate energy International Nuclear Information System (INIS) Fan Sheng; Ye Yanlin; Xu Chuncheng; Chen Tao; Sobolevsky, N.M. 2000-01-01 The test of part benchmark of SHIELD code is finished. The fragment cross section and mass distribution and excitation function of the residual nuclei from proton-induced spallation reaction on thin Pb target with intermediate energy have been calculated by SHIELD code. And the results are in good agreement with measured data. The fragment mass distribution of the residual nuclei from proton-induced spallation reaction on thick Pb target with incident energy 1.6 GeV have been simulated 9. Particle trapping induced by the interplay between coherence and decoherence International Nuclear Information System (INIS) Yi Sangyong; Choi, Mahn-Soo; Kim, Sang Wook 2009-01-01 We propose a novel scheme to trap a particle based on a delicate interplay between coherence and decoherence. If the decoherence occurs as a particle is located in the scattering region and subsequently the appropriate destructive interference takes place, the particle can be trapped in the scattering area. We consider two possible experimental realizations of such trapping: a ring attached to a single lead and a ring attached to two leads. Our scheme has nothing to do with a quasi-bound state of the system, but has a close analogy with the weak localization phenomena in disordered conductors. 10. Development of a Reference Database for Particle-Induced Gamma-ray Emission spectroscopy Energy Technology Data Exchange (ETDEWEB) Dimitriou, P., E-mail: [email protected] [International Atomic Energy Agency, Wagramerstrasse 5, A-1400 Vienna (Austria); Becker, H.-W. [Ruhr Universität Bochum, Gebäude NT05/130, Postfach 102148, Bochum 44721 (Germany); Bogdanović-Radović, I. [Department of Experimental Physics, Institute Rudjer Boskovic, Bijenicka Cesta 54, 10000 Zagreb (Croatia); Chiari, M. [Istituto Nazionale di Fisica Nucleare, Via Sansone 1, Sesto Fiorentino, 50019 Firenze (Italy); Goncharov, A. [Kharkov Institute of Physics and Technology, National Science Center, Akademicheskaya Str.1, Kharkov 61108 (Ukraine); Jesus, A.P. [Departamento de Física, Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa (Portugal); Kakuee, O. [Nuclear Science and Technology Research Institute, End of North Karegar Ave., PO Box 14395-836, Tehran (Iran, Islamic Republic of); Kiss, A.Z. [Institute of Nuclear Research (ATOMKI), Bem ter 18/c, PO Box 51, 4001 Debrecen (Hungary); Lagoyannis, A. [National Center of Scientific Research “Demokritos”, Agia Paraskevi, P.O. Box 60228, 15310 Athens (Greece); Räisänen, J. [Division of Materials Physics, Department of Physics, University of Helsinki, PO Box 43, 00014 University of Helsinki (Finland); Strivay, D. [Institut de Physique Nucleaire, Atomique et de Spectroscopie, Universite de Liège, Sart Tilman, B15 4000 Liège (Belgium); Zucchiatti, A. 
[Centro de Micro Análisis de Materiales, Universidad Autónoma de Madrid, Faraday 3, Madrid 28049 (Spain) 2016-03-15 Particle-Induced Gamma-ray Emission (PIGE) is a powerful analytical technique that exploits the interactions of rapid charged particles with nuclei located near a sample surface to determine the composition and structure of the surface regions of solids by measurement of characteristic prompt γ rays. The potential for depth profiling of this technique has long been recognized, however, the implementation has been limited owing to insufficient knowledge of the physical data and lack of suitable user-friendly computer codes for the applications. Although a considerable body of published data exists in the nuclear physics literature for nuclear reaction cross sections with γ rays in the exit channel, there is no up-to-date, comprehensive compilation specifically dedicated to IBA applications. A number of PIGE cross-section data had already been uploaded to the Ion Beam Analysis Nuclear Data Library (IBANDL) ( (http://www-nds.iaea.org/ibandl)) by members of the IBA community by 2011, however a preliminary survey of this body of unevaluated experimental data has revealed numerous discrepancies beyond the uncertainty limits reported by the authors. Using the resources and coordination provided by the IAEA, a concerted effort to improve the situation was made within the Coordinated Research Project on the Development of a Reference Database for PIGE spectroscopy, from 2011 to 2015. The aim of the CRP was to create a data library for Ion Beam Analysis that contains reliable and usable data on charged particle γ-ray emission cross sections that would be made freely available to the user community. As the CRP has reached its completion, we shall present its main achievements, including the results of nuclear cross-section evaluations and the development of a computer code that will become available to the public allowing for the implementation of a standardless PIGE technique. 11. HLA-A*3101 and carbamazepine-induced hypersensitivity reactions in Europeans. LENUS (Irish Health Repository) McCormack, Mark 2011-03-24 Carbamazepine causes various forms of hypersensitivity reactions, ranging from maculopapular exanthema to severe blistering reactions. The HLA-B*1502 allele has been shown to be strongly correlated with carbamazepine-induced Stevens-Johnson syndrome and toxic epidermal necrolysis (SJS-TEN) in the Han Chinese and other Asian populations but not in European populations. 12. 32S-induced reactions at 10 MeV/u bombarding energy International Nuclear Information System (INIS) Betz, J.; Graef, H.; Novotny, R.; Pelte, D.; Winkler, U. 1983-01-01 The deep-inelastic processes of the reactions 32 S+ 28 Si, sup(nat)S, 40 Ca, 58 Ni, 74 Ge are studied at 10 MeV/u bombarding energy employing a kinematical coincidence spectrometer. From the measured energies, momenta, masses and atomic numbers of two heavy fragments the corresponding parameters for the unobserved reaction products and the reaction Q-values are deduced. It is found that the reactions generally show the pattern of a normal deep-inelastic process which is followed by the evaporation of several light particles. But with much less intensities other processes also seem to occur: three-fragment excit channels and incomplete energy damping which is correlated with the emission of a few light particles of high momenta. (orig.) 13. 
Ion-beam-induced reactions in metal-thin-film-/BP system International Nuclear Information System (INIS) Kobayashi, N.; Kumashiro, Y.; Revesz, P.; Mayer, J.W. 1989-01-01 Ion-beam-induced reactions in Ni thin films on BP(100) have been investigated and compared with the results of the thermal reaction. The full reaction of Ni layer with BP induced by energetic heavy ion bombardments (600 keV Xe) was observed at 200degC and the formation of the crystalline phase corresponding to a composition of Ni 4 BP was observed. Amorphous layer with the same composition was formed by the bombardments below RT. For thermally annealed samples the reaction of the Ni layer on BP started at temperatures between 350degC and 400degC and full reaction was observed at 450degC. Metal-rich ternary phase or mixed binary phase is thought to be the first crystalline phase formed both in the ion-beam-induced and in the thermally induced reactions. The crystalline phase has the same composition and X-ray diffraction pattern both for ion-beam-induced and thermal reactions. Linear dependence of the reacted thickness on the ion fluence was also observed. The authors would like to express their sincere gratitude to Jian Li and Shi-Qing Wang for X-ray diffraction measurements at Cornell University. One of the authors (N.K.) acknowledge the Agency of Science and Technology of Japan for the financial support of his stay at Cornell. We also acknowledge Dr. H. Tanoue at ETL for his help in ion bombardment experiments. (author) 14. Neutral current induced reactions in the Gargamelle experiment International Nuclear Information System (INIS) Doninck, W. van 1976-07-01 In the heavy liquid bubble chamber Gargamelle at CERN, 3 candidates are found for the pure leptonic neutral current process anti upsilonsub(μ)e - → anti upsilon sub(μ)e - . For the inclusive semileptonic reactions upsilon N → μ - X and anti upsilon N → μ + X the ratios of the neutral current to the charged current cross sections are measured to be Rsub(upsilon) = 0.25 +- 0.04 and Rsub(anti upsilon) = 0.56 +- 0.09. These inclusive results differ from the prediction of parity conserving models by more than 3 standard deviations. All three reactions are compatible with the Weinberg-Salam-model and a value of sin 2 THETAsub(ω) near 0.3. (BJ) [de 15. Fission dynamics of superheavy nuclei formed in uranium induced reactions International Nuclear Information System (INIS) Gurjit Kaur; Sandhu, Kirandeep; Sharma, Manoj K. 2017-01-01 The compound nuclear system follows symmetric fission if the competing processes such as quasi-elastic, deep inelastic, quasi-fission etc are absent. The contribution of quasi fission events towards the fusion-fission mechanism depends on the entrance channel asymmetry of reaction partners, deformations and orientations of colliding nuclei beside the dependence on energy and angular momentum. Usually the 209 Bi and 208 Pb targets are opted for the production of superheavy nuclei with Z CN =104-113. The nuclei in same mass/charge range can also be synthesized using actinide targets + light projectiles (i.e. asymmetric reaction partners) via hot fusion interactions. These actinide targets are prolate deformed which prefer the compact configurations at above barrier energies, indicating the occurrence of symmetric fission events. Here an attempt is made to address the dynamics of light superheavy system (Z CN =104-106), formed via hot fusion interactions involving actinide targets 16. 
Quid-Induced Lichenoid Reactions: A Prevalence Study Directory of Open Access Journals (Sweden) Vishal Dang 2011-01-01 Full Text Available White lesions of the oral mucosa are of concern to the dental surgeon in view of the fact that some of these may be potentially malignant. Oral lichen planus (OLP) and oral lichenoid reactions (OLR) share similar clinical appearances but need to be carefully distinguished because of their different etiologies and clinical behaviour. This study screened a population of 5,017 in a house-to-house field survey for tobacco use and investigated the prevalence of oral lichenoid reactions among the 98 quid users. Six subjects with clinical, or clinical and histopathological, criteria compatible with the diagnosis of OLR were identified. All these subjects were users of 'Gutka', a unique chewable variant of tobacco quid containing areca nut and catechu. Statistical analysis revealed a significant association between quid habit and lesion occurrence (p < 0.005). 17. Depletion-induced biaxial nematic states of boardlike particles International Nuclear Information System (INIS) Belli, S; Van Roij, R; Dijkstra, M 2012-01-01 With the aim of investigating the stability conditions of biaxial nematic liquid crystals, we study the effect of adding a non-adsorbing ideal depletant on the phase behavior of colloidal hard boardlike particles. We take into account the presence of the depletant by introducing an effective depletion attraction between a pair of boardlike particles. At fixed depletant fugacity, the stable liquid-crystal phase is determined through a mean-field theory with restricted orientations. Interestingly, we predict that for slightly elongated boardlike particles a critical depletant density exists, where the system undergoes a direct transition from an isotropic liquid to a biaxial nematic phase. As a consequence, by tuning the depletant density, an easy experimental control parameter, one can stabilize states of high biaxial nematic order even when these states are unstable for pure systems of boardlike particles. (paper) 18. Multifragment emission in light-ion induced reactions International Nuclear Information System (INIS) Pollacco, E.C.; Volant, C.; Dayras, R.; Legrain, R.; Cassagnou, Y.; Norbeck, E. 1991-01-01 Multifragment events for IMFs (3 ≤ Z ≤ 12) with multiplicity up to four have been observed in the reaction of 0.90 and 3.6 GeV 3 He ions with nat Ag nuclei. Events are detected in which IMFs account for up to 75% of the total charge of the system and extend up to total kinetic energies of 400 MeV. Fragment energy spectra and angular distributions are found to be dependent on event multiplicity 19. pi-pi correlations in photon-induced reactions NARCIS (Netherlands) Messchendorp, J 2003-01-01 Differential cross sections of the reactions A(gamma,pi(0) pi(0)) and A(gamma, pi(0) pi(+) + pi(0) pi(-)) with A=H-1, C-12, and Pb-nat are presented. A significant nuclear-mass dependence of the pipi invariant-mass distribution is found in the pi(0) pi(0) channel. The dependence is not observed in 20. Robotic reactions: Delay-induced patterns in autonomous vehicle systems Science.gov (United States) Orosz, Gábor; Moehlis, Jeff; Bullo, Francesco 2010-02-01 Fundamental design principles are presented for vehicle systems governed by autonomous cruise control devices.
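As context for the delay-differential analysis summarized in the remainder of this record, the sketch below simulates a delayed optimal-velocity car-following model of the same general kind; the velocity function V, the gain alpha, the delay tau and every parameter value are illustrative assumptions, not the equations or numbers of the cited paper.

```python
import numpy as np

def V(h, v_max=30.0, h_stop=5.0, h_go=35.0):
    """Optimal velocity as a smooth, saturating function of headway h [m] (illustrative)."""
    s = np.clip((h - h_stop) / (h_go - h_stop), 0.0, 1.0)
    return v_max * s * s * (3.0 - 2.0 * s)

def simulate(n=20, L=700.0, alpha=0.8, tau=0.6, dt=0.01, t_end=60.0):
    """Euler integration of dv_i/dt = alpha*(V(h_i(t-tau)) - v_i(t-tau)) on a ring road."""
    d = int(round(tau / dt))                       # reaction delay in time steps
    x = np.linspace(0.0, L, n, endpoint=False)     # car positions
    v = np.full(n, V(L / n))                       # start near uniform flow
    v[0] += 1.0                                    # perturb one vehicle
    headway = (np.roll(x, -1) - x) % L
    hist = [(headway.copy(), v.copy())] * (d + 1)  # constant pre-history over [-tau, 0]
    for _ in range(int(t_end / dt)):
        h_del, v_del = hist[0]                     # state delayed by roughly tau
        a = alpha * (V(h_del) - v_del)             # delayed optimal-velocity law
        v = np.maximum(v + a * dt, 0.0)
        x = (x + v * dt) % L
        hist.append((((np.roll(x, -1) - x) % L), v.copy()))
        hist.pop(0)
    return v

if __name__ == "__main__":
    v = simulate()
    print("velocity spread after 60 s: %.3f m/s" % (v.max() - v.min()))
```

Increasing tau or lowering alpha in this sketch lets the initial perturbation grow into stop-and-go waves rather than decay, which is the kind of delay/gain tradeoff the record describes.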
By analyzing the corresponding delay differential equations, it is shown that for any car-following model short-wavelength oscillations can appear due to robotic reaction times, and that there are tradeoffs between the time delay and the control gains. The analytical findings are demonstrated on an optimal velocity model using numerical continuation and numerical simulation. 1. Cold fusion in symmetric 90Zr induced reactions International Nuclear Information System (INIS) Keller, J.G. 1985-02-01 At the velocity filter SHIP of the Society for Heavy Ion Research in Darmstadt cross sections for evaporation-residue-nucleus formation in the reactions 90 Zr+ 89 Y, sup(90,92,96)Zr, 94 Mo were measured. In four of the reactions leading to the compound nuclei 179 Au, 180 Hg, 182 Hg, and 184 Pb for the first time in reactions of two heavy partners with mass numbers >20 radiative capture, i.e. deexcitation only by emission of γ radiation, was observed. A comparison of the measured cross sections for radiative capture with evaporation calculations leads to the final conclusion that either the γ-strength in the different compound nuclei is very different, or that the energy or angular momentum dependence of the level-density is wrongly described by the Fermi gas model at energies between 5 and 20 MeV. From the cross sections for evaporation-residue-nucleus formation fusion probabilities for central collisions were derived. The fusion probabilities show a strong dependence of the sub-barrier fusion from the nuclear structure of the contributing reaction partners. The slope of the fusion probability below the classical fusion barrier cannot be consistently described even by newer models. Below the lowest fusion barrier the fusion probability decreases with decreasing energy remarkably faster that predicted by a WKB calculation. This indicates that either the shape of the barrier is different from that predicted by the potentials, or that the mass dependence of the tunnel effect is not correctly described by the WKB calculation. (orig.) [de 2. Robotic reactions: delay-induced patterns in autonomous vehicle systems. Science.gov (United States) Orosz, Gábor; Moehlis, Jeff; Bullo, Francesco 2010-02-01 Fundamental design principles are presented for vehicle systems governed by autonomous cruise control devices. By analyzing the corresponding delay differential equations, it is shown that for any car-following model short-wavelength oscillations can appear due to robotic reaction times, and that there are tradeoffs between the time delay and the control gains. The analytical findings are demonstrated on an optimal velocity model using numerical continuation and numerical simulation. 3. Fluctuation-Induced Pattern Formation in a Surface Reaction DEFF Research Database (Denmark) Starke, Jens; Reichert, Christian; Eiswirth, Markus 2006-01-01 Spontaneous nucleation, pulse formation, and propagation failure have been observed experimentally in CO oxidation on Pt(110) at intermediate pressures (\\approx 10^{-2}mbar). This phenomenon can be reproduced with a stochastic model which includes temperature effects. Nucleation occurs randomly...... due to fluctuations in the reaction processes, whereas the subsequent damping out essentially follows the deterministic path. Conditions for the occurence of stochastic effects in the pattern formation during CO oxidation on Pt are discussed.... 4. Aging induced changes on NEXAFS fingerprints in individual combustion particles Directory of Open Access Journals (Sweden) V. 
Zelenay 2011-11-01 Full Text Available Soot particles can significantly influence the Earth's climate by absorbing and scattering solar radiation as well as by acting as cloud condensation nuclei. However, despite their environmental (as well as economic and political importance, the way these properties are affected by atmospheric processing of the combustion exhaust gases is still a subject of discussion. In this work, individual soot particles emitted from two different vehicles, a EURO 2 transporter, a EURO 3 passenger car, and a wood stove were investigated on a single-particle basis. The emitted exhaust, including the particulate and the gas phase, was processed in a smog chamber with artificial solar radiation. Single particle specimens of both unprocessed and aged soot were characterized using near edge X-ray absorption fine structure spectroscopy (NEXAFS and scanning electron microscopy. Comparison of NEXAFS spectra from the unprocessed particles and those resulting from exhaust photooxidation in the chamber revealed changes in the carbon functional group content. For the wood stove emissions, these changes were minor, related to the relatively mild oxidation conditions. For the EURO 2 transporter emissions, the most apparent change was that of carboxylic carbon from oxidized organic compounds condensing on the primary soot particles. For the EURO 3 car emissions oxidation of primary soot particles upon photochemical aging has likely contributed as well. Overall, the changes in the NEXAFS fingerprints were in qualitative agreement with data from an aerosol mass spectrometer. Furthermore, by taking full advantage of our in situ microreactor concept, we show that the soot particles from all three combustion sources changed their ability to take up water under humid conditions upon photochemical aging of the exhaust. Due to the selectivity and sensitivity of the NEXAFS technique for the water mass, also small amounts of water taken up into the internal voids of agglomerated 5. Shock-induced fast reactions of zinc nanoparticles and RDX Energy Technology Data Exchange (ETDEWEB) Xue Mian; Wu Jinghe; Ye Song; Yang Xiangdong [Institute of Atomic and Molecular Physics, Sichuan University, Chengdu 610065 (China); Hu Dong; Wang Yanping; Zhu Wenjun; Li Chengbing [National Key Laboratory for Shock Wave and Detonation Physics Research, Institute of Fluid Physics, CAEP, Mianyang 621900 (China)], E-mail: [email protected] 2008-02-21 Fast reactions of zinc nanoparticles and RDX were investigated in normal incident shock waves. The emergence time and emission spectra intensity of partial products such as NO{sub 2}, H, C{sub 2}, O, CO, CH{sub 2}O, CO{sub 2}, H{sub 2}O and ZnO were observed by a TDS5054 oscilloscope. The results indicate that NO{sub 2} appears first in each experiment, which is in agreement with the theoretical results. The addition of zinc nanoparticles to RDX can not only shorten the ignition delay time by 20% but also double the shockwave diffusion velocity to 2180 {+-} 50 m s{sup -1} and triple the temperature to 2020 {+-} 60 K. The emergence time of products shortens by around 10-40% and the emission spectra intensity of H{sub 2}O and CH{sub 2}O rises by about three times and one times, respectively. 
CO{sub 2}, H{sub 2}O and O{sub 2} in various concentrations were introduced into the zinc-RDX reaction, respectively, which indicate that O{sub 2} made the ignition delay time shorten by over 30%, the effect of H{sub 2}O was not prominent while CO{sub 2} made the ignition delay time lag by around 30%. The results indicate that the Zn-O{sub 2} reaction mainly occurs in O{sub 2}, CO{sub 2} and H{sub 2}O. 6. Shock-induced fast reactions of zinc nanoparticles and RDX International Nuclear Information System (INIS) Xue Mian; Wu Jinghe; Ye Song; Yang Xiangdong; Hu Dong; Wang Yanping; Zhu Wenjun; Li Chengbing 2008-01-01 Fast reactions of zinc nanoparticles and RDX were investigated in normal incident shock waves. The emergence time and emission spectra intensity of partial products such as NO 2 , H, C 2 , O, CO, CH 2 O, CO 2 , H 2 O and ZnO were observed by a TDS5054 oscilloscope. The results indicate that NO 2 appears first in each experiment, which is in agreement with the theoretical results. The addition of zinc nanoparticles to RDX can not only shorten the ignition delay time by 20% but also double the shockwave diffusion velocity to 2180 ± 50 m s -1 and triple the temperature to 2020 ± 60 K. The emergence time of products shortens by around 10-40% and the emission spectra intensity of H 2 O and CH 2 O rises by about three times and one times, respectively. CO 2 , H 2 O and O 2 in various concentrations were introduced into the zinc-RDX reaction, respectively, which indicate that O 2 made the ignition delay time shorten by over 30%, the effect of H 2 O was not prominent while CO 2 made the ignition delay time lag by around 30%. The results indicate that the Zn-O 2 reaction mainly occurs in O 2 , CO 2 and H 2 O 7. Production of new particles in e+e- reactions at LEP I energies International Nuclear Information System (INIS) Dobado, A. 1987-01-01 The possibility of lep I of producing new particles is considered. We arrive at the general conclusion that lep I may make it possible to complete the detection of the particles that make up the ''standard model'' and, in addition, to discover some supersymmetric particle or to rule out most of the supersymmetric models. (author) 8. Composite-particle emission in the reaction p+Au at 2.5 GeV Energy Technology Data Exchange (ETDEWEB) Letourneau, A.; Bohm, A.; Galin, J.; Lott, B.; Peghaire, A. [Grand Accelerateur National d' Ions Lourds (GANIL), 14 - Caen (France); Enke, M.; Herbach, C.M.; Hilscher, D.; Jahnke, U.; Tishchenko, V. [Hahn Meitner Institute, Berlin (Germany); Filges, D.; Goldenbaum, F.; Neef, R.D.; Nunighoff, K.; Paul, N.; Sterzenbach, G. [Institut fur Kernphysik, Julich (Germany); Pienkowski, L. [Warsaw Universitaire, Heavy Ion Lab. (Poland); Toke, J.; Schroder, U. [Rochester, University, New York (United States) 2002-06-01 The emission of composite-particles is studied in the reaction p+Au at E{sub p} = 2.5 GeV, in addition to neutrons and protons. Most particle energy spectra feature an evaporation spectrum superimposed on an exponential high-energy, non-statistical component. Comparisons are first made with the predictions by a two-stage hybrid reaction model, where an intra-nuclear cascade (INC) simulation is followed by a statistical evaporation process. The high-energy proton component is identified as product of the fast pre-equilibrium INC, since it is rather well reproduced by the INCL2.0 intra-nuclear cascade calculations simulating the first reaction stage. 
The low-energy spectral components are well understood in terms of sequential particle evaporation from the hot nuclear target remnants of the fast INC. Evaporation is modeled using the statistical code GEMINI. Implementation of a simple coalescence model in the INC code can provide a reasonable description of the multiplicities of high-energy composite particles such as {sup 2-3}H and {sup 3}He. However, this is done at the expense of {sup 1}H which then fails to reproduce the experimental energy spectra. (authors) 9. Characterization of Inherent Particles and Mechanism of Thermal Stress Induced Particle Formation in HSV-2 Viral Vaccine Candidate. Science.gov (United States) Li, Lillian; Kirkitadze, Marina; Bhandal, Kamaljit; Roque, Cristopher; Yang, Eric; Carpick, Bruce; Rahman, Nausheen 2017-11-10 Vaccine formulations may contain visible and/or subvisible particles, which can vary in both size and morphology. Extrinsic particles, which are particles not part of the product such as foreign contaminants, are generally considered undesirable and should be eliminated or controlled in injectable products. However, biological products, in particular vaccines, may also contain particles that are inherent to the product. Here we focus on the characterization of visible and subvisible particles in a live, replication-deficient viral vaccine candidate against HSV genital herpes in an early developmental stage. HSV-2 viral vaccine was characterized using a panel of analytical methods, including Fourier transform infrared spectroscopy (FTIR), sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE), Western blot, liquid chromatography-mass spectrometry (LC-MS), light microscopy, transmission electron microscopy (TEM), micro-flow imaging (MFI), dynamic light scattering (DLS), right angle light scattering (RALS), and intrinsic fluorescence. Particles in HSV-2 vaccine typically ranged from hundreds of nanometers to hundreds of micrometers in size and were determined to be inherent to the product. The infectious titer did not correlate with any trend in subvisible particle concentration and size distribution as shown by DLS, MFI, and TEM under stressed conditions. This suggested that particle changes in the submicron range were related to HSV-2 virion structure and had direct impact on biological activity. It was also observed that subvisible and visible particles could induce aggregation in the viral product. The temperature induced aggregation was observed by RALS, intrinsic fluorescence, and DLS. The increase of subvisible particle size with temperature could be fitted to a two-step thermokinetic model. Visible and subvisible particles were found to be inherent to the HSV-2 viral vaccine product. The mechanism of protein aggregation was discussed and a two 10. Role of the reaction of stabilized Criegee intermediates with peroxy radicals in particle formation and growth in air. Science.gov (United States) Zhao, Yue; Wingen, Lisa M; Perraud, Véronique; Greaves, John; Finlayson-Pitts, Barbara J 2015-05-21 Ozonolysis of alkenes is an important source of secondary organic aerosol (SOA) in the atmosphere. However, the mechanisms by which stabilized Criegee intermediates (SCI) react to form and grow the particles, and in particular the contributions from oligomers, are not well understood. In this study, ozonolysis of trans-3-hexene (C6H12), as a proxy for small alkenes, was investigated with an emphasis on the mechanisms of particle formation and growth. 
Ozonolysis experiments were carried out both in static Teflon chambers (18-20 min reaction times) and in a glass flow reactor (24 s reaction time) in the absence and presence of OH or SCI scavengers, and under different relative humidity (RH) conditions. The chemical composition of polydisperse and size-selected SOA particles was probed using different mass spectrometric techniques and infrared spectroscopy. Oligomers having SCI as the chain unit are found to be the dominant components of such SOA particles. The formation mechanism for these oligomers suggested by our results follows the sequential addition of SCI to organic peroxy (RO2) radicals, in agreement with previous studies by Moortgat and coworkers. Smaller particles are shown to have a relatively greater contribution from longer oligomers. Higher O/C ratios are observed in smaller particles and are similar to those of oligomers resulting from RO2 + nSCI, supporting a significant role for longer oligomers in particle nucleation and early growth. Under atmospherically relevant RH of 30-80%, water vapor suppresses oligomer formation through scavenging SCI, but also enhances particle nucleation. Under humid conditions, or in the presence of formic or hydrochloric acid as SCI scavengers, peroxyhemiacetals are formed by the acid-catalyzed particle phase reaction between oligomers from RO2 + nSCI and a trans-3-hexene derived carbonyl product. In contrast to the ozonolysis of trans-3-hexene, oligomerization involving RO2 + nSCI does not appear to be prevalent in the 11. Excitation functions for some Ne induced reactions with Holmium: incomplete fusion vs complete fusion International Nuclear Information System (INIS) Agarwal, Avinash; Kumar, Munish; Sharma, Anjali; Rizvi, I.A.; Ahamad, Tauseef; Ghugre, S.S.; Sinha, A.K.; Chaubey, A.K. 2010-01-01 Reactions induced by 20 Ne are expected to be considerably more complex than those of 12 C, and 16 O. As a part of the ongoing program to understand CF and ICF reaction mechanisms, it is of great interest to see whether the same experimental technique yield similarly valuable information for 20 Ne induced reactions. In this present work an attempt has been made to measure the excitation functions for fifteen evaporation residues (ERs) identified in the interaction of 20 Ne + 165 Ho system in the energy range 4 -7 MeV/A 12. Air bubble-induced detachment of polystyrene particles with different sizes from collector surfaces in a parallel plate flow chamber NARCIS (Netherlands) Gomez-Suarez, C; van der Mei, HC; Busscher, HJ 2001-01-01 Particle size was found to be an important factor in air bubble-induced detachment of colloidal particles from collector surfaces in a parallel plate flow chamber and generally polystyrene particles with a diameter of 806 nm detached less than particles with a diameter of 1400 nm. Particle 13. Parametrization of angular correlation function of final particles and gamma quanta at the gamma quanta detection out off reaction plane International Nuclear Information System (INIS) Zelenskaya, N.S.; Teplov, I.B. 1980-01-01 A possibility for determining all the elements of a density matrix for reactions and inelastic particle scattering with the production of even-even nucleus in the 2 + state is analyzed on the base of studying angular correlation function in different planes of gamma quantum escape. Angular correlations are considered in the coordinate system, where an incident beam of particles is directed along the Z axis, and the reaction plane coincides with the xZ plane. 
Given is the summary of the number of angular correlation function parameters and the number of Asub(kx) spin-tensor components (or amplitude combinations) which these parameters depend on. Analytical expressions for the function of angular correlation of finite particles and gamma quanta have been obtained. It is shown, that the angular correlation function shape and, correspondingly, reliability of determining its parameters from the experiment in different planes differ. The angular correlation function of finite particles and gamma quanta for any reaction with the production of even-even nuclei in the 2 + state irrespective of the reaction mechanism is defined by five parameters. Dependence of the parameters on azimuthal angle of gamma quantum escape is determined analytically. Orientation of gamma quantum registration plane in relation to the reaction plane is determined from the azimuthal angle phisub(γ). For complete reduction of the density matrix of an arbitrary reaction it is necessary to measure the function of angular correlation of finite particles and gamma quanta emitted by a finite nucleus during the transition from the 2 + state to the 0 + main state in two planes one of which can be a plane with phisub(γ)=45 deg, and the other has not to coincide with phisub(γ)=90 deg. For inelastic scattering of spinless particles the density matrix reduction is related to measuring the angular correlation function in two planes of gamma quanta escape, where phi sub(γ) not equal to 0 phi sub(γ0 deg. The 14. Production of slow particle in 1.7 AGeV 84Kr induced emulsion interaction International Nuclear Information System (INIS) Li Huiling; Zhang Donghai; Li Xueqin; Jia Huiming 2008-01-01 The production of slow particle in 1.7 AGeV 84 Kr induced emulsion interaction was studied. The experimental results show that the average multiplicity of black, grey and heavily ionized track particle increases with the increase of impact centrality and target size. The average multiplicity of grey track particle and heavily ionized track particle increases with the increase of the number of black track particle. The average multiplicity of heavily ionized track particle increases with the increase of the number of grey track particle, but average multiplicity of black track particle increases with the increase of the number of grey track particle and then saturated. The average multiplicity of grey track particle increases with the increase of the number of heavily ionized track particle, but average multiplicity of black track particle increases with the increase of the number of heavily ionized track particle and then saturated. Those experimental results can be well explained by using the nuclear impact geometry model. (authors) 15. Cavitation-induced reactions in high-pressure carbon dioxide NARCIS (Netherlands) Kuijpers, M.W.A.; van Eck, D.; Kemmere, M.F.; Keurentjes, J.T.F. 2002-01-01 The feasibility of ultrasound-induced in situ radical formation in liquid carbon dioxide was demonstrated. The required threshold pressure for cavitation could be exceeded at a relatively low acoustic intensity, as the high vapor pressure of CO2 counteracts the hydrostatic pressure. With the use of 16. Characterization of nuclear physics targets using Rutherford backscattering and particle induced X-ray emission International Nuclear Information System (INIS) Rubehn, T.; Wozniak, G.J.; Phair, L.; Moretto, L.G.; Yu, K.M. 
1997-01-01 Rutherford backscattering and particle induced X-ray emission have been utilized to precisely characterize targets used in nuclear fission experiments. The method allows for a fast and non-destructive determination of target thickness, homogeneity and element composition. (orig.) 17. Mass and charge distributions in chlorine-induced nuclear reactions International Nuclear Information System (INIS) Marchetti, A.A. 1991-01-01 Projectile-like fragments were detected and characterized in terms of A, Z, and energy for the reactions 37 Cl on 40 Ca and 209 Bi at E/A = 7.3 MeV, and 35 Cl, on 209 Bi at E/A = 15 MeV, at angles close to the grazing angle. Mass and charge distributions were generated in the N-Z plane as a function of energy loss, and have been parameterized in terms of their centroids, variances, and coefficients of correlation. Due to experimental problems, the mass resolution corresponding to the 31 Cl on 209 Bi reaction was very poor. This prompted the study and application of a deconvolution technique for peak enhancement. The drifts of the charge and mass centroids for the system 37 Cl on 40 Ca are consistent with a process of mass and charge equilibration mediated by nucleon exchange between the two partners, followed by evaporation. The asymmetric systems show a strong drift towards larger asymmetry, with the production of neutron-rich nuclei. It was concluded that this is indicative of a net transfer of protons from the light to the heavy partner, and a net flow of neutrons in the opposite direction. The variances for all systems increase with energy loss, as it would be expected from a nucleon exchange mechanism; however, the variances for the reaction 37 Cl on 40 Ca are higher than those expected from that mechanism. The coefficients of correlation indicate that the transfer of nucleons between projectile and target is correlated. The results were compared to the predictions of two current models based on a stochastic nucleon exchange mechanism. In general, the comparisons between experimental and predicted variances support this mechanism; however, the need for more realistic driving forces in the model calculations is indicated by the disagreement between predicted and experimental centroids 18. Dynamics of GeV light-ion-induced reactions International Nuclear Information System (INIS) Kwiatkowski, K.; Bracken, D.S.; Foxford, E.R.; Ginger, D.S.; Hsi, W.C.; Morley, K.B.; Viola, V.E.; Wang, G.; Korteling, R.G.; Legrain, R. 1996-09-01 Recent results from studies of the 1.8 - 4.8 GeV 3 He + nat Ag, 197 Au reactions at LNS with the ISiS detector array have shown evidence for a saturation in deposition energy and multifragmentation from a low-density source. The collision dynamics have been examined in the context of intranuclear cascade and BUU models, while breakup phenomena have been compared with EES and SMM models. Fragment-fragment correlations and isotope ratios are also investigated. (K.A.) 19. Averaged currents induced by alpha particles in an InSb compound semiconductor detector International Nuclear Information System (INIS) Kanno, Ikuo; Hishiki, Shigeomi; Kogetsu, Yoshitaka; Nakamura, Tatsuya; Katagiri, Masaki 2008-01-01 Very fast pulses due to alpha particle incidence were observed by an undoped-type InSb Schottky detector. This InSb detector was operated without applying bias voltage and its depletion layer thickness was less than the range of alpha particles. 
The averaged current induced by alpha particles was analyzed as a function of operating temperature and was shown to be proportional to the Hall mobility of InSb. (author) 20. Radial diffusion of toroidally trapped particles induced by lower hybrid and fast waves International Nuclear Information System (INIS) Krlin, L. 1992-10-01 The interaction of RF field with toroidally trapped particles (bananas) can cause their intrinsic stochastically diffusion both in the configuration and velocity space. In RF heating and/or current drive regimes, RF field can interact with plasma particles and with thermonuclear alpha particles. The aim of this contribution is to give some analytical estimates of induced radial diffusion of alphas and of ions. (author) 1. Study of secondary particles produced in central 12C-nucleus reactions at 4.5 AGeV International Nuclear Information System (INIS) Saleem Khan, M.; Shukla, Praveen Prakash; Khushnood, H. 2014-01-01 Study of secondary charged particles produced in central relativistic heavy ion interactions is attracting a great deal of attention during the recent years. It may be due to the fact that the study of totally disintegrated events produced in heavy ion collisions in which almost the whole projectile takes part in the reactions. On the basis of the study of the totally disintegrated events of Ag and Br nuclei caused by 4.5 GeV per nucleon carbon projectile, we may conclude that the distribution of charged shower particles produced in forward hemisphere is flatter than the distribution in the backward hemisphere 2. Nucleon transfer reactions to rotational states induced by 206,208PB projectiles International Nuclear Information System (INIS) Wollersheim, H.J.; DeBoer, F.W.N.; Emling, H.; Grein, H.; Grosse, E.; Spreng, W.; Eckert, G.; Elze, Th.W.; Stelzer, K.; Lauterbach, Ch. 1986-01-01 In a systematic study of nucleon transfer reactions accompanied by Coulomb excitation the authors bombarded 152 Sm, 160 Gd and 232 Th with 206, 208 Pb beams at incident energies close to the Coulomb barrier. Particle-gamma coincidence techniques were used to identify excited states of reaction products populated through inelastic scattering and in nucleon transfer reactions. Large cross sections were observed for one- and two-neutron pick-up from 232 Th at an incident energy of 6.4 MeV/μ. The results are analyzed in the framework of semiclassical models 3. DRIFT-INDUCED PERPENDICULAR TRANSPORT OF SOLAR ENERGETIC PARTICLES International Nuclear Information System (INIS) Marsh, M. S.; Dalla, S.; Kelly, J.; Laitinen, T. 2013-01-01 Drifts are known to play a role in galactic cosmic ray transport within the heliosphere and are a standard component of cosmic ray propagation models. However, the current paradigm of solar energetic particle (SEP) propagation holds the effects of drifts to be negligible, and they are not accounted for in most current SEP modeling efforts. We present full-orbit test particle simulations of SEP propagation in a Parker spiral interplanetary magnetic field (IMF), which demonstrate that high-energy particle drifts cause significant asymmetric propagation perpendicular to the IMF. Thus in many cases the assumption of field-aligned propagation of SEPs may not be valid. 
We show that SEP drifts have dependencies on energy, heliographic latitude, and charge-to-mass ratio that are capable of transporting energetic particles perpendicular to the field over significant distances within interplanetary space, e.g., protons of initial energy 100 MeV propagate distances across the field on the order of 1 AU, over timescales typical of a gradual SEP event. Our results demonstrate the need for current models of SEP events to include the effects of particle drift. We show that the drift is considerably stronger for heavy ion SEPs due to their larger mass-to-charge ratio. This paradigm shift has important consequences for the modeling of SEP events and is crucial to the understanding and interpretation of in situ observations 4. Reaction calorimetry for the development of ultrasound-induced polymerization processes in CO2-expanded fluids NARCIS (Netherlands) Kemmere, M.F.; Kuijpers, M.W.A.; Keurentjes, J.T.F. 2007-01-01 A strong viscosity increase upon polymn. hinders radical formation during an ultrasound-induced bulk polymn. Since CO2 acts as a strong anti-solvent for most polymers, it can be used to reduce the viscosity of the reaction mixt. In this work, a process for the ultrasound-induced polymn. in 5. Preparing poly (caprolactone) micro-particles through solvent-induced phase separation DEFF Research Database (Denmark) Li, Xiaoqiang; Kanjwal, Muzafar Ahmed; Stephansen, Karen 2012-01-01 Poly (caprolactone) (PCL) particles with the size distribution from 1 to 100 μm were prepared through solvent-induced phase separation, in which polyvinyl-alcohol (PVA) was used as the matrix-forming polymer to stabilize PCL particles. The cloud point data of PCL-acetone-water was determined... 6. The rotationally induced quadrupole pair field in the particle-rotor model International Nuclear Information System (INIS) Almberger, J. 1980-04-01 A formalism is developed which makes it possible to consider the influence of the rotationally induced quadrupole pair field and corresponding quasi-particle residual interactions within the particle-rotor model. The Y 21 pair field renormalizes both the Coriolis and the recoil interactions. (Auth.) 7. Erratum: Erratum to: Thermodynamic implications of the gravitationally induced particle creation scenario Science.gov (United States) Saha, Subhajit; Mondal, Anindita 2018-04-01 We would like to rectify an error regarding the validity of the first law of thermodynamics (FLT) on the apparent horizon of a spatially flat Friedmann-Lemaitre-Robertson-Walker (FLRW) universe for the gravitationally induced particle creation scenario with constant specific entropy and an arbitrary particle creation rate (see Sect. 3.1 of original article) 8. Surface chemical reactions induced by molecules electronically-excited in the gas DEFF Research Database (Denmark) Petrunin, Victor V. 2011-01-01 and alignment are taking place, guiding all the molecules towards the intersections with the ground state PES, where transitions to the ground state PES will occur with minimum energy dissipation. The accumulated kinetic energy may be used to overcome the chemical reaction barrier. While recombination chemical...... be readily produced. Products of chemical adsorption and/or chemical reactions induced within adsorbates are aggregated on the surface and observed by light scattering. We will demonstrate how pressure and spectral dependencies of the chemical outcomes, polarization of the light and interference of two laser...... 
beams inducing the reaction can be used to distinguish the new process we try to investigate from chemical reactions induced by photoexcitation within adsorbed molecules and/or gas phase photolysis.... 9. Study on reaction mechanism by analysis of kinetic energy spectra of light particles and formation of final products Science.gov (United States) Giardina, G.; Mandaglio, G.; Nasirov, A. K.; Anastasi, A.; Curciarello, F.; Fazio, G. 2018-05-01 The sensitivity of reaction mechanism in the formation of compound nucleus (CN) by the analysis of kinetic energy spectra of light particles and of reaction products are shown. The dependence of the P CN fusion probability of reactants and W sur survival probability of CN against fission at its deexcitation on the mass and charge symmetries in the entrance channel of heavy-ion collisions, as well as on the neutron numbers is discussed. The possibility of conducting a complex program of investigations of the complete fusion by reliable ways depends on the detailed and refined methods of experimental and theoretical analyses. 10. Nucleon induced reaction by anti-symmetrized molecular dynamics Energy Technology Data Exchange (ETDEWEB) Tosaka, Yoshiharu [Fujitsu Labs. Ltd., Kawasaki, Kanagawa (Japan); Ono, Akira; Horiuchi, Hisashi 1998-07-01 Neutron soft error is a phenomenon, for example, to change memory information of LSI from 0 to 1 by introducing a lot of charge depend on product ions by nuclear reaction between neutrons of second cosmic rays and nuclei of Si in LSI. To study the phenomena, the information such as energy, angular and mass distribution of product ions by n+{sup 28}Si reaction from 20 to 500 MeV incident energy were necessary. 180 MeV p+{sup 27}Al, having large amount of data of heavy recoil nucleus, was investigated. On AMD, A = 1,24-27 ion were produced at the initial collision (before 100 fm/c), after that the mass distribution was not changed, indicating stable excited nucleus. However, on AMD-V, the excited nuclei were destructed in course of time. A = 1,2,4,18 and 21-27 ion were produced at 120 fm/c, but almost kinds of ions less than 27 produced at 320 fm/c. After decay, difference between results of AMD and AMD-V was small and both reproduced the experimental values. AMD-V gave better result of product cross section and differential cross section. (S.Y.) 11. Optimization of laser-induced breakdown spectroscopy for coal powder analysis with different particle flow diameters Energy Technology Data Exchange (ETDEWEB) Yao, Shunchun, E-mail: [email protected] [School of Electric Power, South China University of Technology, Guangzhou, Guangdong 510640 (China); State Key Laboratory of Pulsed Power Laser Technology, Electronic Engineering Institute, Hefei 230037 (China); Xu, Jialong; Dong, Xuan; Zhang, Bo; Zheng, Jianping [School of Electric Power, South China University of Technology, Guangzhou, Guangdong 510640 (China); Lu, Jidong, E-mail: [email protected] [School of Electric Power, South China University of Technology, Guangzhou, Guangdong 510640 (China) 2015-08-01 The on-line measurement of coal is extremely useful for emission control and combustion process optimization in coal-fired plant. Laser-induced breakdown spectroscopy was employed to directly analyze coal particle flow. A set of tapered tubes were proposed for beam-focusing the coal particle flow to different diameters. 
For optimizing the measurement of coal particle flow, the characteristics of laser-induced plasma, including optical breakdown, the relative standard deviation of repeated measurement, partial breakdown spectra ratio and line intensity, were carefully analyzed. The comparison of the plasma characteristics among coal particle flow with different diameters showed that air breakdown and the random change in plasma position relative to the collection optics could significantly influence on the line intensity and the reproducibility of measurement. It is demonstrated that the tapered tube with a diameter of 5.5 mm was particularly useful to enrich the coal particles in laser focus spot as well as to reduce the influence of air breakdown and random changes of plasma in the experiment. - Highlights: • Tapered tube was designed for beam-focusing the coal particle flow as well as enriching the particles in laser focus spot. • The characteristics of laser-induced plasma of coal particle flow were investigated carefully. • An appropriate diameter of coal particle flow was proven to benefit for improving the performance of LIBS measurement. 12. Ion-induced nucleation of pure biogenic particles CERN Document Server Kirkby, Jasper; Sengupta, Kamalika; Frege, Carla; Gordon, Hamish; Williamson, Christina; Heinritzi, Martin; Simon, Mario; Yan, Chao; Almeida, João; Tröstl, Jasmin; Nieminen, Tuomo; Ortega, Ismael K; Wagner, Robert; Adamov, Alexey; Amorim, Antonio; Bernhammer, Anne-Kathrin; Bianchi, Federico; Breitenlechner, Martin; Brilke, Sophia; Chen, Xuemeng; Craven, Jill; Dias, antonio; Ehrhart, Sebastian; Flagan, Richard C; Franchin, Alessandro; Fuchs, Claudia; Guida, Roberto; Hakala, Jani; Hoyle, Christopher R; Jokinen, Tuija; Junninen, Heikki; Kangasluoma, Juha; Kim, Jaeseok; Krapf, Manuel; Kürten, andreas; Laaksonen, Ari; Lehtipalo, Katrianne; Makhmutov, Vladimir; Mathot, Serge; Molteni, Ugo; Onnela, antti; Peräkylä, Otso; Piel, Felix; Petäjä, Tuukka; Praplan, Arnaud P; Pringle, Kirsty; Rap, Alexandru; Richards, Nigel A D; Riipinen, Ilona; Rissanen, Matti P; Rondo, Linda; Sarnela, Nina; Schobesberger, Siegfried; Scott, Catherine E; Seinfeld, John H; Sipilä, Mikko; Steiner, Gerhard; Stozhkov, Yuri; Stratmann, Frank; Tomé, Antonio; Virtanen, Annele; Vogel, Alexander L; Wagner, Andrea C; Wagner, Paul E; Weingartner, Ernest; Wimmer, Daniela; Winkler, Paul M; Ye, Penglin; Zhang, Xuan; Hansel, Armin; Dommen, Josef; Donahue, Neil M; Worsnop, Douglas R; Baltensperger, Urs; Kulmala, Markku; Carslaw, Kenneth S; Curtius, Joachim 2016-01-01 Atmospheric aerosols and their effect on clouds are thought to be important for anthropogenic radiative forcing of the climate, yet remain poorly understood. Globally, around half of cloud condensation nuclei originate from nucleation of atmospheric vapours. It is thought that sulfuric acid is essential to initiate most particle formation in the atmosphere and that ions have a relatively minor role. Some laboratory studies, however, have reported organic particle formation without the intentional addition of sulfuric acid, although contamination could not be excluded. Here we present evidence for the formation of aerosol particles from highly oxidized biogenic vapours in the absence of sulfuric acid in a large chamber under atmospheric conditions. The highly oxygenated molecules (HOMs) are produced by ozonolysis of\\alpha\$-pinene. We find that ions from Galactic cosmic rays increase the nucleation rate by one to two orders of magnitude compared with neutral nucleation. 
Our experimental findings are supported...
13. NMR relaxation induced by iron oxide particles: testing theoretical models.
Science.gov (United States)
Gossuin, Y; Orlando, T; Basini, M; Henrard, D; Lascialfari, A; Mattea, C; Stapf, S; Vuong, Q L
2016-04-15
Superparamagnetic iron oxide particles find their main application as contrast agents for cellular and molecular magnetic resonance imaging. The contrast they bring is due to the shortening of the transverse relaxation time T 2 of water protons. In order to understand their influence on proton relaxation, different theoretical relaxation models have been developed, each of them presenting a certain validity domain, which depends on the particle characteristics and proton dynamics. The validation of these models is crucial since they allow for predicting the ideal particle characteristics for obtaining the best contrast but also because the fitting of T 1 experimental data by the theory constitutes an interesting tool for the characterization of the nanoparticles. In this work, T 2 of suspensions of iron oxide particles in different solvents and at different temperatures, corresponding to different proton diffusion properties, were measured and were compared to the three main theoretical models (the motional averaging regime, the static dephasing regime, and the partial refocusing model) with good qualitative agreement. However, a real quantitative agreement was not observed, probably because of the complexity of these nanoparticulate systems. The Roch theory, developed in the motional averaging regime (MAR), was also successfully used to fit T 1 nuclear magnetic relaxation dispersion (NMRD) profiles, even outside the MAR validity range, and provided a good estimate of the particle size. On the other hand, the simultaneous fitting of T 1 and T 2 NMRD profiles by the theory was impossible, and this occurrence constitutes a clear limitation of the Roch model. Finally, the theory was shown to satisfactorily fit the deuterium T 1 NMRD profile of superparamagnetic particle suspensions in heavy water.
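As a rough numerical companion to the motional averaging regime (MAR) discussed in this abstract, the sketch below evaluates the commonly quoted outer-sphere MAR expression 1/T2 ≈ (16/45)·f·τ_D·Δω_eq², with τ_D = r²/D and Δω_eq = γ·μ0·Mv/3. This is only an order-of-magnitude illustration; the formula choice and every parameter value (particle radius, magnetization, volume fraction, water diffusion coefficient) are assumptions made here, not data from the study.

```python
# Hedged sketch: order-of-magnitude T2 estimate in the motional averaging regime (MAR).
# Assumes the commonly quoted outer-sphere MAR expression 1/T2 = (16/45) * f * tau_D * dw_eq**2,
# with tau_D = r**2 / D and dw_eq = gamma * mu0 * Mv / 3 (equatorial field of the particle).
# All numbers below are illustrative assumptions, not parameters of the suspensions in the paper.
import math

GAMMA_H = 2.675e8                      # proton gyromagnetic ratio (rad s^-1 T^-1)
MU0 = 4.0e-7 * math.pi                 # vacuum permeability (T m A^-1)

def t2_mar(radius_m, Mv_A_per_m, vol_fraction, D_m2_per_s):
    """Transverse relaxation time (s) of water protons around superparamagnetic particles."""
    tau_d = radius_m**2 / D_m2_per_s              # diffusion correlation time
    dw_eq = GAMMA_H * MU0 * Mv_A_per_m / 3.0      # angular frequency shift at the particle equator
    r2 = (16.0 / 45.0) * vol_fraction * tau_d * dw_eq**2
    return 1.0 / r2

# Example with assumed values: 10 nm radius, Mv = 3e5 A/m, f = 1e-5, D = 2.3e-9 m^2/s
print(f"T2 ~ {t2_mar(10e-9, 3e5, 1e-5, 2.3e-9)*1e3:.1f} ms")
```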
14. Studies on the reaction mechanism of the muon induced nuclear fission
International Nuclear Information System (INIS)
Mutius, R. von.
1985-01-01
The mass and energy distributions of the fission fragments after muon-induced nuclear fission allow the mean excitation energy of the fissioning nucleus after muon capture to be determined. By systematic comparison with the mass distribution of a corresponding reaction, an accuracy of about 1 MeV could be reached for the first time. Theoretical calculations of the excitation probability in muon capture, combined with the fission probability, allow this energy to be estimated; the experimental result therefore serves as a test criterion for assessing the theoretical calculation. The measured probabilities for the occurrence of radiationless transitions in the muonic γ cascade of 237 Np permit an indirect experimental determination of the barrier enhancement caused by the muon present during the fission process. The value found amounts to 0.75±0.1 MeV. A change of the mass distribution due to the muon could not be detected in the nuclides 235 U, 237 Np, and 242 Pu studied here. Only the mean total kinetic energy of the fission products is reduced in these three nuclides by 1 to 2 MeV in prompt μ⁻ induced fission; this result is attributed to the incomplete screening of the nuclear charge during the fission process. No mass dependence of this reduction was found. Because the muon apparently has no influence on the mass splitting, it can be regarded as a nearly ideal probe for studying the hitherto little-studied dynamics of the fission process. (orig.) [de]
15. Excitation functions for deuterium-induced reactions on 194Pt near the coulomb barrier
Czech Academy of Sciences Publication Activity Database
Kulko, A. A.; Skobelev, N. K.; Kroha, Václav; Penionzhkevich, Y. E.; Mrázek, Jaromír; Burjan, Václav; Hons, Zdeněk; Šimečková, Eva; Piskoř, Štěpán; Kugler, Andrej; Demekhina, N. A.; Sobolev, Yu. G.; Chuvilskaya, T. V.; Shirokova, K.; Kuterbekov, K.
2012-01-01
Roč. 9, 6-7 (2012), s. 502-507 ISSN 1547-4771 R&D Projects: GA MŠk LA08002 Institutional support: RVO:61389005 Keywords : nucelar reactions * excitation functions * charged particle activation Subject RIV: BG - Nuclear, Atomic and Molecular Physics, Colliders
16. Two-particle one-hole multiple-scattering contribution to 17O energies using an energy-dependent reaction matrix
International Nuclear Information System (INIS)
Bando, H.; Krenciglowa, E.M.
1976-01-01
The role of 2p1h correlations in 17 O is studied within a multiple-scattering formalism. An accurate, energy-dependent reaction matrix with orthogonalized plane-wave intermediate states is used to assess the relative importance of particle-particle and particle-hole correlations in the 17 O energies. The effect of energy dependence of the reaction matrix is closely examined. (Auth.)
17. Neutron-induced cross sections of short-lived nuclei via the surrogate reaction method
Directory of Open Access Journals (Sweden)
Morel P.
2011-10-01
Full Text Available The measurement of neutron-induced cross sections of short-lived nuclei is extremely difficult due to the radioactivity of the samples. The surrogate reaction method is an indirect way of determining cross sections for nuclear reactions that proceed through a compound nucleus. This method presents the advantage that the target material can be stable or less radioactive than the material required for a neutron-induced measurement. We have successfully used the surrogate reaction method to extract neutron-induced fission cross sections of various short-lived actinides. In this work, we investigate whether this technique can be used to determine neutron-induced capture cross sections in the rare-earth region.
18. Neutron-induced cross sections of short-lived nuclei via the surrogate reaction method
Directory of Open Access Journals (Sweden)
Tassan-Got L.
2012-02-01
Full Text Available The measurement of neutron-induced cross sections of short-lived nuclei is extremely difficult due to the radioactivity of the samples. The surrogate reaction method is an indirect way of determining cross sections for nuclear reactions that proceed through a compound nucleus. This method presents the advantage that the target material can be stable or less radioactive than the material required for a neutron-induced measurement. We have successfully used the surrogate reaction method to extract neutron-induced fission cross sections of various short-lived actinides. In this work, we investigate whether this technique can be used to determine neutron-induced capture cross sections in the rare-earth region.
19. Measurement of secondary particle production induced by particle therapy ion beams impinging on a PMMA target
Directory of Open Access Journals (Sweden)
Toppi M.
2016-01-01
Full Text Available Particle therapy is a technique that uses accelerated charged ions for cancer treatment and combines high irradiation precision with high biological effectiveness in killing tumor cells [1]. Information about the secondary particles emitted in the interaction of an ion beam with the patient during a treatment is of great interest for monitoring the dose deposition. For this purpose an experiment has been performed at the HIT (Heidelberg Ion-Beam Therapy Center) beam facility to measure fluxes and emission profiles of secondary particles produced in the interaction of therapeutic beams with a PMMA target. In this contribution some preliminary results on the emission profiles and the energy spectra of the detected secondaries are presented.
20. Nuclear effects on elastic reactions induced by neutrinos and antineutrinos
International Nuclear Information System (INIS)
Schaeffer, M.; Bonneaud, G.
1977-01-01
Two nuclear effects are studied for the reactions νn→μ⁻p and ν̄p→μ⁺n: the inhibition effect (due to the Pauli principle) and kinematical effects due to the Fermi motion of the target nucleon inside the nucleus. By comparison with shell-model calculations it is shown that the Fermi-gas model is sufficiently accurate to describe the low-Q² inhibition effects. The uncertainty on Esub(ν̄) and Q², due to Fermi motion, is studied with a set of curves which give the error on Esub(ν̄) and Q² once Psub(μ) is given. [fr]
1. Cold fusion in symmetric 90Zr induced reactions
International Nuclear Information System (INIS)
Keller, J.G.; Schmidt, K.H.; Hessberger, F.P.; Muenzenberg, G.; Reisdorf, W.; Clerc, H.G.; Sahm, C.C.
1985-08-01
Excitation functions for evaporation residues were measured for the reactions 90 Zr+ 89 Y, 90 Zr, 92 Zr, 96 Zr, and 94 Mo. Deexcitation only by γ radiation was found for the compound nuclei 179 Au, 180 Hg, 182 Hg, and 184 Pb. The cross sections for this process were found to be considerably larger than predicted by a statistical-model calculation using standard parameters for the γ-strength function. Fusion probabilities as well as fusion-barrier distributions were deduced from the measured cross sections. There are strong nuclear structure effects in subbarrier fusion. For energies far below the fusion barrier the increase of the fusion probabilities with increasing energy is found to be much steeper than predicted by WKB calculations. As a by-product of this work new α-spectroscopic information could be obtained for neutron deficient isotopes between Ir and Pb. (orig.)
2. Application of the Trojan Horse Method to study neutron induced reactions: the 17O(n,α)14C reaction
Directory of Open Access Journals (Sweden)
Gulino M.
2014-03-01
Full Text Available The reaction 17O(n,α)14C was studied using virtual neutrons coming from the quasi-free deuteron break-up in the three-body reaction 17O+d → α+14C+p. This technique, called the virtual neutron method, extends the Trojan Horse method to neutron-induced reactions, allowing the reaction cross section to be studied while avoiding the suppression effects coming from the penetrability of the centrifugal barrier. For incident neutron energies from thermal up to a few hundred keV, direct experiments have shown the population of two out of three expected excited states, at energies of 8213 keV and 8282 keV, and the influence of the sub-threshold level at 8038 keV. In the present experiment the 18O excited state at E* = 8.125 MeV, missing in the direct measurement, is observed. The angular distributions of the populated resonances have been measured for the first time. The results unambiguously indicate the ability of the method to overcome the centrifugal-barrier suppression effect and to pick out the contribution of the bare nuclear interaction.
3. Study of the liquid water luminescence induced by charged particles
International Nuclear Information System (INIS)
Rusu, Mircea; Stere, Oana; Haiduc, Maria; Caramete, Laurentiu
2004-01-01
Many observations have suggested that liquid water (with impurities) could give a luminescence output when irradiated with charged particles. We investigate the theoretical and practical possibility of detecting such luminescence. Preliminary results on this possibility are presented, and a layout of the device proposed for measuring the luminescence is given. (authors)
4. Production of noble gas isotopes by proton-induced reactions on bismuth
International Nuclear Information System (INIS)
Leya, I.; David, J.-C.; Leray, S.; Wieler, R.; Michel, R.
2008-01-01
We measured integral thin target cross sections for the proton-induced production of He-, Ne-, Ar-, Kr- and Xe-isotopes from bismuth (Bi) from the respective reaction thresholds up to 2.6 GeV. Here we present 275 cross sections for 23 nuclear reactions. The production of noble gas isotopes from Bi is of special importance for design studies of accelerator driven systems (EA/ADS) and nuclear spallation sources. For experiments with proton energies above 200 MeV the mini-stack approach was used instead of the stacked-foil technique in order to minimise the influences of secondary particles on the residual nuclide production. Comparing the cross sections for Bi to the data published recently for Pb indicates that for 4 He the cross sections for Bi below 200 MeV are up to a factor of 2-3 higher than the Pb data, which can be explained by the production of α-decaying Po-isotopes from Bi but not from Pb. Some of the cross sections for the production of 21 Ne from Bi are affected by recoil effects from neighboured Al-foils, which compromises a study of a possible lowering of the effective Coulomb-barrier. The differences in the excitation functions between Pb and Bi for Kr- and Xe-isotopes can be explained by energy-dependent higher fission cross sections for Bi compared to Pb. The experimental data are compared to results from the theoretical nuclear model codes INCL4/ABLA and TALYS. The INCL4/ABLA system describes the cross sections for the production of 4 He-, Kr- and Xe-isotopes reasonably well, i.e. mostly within a factor of a few. In contrast, the model completely fails describing 21 Ne, 22 Ne, 36 Ar and 38 Ar, which are produced via spallation and/or multifragmentation. The TALYS code is only able to accurately predict reaction thresholds. The absolute values are either significantly over- or underestimated. Consequently, the comparison of measured and modelled thin target cross sections clearly indicates that experimental data are still needed because the
5. Production of He-, Ne-, Ar-, Kr-, and Xe-isotopes by proton-induced reactions on lead
International Nuclear Information System (INIS)
Leya, I.; Michel, R.
2003-01-01
We measured integral thin target cross sections for the proton-induced production of He-, Ne-, Ar-, Kr-, and Xe-isotopes from lead from the respective reaction thresholds up to 2.6 GeV. The production of noble gas isotopes in lead by proton-induced reactions is of special importance for design studies of accelerator driven systems and energy amplifiers. In order to minimise the influences of secondary particles on the production of residual nuclides a new Mini-Stack approach was used instead of the well-known stacked-foil techniques for all experiments with proton energies above 200 MeV. With some exceptions our database for the proton-induced production of noble gas isotopes from lead is consistent and nearly complete. In contradistinction to the production of He from Al and Fe, where the cross sections obtained by thin-target irradiation experiments are up to a factor of 2 higher than the NESSI data, both datasets agree for the He production from lead. (orig.)
6. Colloidal polymer particles as catalyst carriers and phase transfer agents in multiphasic hydroformylation reactions.
Science.gov (United States)
Peral, D; Stehl, D; Bibouche, B; Yu, H; Mardoukh, J; Schomäcker, R; Klitzing, R von; Vogt, D
2018-03-01
Colloidal particles have been used to covalently bind ligands for the heterogenization of homogeneous catalysts. The replacement of the covalent bonds by electrostatic interactions between particles and the catalyst could preserve the selectivity of a truly homogeneous catalytic process. Functionalized polymer particles with trimethylammonium moieties, dispersed in water, with a hydrophobic core and a hydrophilic shell have been synthesized by emulsion polymerization and have been thoroughly characterized. The ability of the particles with different monomer compositions to act as catalyst carriers has been studied. Finally, the colloidal dispersions have been applied as phase transfer agents in the multiphasic rhodium-catalyzed hydroformylation of 1-octene. The hydrodynamic radius of the particles has been shown to be around 100 nm, and a core-shell structure could be observed by atomic force microscopy. The polymer particles were proven to act as carriers for the water-soluble hydroformylation catalyst, due to electrostatic interaction between the functionalized particles bearing ammonium groups and the sulfonated ligands of the catalyst. The particles were stable under the hydroformylation conditions and the aqueous catalyst phase could be recycled three times. Copyright © 2017 Elsevier Inc. All rights reserved.
7. Tumor necrosis factor is not required for particle-induced genotoxicity and pulmonary inflammation
Energy Technology Data Exchange (ETDEWEB)
Saber, Anne T.; Bornholdt, Jette; Dybdahl, Marianne; Sharma, Anoop K.; Vogel, Ulla; Wallin, Haakan [National Institute of Occupational Health, Copenhagen (Denmark); Loft, Steffen [Copenhagen University, Institute of Public Health, Copenhagen (Denmark)
2005-03-01
Particle-induced carcinogenicity is not well understood, but might involve inflammation. The proinflammatory cytokine tumor necrosis factor (TNF) is considered to be an important mediator in inflammation. We investigated its role in particle-induced inflammation and DNA damage in mice with and without TNF signaling. TNF-/- mice and TNF+/+ mice were exposed by inhalation to 20 mg/m3 carbon black (CB), 20 mg/m3 diesel exhaust particles (DEP), or filtered air for 90 min on each of four consecutive days. DEP, but not CB particles, induced infiltration of neutrophilic granulocytes into the lung lining fluid (assessed by the cellular fraction of the bronchoalveolar lavage fluid), and both particle types induced interleukin-6 mRNA in the lung tissue. Surprisingly, TNF-/- mice showed intact inflammatory responses. There were more DNA strand breaks in the BAL cells of DEP-exposed TNF-/- mice and CB-exposed mice compared with the air-exposed mice. Thus, the CB-induced DNA damage in BAL cells was independent of neutrophil infiltration. The data indicate that an inflammatory response was not a prerequisite for DNA damage, and that TNF was not required for the induction of inflammation by DEP and CB particles. (orig.)
8. A case of adverse drug reaction induced by dispensing error.
Science.gov (United States)
Gallelli, L; Staltari, O; Palleria, C; Di Mizio, G; De Sarro, G; Caroleo, B
2012-11-01
To report a case of acute renal failure due to an absence of communication between physician and patient. A 78-year-old man with human immunodeficiency virus (HIV) presented to our hospital and was brought to our attention in August 2011 for severe renal failure. His clinical history revealed that he had been taking highly active antiretroviral therapy with lamivudine/abacavir and fosamprenavir since 2006. In April 2011, due to an increase in plasma creatinine levels, a reduction of the lamivudine dosage to 100 mg/day and the prescription of abacavir 300 mg/day became necessary. Unfortunately, the patient took both lamivudine and abacavir, and the association of the two medications (lamivudine/abacavir) led to asthenia and acute renal failure within a few days. This case emphasizes how carefully physicians must pay attention when prescribing drugs, particularly for elderly patients. Improved communication between physicians and patients can prevent an increase in adverse drug reactions related to drug dispensing, with a consequent reduction of costs in the healthcare system. Copyright © 2012 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
9. Kinetics of nitrosamine and amine reactions with NO3 radical and ozone related to aqueous particle and cloud droplet chemistry
Science.gov (United States)
Weller, Christian; Herrmann, Hartmut
2015-01-01
Aqueous phase reactivity experiments with the amines dimethylamine (DMA), diethanolamine (DEA) and pyrrolidine (PYL) and their corresponding nitrosamines nitrosodimethylamine (NDMA), nitrosodiethanolamine (NDEA) and nitrosopyrrolidine (NPYL) have been performed. NO3 radical reaction rate coefficients for DMA, DEA and PYL were measured for the first time and are 3.7 × 105, 8.2 × 105 and 8.7 × 105 M-1 s-1, respectively. Rate coefficients for NO3 + NDMA, NDEA and NPYL are 1.2 × 108, 2.3 × 108 and 2.4 × 108 M-1 s-1. Compared to OH radical rate coefficients for reactions with amines, the NO3 radical will most likely not be an important oxidant but it is a potential nighttime oxidant for nitrosamines in cloud droplets or deliquescent particles. Ozone is unreactive towards amines and nitrosamines and upper limits of rate coefficients suggest that aqueous ozone reactions are not important in atmospheric waters.
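To put the second-order rate coefficients quoted in this abstract into perspective, the sketch below converts them into pseudo-first-order aqueous lifetimes, τ = 1/(k·[NO3]). The assumed nighttime aqueous NO3 radical concentration is an illustrative placeholder introduced here, not a value from the paper.

```python
# Hedged sketch: pseudo-first-order aqueous lifetimes from the second-order rate
# coefficients quoted in the abstract. The assumed NO3(aq) concentration is an
# illustrative placeholder, not a value from the paper.

k_no3 = {            # M^-1 s^-1, from the abstract
    "DMA": 3.7e5, "DEA": 8.2e5, "PYL": 8.7e5,
    "NDMA": 1.2e8, "NDEA": 2.3e8, "NPYL": 2.4e8,
}

NO3_AQ = 1.0e-13     # M, assumed nighttime aqueous NO3 radical concentration

for species, k in k_no3.items():
    lifetime_s = 1.0 / (k * NO3_AQ)          # tau = 1 / (k [NO3])
    print(f"{species}: tau ~ {lifetime_s / 3600.0:.1f} h")
```

With these assumptions the nitrosamines come out with lifetimes of the order of a day, while the parent amines would be essentially unreactive towards NO3, consistent with the qualitative conclusion of the abstract.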
10. Production of complex particles in low energy spallation and in fragmentation reactions by in-medium random clusterization
International Nuclear Information System (INIS)
Lacroix, D.; Durand, D.
2005-09-01
Rules for in-medium complex particle production in nuclear reactions are proposed. These rules have been implemented in two models to simulate nucleon-nucleus and nucleus-nucleus reactions around the Fermi energy. Our work emphasizes the effect of randomness in cluster formation, the importance of the nucleonic Fermi motion as well as the role of conservation laws. The concepts of total available phase-space and explored phase-space under constraint imposed by the reaction are clarified. The compatibility of experimental observations with a random clusterization is illustrated in a schematic scenario of a proton-nucleus collision. The role of randomness under constraint is also illustrated in the nucleus-nucleus case. (authors)
11. Reactions of SO 2 on hydrated cement particle system for atmospheric pollution reduction: A DRIFTS and XANES study
Energy Technology Data Exchange (ETDEWEB)
Ramakrishnan, Girish; Wu, Qiyuan; Moon, Juhyuk; Orlov, Alexander
2017-07-01
An investigation of the adsorptive property of hydrated cement particle system for sulfur dioxide (SO2) removal was conducted. In situ and ex situ experiments using Diffuse Reflectance Infrared Fourier Transform Spectroscopy (DRIFTS) and X-ray Absorption Near Edge Spectroscopy (XANES) characterization techniques were employed to identify surface species formed during the exposure to SO2. Oxidation of SO2 to sulfate and sulfite species observed during these experiments indicated dominant reaction pathways for SO2 reaction with concrete constituents, such as calcium hydroxide, which were also moderated by adsorption on porous surfaces of crushed aggregates. The impact of variable composition of concrete on its adsorption capacity and reaction mechanisms was also proposed in this work.
12. Analysis of propeller-induced ground vortices by particle image velocimetry
NARCIS (Netherlands)
Yang, Y.; Sciacchitano, A.; Veldhuis, L.L.M.; Eitelberg, G.
2017-01-01
Abstract: The interaction between a propeller and its self-induced vortices originating on the ground is investigated in a scaled experiment. The velocity distribution in the flow field in two different planes containing the self-induced vortices is measured by particle image velocimetry (PIV).
13. Time-resolved resonance fluorescence spectroscopy for study of chemical reactions in laser-induced plasmas.
Science.gov (United States)
Liu, Lei; Deng, Leimin; Fan, Lisha; Huang, Xi; Lu, Yao; Shen, Xiaokang; Jiang, Lan; Silvain, Jean-François; Lu, Yongfeng
2017-10-30
Identification of chemical intermediates and study of chemical reaction pathways and mechanisms in laser-induced plasmas are important for laser-ablation applications. Laser-induced breakdown spectroscopy (LIBS), as a promising spectroscopic technique, is efficient for elemental analysis but can only provide limited information about chemical products in laser-induced plasmas. In this work, time-resolved resonance fluorescence spectroscopy was studied as a promising tool for the study of chemical reactions in laser-induced plasmas. Resonance fluorescence excitation of diatomic aluminum monoxide (AlO) and triatomic dialuminum monoxide (Al 2 O) was used to identify these chemical intermediates. Time-resolved fluorescence spectra of AlO and Al 2 O were used to observe their temporal evolution in laser-induced Al plasmas and to study their formation in the Al-O 2 chemistry in air.
14. Effect of particle size on laser-induced breakdown spectroscopy analysis of alumina suspension in liquids
International Nuclear Information System (INIS)
Diaz Rosado, José Carlos; L'hermite, Daniel; Levi, Yves
2012-01-01
Analysis by Laser Induced Breakdown Spectroscopy (LIBS) has been proposed for the detection and quantification of different elements in water, even when the analyte is composed of particles in suspension. We have studied the effect of particle size on the LIBS signal during liquid analysis. In our study we used different particle sizes (from 2 μm to 90 μm) of Al 2 O 3 in suspension in water. The results were compared to the signal obtained in the case of dissolved aluminum. In the case of particles, a linear correlation between the LIBS signal and concentration was found, but a significant decrease in the slope of the calibration curve was observed when the particle size increased. Several hypotheses have been tested, and only a partial ablation of the particles might explain this decrease in signal intensity. This effect probably does not occur at smaller particle sizes. We estimated the thickness ablated from the top of a particle to be 860 nm/pulse. A statistical analysis over all the data obtained allowed us to estimate the ablated water column depth as 100 μm. - Highlights: ► We have identified a decrease of the calibration curve slope when particle size increases. ► Partial particle ablation has been identified as the origin of this effect. ► The ablation rate of Al 2 O 3 particles in suspension in water has been estimated. ► We can determine the depth of the interaction volume in the liquid.
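A small worked example helps illustrate why partial ablation lowers the calibration slope for larger particles. The sketch below assumes that a single pulse removes a spherical cap of fixed depth (the ~860 nm/pulse quoted above) from the top of a spherical particle; the spherical-cap geometry is an assumption made here for illustration only.

```python
# Hedged sketch: fraction of a spherical particle removed if a single laser pulse
# ablates a spherical cap of fixed depth from the top of the particle. The spherical-cap
# geometry is an assumption made here; the abstract only reports the estimated ablated
# thickness (~860 nm per pulse).
import math

ABLATED_DEPTH_UM = 0.86                              # micrometres per pulse (from the abstract)

def ablated_fraction(diameter_um, depth_um=ABLATED_DEPTH_UM):
    r = diameter_um / 2.0
    h = min(depth_um, 2.0 * r)                        # cap height cannot exceed the diameter
    cap = math.pi * h**2 * (3.0 * r - h) / 3.0        # spherical-cap volume
    sphere = 4.0 / 3.0 * math.pi * r**3
    return cap / sphere

for d in (2, 10, 30, 90):                             # particle diameters in the studied range (um)
    print(f"{d:>2} um particle: ~{100.0 * ablated_fraction(d):.2f} % of its volume ablated")
```

Under these assumptions the ablated volume fraction drops from tens of percent for 2 μm particles to well below 0.1 % for 90 μm particles, qualitatively consistent with the reported decrease of the calibration slope with particle size.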
15. Particle roughness in magnetorheology: effect on the strength of the field-induced structures
International Nuclear Information System (INIS)
Vereda, F; Segovia-Gutiérrez, J P; De Vicente, J; Hidalgo-Alvarez, R
2015-01-01
We report a study on the effect of particle roughness on the strength of the field-induced structures of magnetorheological (MR) fluids in the quasi-static regime. We prepared one set of MR fluids with carbonyl iron particles and another set with magnetite particles, and in both sets we had particles with different degrees of surface roughness. Small amplitude oscillatory shear (SAOS) magnetosweeps and steady shear (SS) tests were carried out on the suspensions to measure their elastic modulus (G′) and static yield stress (τ static ). Results for both the iron and the magnetite sets of suspensions were consistent: for the MR fluids prepared with rougher particles, G′ increased at smaller fields and τ static was ca. 20% larger than for the suspensions prepared with relatively smooth particles. In addition to the experimental study, we carried out finite element method calculations to assess the effect of particle roughness on the magnetic interaction between particles. These calculations showed that roughness can facilitate the magnetization of the particles, thus increasing the magnetic energy of the system for a given field, but that this effect depends on the concrete morphology of the surface. For our real systems, no major differences were observed between the magnetization cycles of the MR fluids prepared with particles with different degree of roughness, which implied that the effect of roughness on the measured G′ and τ static was due mainly to friction between the solid surfaces of adjacent particles. (paper)
16. Nonlinear mechanisms for drift wave saturation and induced particle transport
International Nuclear Information System (INIS)
Dimits, A.M.; Lee, W.W.
1989-12-01
A detailed theoretical study of the nonlinear dynamics of gyrokinetic particle simulations of electrostatic collisionless and weakly collisional drift waves is presented. In previous studies it was shown that, in the nonlinearly saturated phase of the evolution, the saturation levels and especially the particle fluxes have an unexpected dependence on collisionality. In this paper, the explanations for these collisionality dependences are found to be as follows: The saturation level is determined by a balance between the electron and ion fluxes. The ion flux is small for levels of the potential below an E x B-trapping threshold and increases sharply once this threshold is crossed. Due to the presence of resonant electrons, the electron flux has a much smoother dependence on the potential. In the 2-1/2-dimensional (''pseudo-3D'') geometry, the electrons are accelerated away from the resonance as they diffuse spatially, resulting in an inhibition of their diffusion. Collisions and three-dimensional effects can repopulate the resonance thereby increasing the value of the particle flux. 30 refs., 32 figs., 2 tabs
17. Different particle determinants induce apoptosis and cytokine release in primary alveolar macrophage cultures
Directory of Open Access Journals (Sweden)
Schwarze Per E
2006-06-01
Full Text Available Abstract Background Particles are known to induce cytokine release (MIP-2, TNF-α), a reduction in cell viability and an increased apoptosis in alveolar macrophages. To examine whether these responses are triggered by the same particle determinants, alveolar macrophages were exposed in vitro to mineral particles of different physical-chemical properties. Results The crystalline particles of the different stone types mylonite, gabbro, basalt, feldspar, quartz, hornfels and fine grain syenite porphyr (porphyr), with a relatively equal size distribution (≤ 10 μm) but different chemical/mineral composition, all induced low and relatively similar levels of apoptosis. In contrast, mylonite and gabbro induced a marked MIP-2 response compared to the other particles. For particles of smaller size, quartz (≤ 2 μm) seemed to induce a somewhat stronger apoptotic response, in relation to surface area, than both smaller quartz (≤ 0.5 μm) and larger quartz (≤ 10 μm), and was more potent than hornfels and porphyr (≤ 2 μm). The reduction in cell viability induced by quartz of the different sizes was roughly similar when adjusted to surface area. With respect to cytokines, the release was more marked after exposure to quartz ≤ 0.5 μm than to quartz ≤ 2 μm and ≤ 10 μm. Furthermore, hornfels (≤ 2 μm) was more potent than the corresponding hornfels (≤ 10 μm) and quartz (≤ 2 μm) in inducing cytokine responses. Pre-treatment of hornfels and quartz particles ≤ 2 μm with aluminium lactate, to diminish the surface reactivity, significantly reduced the MIP-2 response to hornfels. In contrast, the apoptotic responses to the particles were not affected. Conclusion These results indicate that different determinants of mineral/stone particles are critical for inducing cytokine responses, reduction in cell viability and apoptosis in alveolar macrophages. The data suggest that the particle surface reactivity was critical for cytokine responses.
18. Phenomenological analysis of the p-even and p-odd angular asymmetry of alpha particles in the 10B(n, α)7Li reaction with thermal polarized neutrons
International Nuclear Information System (INIS)
Rzhevskij, E.S.
1983-01-01
A formalism for the multilevel phenomenological analysis of the angular asymmetry of alpha particles emitted from compound nuclei in reactions induced by thermal polarized neutrons is proposed. The formalism is based on the R-matrix theory of nuclear reactions. The connection between the description of angular correlations and the structure of light nuclei is shown. Problems related to the selection of compound-resonance parameters, the determination of alpha-cluster states, and the estimation of the role of particular compound resonances in the neutron and alpha-particle channels are discussed. An explanation is given for the p-even left/right angular asymmetry of alpha particles observed in the experiment. The values of the p-odd angular correlations, the measurements of which are continuing, are estimated.
19. Particle accelerator
International Nuclear Information System (INIS)
Ress, R.I.
1976-01-01
Charged particles are entrained in a predetermined direction, independent of their polarity, in a circular orbit by a magnetic field rotating at high speed about an axis in a closed cylindrical or toroidal vessel. The field may be generated by a cylindrical laser structure, whose beam is polygonally reflected from the walls of an excited cavity centered on the axis, or by high-frequency energization of a set of electromagnets perpendicular to the axis. In the latter case, a separate magnetostatic axial field limits the orbital radius of the particles. These rotating and stationary magnetic fields may be generated centrally or by individual magnets peripherally spaced along its circular orbit. Chemical or nuclear reactions can be induced by collisions between the orbiting particles and an injected reactant, or by diverting high-speed particles from one doughnut into the path of counterrotating particles in an adjoining doughnut
20. Necrosis of HepG2 cancer cells induced by the vibration of magnetic particles
Energy Technology Data Exchange (ETDEWEB)
Wang, Biran [Laboratoire de Physique de la Matière Condensée (LPMC), CNRS UMR 7336, Université de Nice Sophia Antipolis, Parc Valrose, 06108 Nice (France); Institut de Chimie de Nice, UMR 7272, Université de Nice Sophia Antipolis, CNRS, 28 Avenue de Valrose, F-06100 Nice (France); Bienvenu, Céline [Institut de Chimie de Nice, UMR 7272, Université de Nice Sophia Antipolis, CNRS, 28 Avenue de Valrose, F-06100 Nice (France); Mendez-Garza, Juan; Lançon, Pascal; Madeira, Alexandra [Laboratoire de Physique de la Matière Condensée (LPMC), CNRS UMR 7336, Université de Nice Sophia Antipolis, Parc Valrose, 06108 Nice (France); Vierling, Pierre [Institut de Chimie de Nice, UMR 7272, Université de Nice Sophia Antipolis, CNRS, 28 Avenue de Valrose, F-06100 Nice (France); Di Giorgio, Christophe, E-mail: [email protected] [Institut de Chimie de Nice, UMR 7272, Université de Nice Sophia Antipolis, CNRS, 28 Avenue de Valrose, F-06100 Nice (France); Bossis, Georges, E-mail: [email protected] [Laboratoire de Physique de la Matière Condensée (LPMC), CNRS UMR 7336, Université de Nice Sophia Antipolis, Parc Valrose, 06108 Nice (France)
2013-10-15
Magnetolysis experiments, i.e., destruction of cells induced by magnetic particles (MPs) subjected to an applied magnetic field, were conducted on HepG2 cancer cells. We demonstrate here the usefulness of combining anisotropic MPs with an alternating magnetic field in magnetolysis. The application of a low-frequency alternating magnetic field (a few hertz) in the presence of anisotropic, submicronic particles allowed the destruction of cancer cells in vitro. We also show that a constant magnetic field is far less efficient than an oscillating one. Moreover, we demonstrate that, at equal particle volume, it is much more efficient to use spindle-shaped particles rather than spherical ones. To gain deeper insight into the mechanism of the magnetolysis experiments, we performed an AFM study, which strongly supports the idea that the magnetic field induces the formation of particle clusters that become large enough to damage cell membranes. - Highlights: • A magnetic force was applied to cancer cells through magnetic particles. • The penetration depth was predicted for both spherical and ellipsoidal particles. • An alternating force was shown to damage the cells, in contrast to a static force. • The effect of indentation by magnetic particles was compared to that of AFM tips. • The damage was attributed to the formation of clusters of particles.
1. The effects of particle size distribution and induced unpinning during grain growth
International Nuclear Information System (INIS)
Thompson, G.S.; Rickman, J.M.; Harmer, M.P.; Holm, E.A.
1996-01-01
The effect of a second-phase particle size distribution on grain boundary pinning was studied using a Monte Carlo simulation technique. Simulations were run using a constant number density of both whisker and rhombohedral particles, and the effect of size distribution was studied by varying the standard deviation of the distribution around a constant mean particle size. The results of present simulations indicate that, in accordance with the stereological assumption of the topological pinning model, changes in distribution width had no effect on the pinned grain size. The effect of induced unpinning of particles on microstructure was also studied. In contrast to predictions of the topological pinning model, a power law dependence of pinned grain size on particle size was observed at T=0.0. Based on this, a systematic deviation to the stereological predictions of the topological pinning model is observed. The results of simulations at higher temperatures indicate an increasing power law dependence of pinned grain size on particle size, with the slopes of the power law dependencies fitting an Arrhenius relation. The effect of induced unpinning of particles was also studied in order to obtain a correlation between particle/boundary concentration and equilibrium grain size. The results of simulations containing a constant number density of monosized rhombohedral particles suggest a strong power law correlation between the two parameters. copyright 1996 Materials Research Society
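For readers unfamiliar with the kind of Monte Carlo grain-growth simulation referred to in this abstract, the sketch below shows a minimal 2-D Potts-model implementation with inert second-phase (pinning) sites. It is an illustration of the general technique only, not the authors' code; the lattice size, number of spin states, particle fraction and temperature are assumed values.

```python
# Hedged sketch: a minimal 2-D Potts-model grain-growth Monte Carlo with inert
# second-phase particles acting as pinning sites. Illustrative only; lattice size,
# number of spin states, particle fraction and temperature are assumptions.
import math
import random

L, Q, PARTICLE = 64, 32, -1          # lattice size, grain orientations, marker for particles
F_PART = 0.02                        # assumed particle area fraction
T = 0.0                              # lattice temperature (T = 0 accepts only dE <= 0)

lattice = [[random.randrange(Q) for _ in range(L)] for _ in range(L)]
for _ in range(int(F_PART * L * L)):                 # scatter immobile particles
    lattice[random.randrange(L)][random.randrange(L)] = PARTICLE

def neighbours(i, j):
    return [((i + 1) % L, j), ((i - 1) % L, j), (i, (j + 1) % L), (i, (j - 1) % L)]

def local_energy(i, j, s):
    # number of unlike neighbours = local boundary energy of site (i, j) with spin s
    return sum(1 for (a, b) in neighbours(i, j) if lattice[a][b] != s)

def mc_step():
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        if lattice[i][j] == PARTICLE:
            continue                                  # particles never re-orient
        cand = [lattice[a][b] for (a, b) in neighbours(i, j)
                if lattice[a][b] not in (PARTICLE, lattice[i][j])]
        if not cand:
            continue
        new = random.choice(cand)
        dE = local_energy(i, j, new) - local_energy(i, j, lattice[i][j])
        if dE <= 0 or (T > 0 and random.random() < math.exp(-dE / T)):
            lattice[i][j] = new

for step in range(200):                               # 200 Monte Carlo steps
    mc_step()

grains = len({s for row in lattice for s in row if s != PARTICLE})
print("distinct grain orientations after 200 MCS:", grains)
```

Running the same loop with F_PART = 0 shows continued coarsening, whereas with particles present the number of surviving orientations stagnates, which is the pinning effect the abstract analyses as a function of particle size distribution.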
2. RhoA/ROCK Signaling Pathway Mediates Shuanghuanglian Injection-Induced Pseudo-allergic Reactions.
Science.gov (United States)
Han, Jiayin; Zhao, Yong; Zhang, Yushi; Li, Chunying; Yi, Yan; Pan, Chen; Tian, Jingzhuo; Yang, Yifei; Cui, Hongyu; Wang, Lianmei; Liu, Suyan; Liu, Jing; Deng, Nuo; Liang, Aihua
2018-01-01
Background: Shuanghuanglian injection (SHLI) is a famous Chinese medicine used as an intravenous preparation for the treatment of acute respiratory tract infections. In the recent years, the immediate hypersensitivity reactions induced by SHLI have attracted broad attention. However, the mechanism involved in these reactions has not yet been elucidated. The present study aims to explore the characteristics of the immediate hypersensitivity reactions induced by SHLI and deciphers the role of the RhoA/ROCK signaling pathway in these reactions. Methods: SHLI-immunized mice or naive mice were intravenously injected (i.v.) with SHLI (600 mg/kg) once, and vascular leakage in the ears was evaluated. Passive cutaneous anaphylaxis test was conducted using sera collected from SHLI-immunized mice. Naive mice were administered (i.v.) with a single dose of 150, 300, or 600 mg/kg of SHLI, and vascular leakage, histamine release, and histopathological alterations in the ears, lungs, and intestines were tested. In vitro , human umbilical vein endothelial cell (HUVEC) monolayer was incubated with SHLI (0.05, 0.1, or 0.15 mg/mL), and the changes in endothelial permeability and cytoskeleton were observed. Western blot analysis was performed and ROCK inhibitor was employed to investigate the contribution of the RhoA/ROCK signaling pathway in SHLI-induced hypersensitivity reactions, both in HUVECs and in mice. Results: Our results indicate that SHLI was able to cause immediate dose-dependent vascular leakage, edema, and exudates in the ears, lungs, and intestines, and histamine release in mice. These were pseudo-allergic reactions, as SHLI-specific IgE was not elicited during sensitization. In addition, SHLI induced reorganization of actin cytoskeleton and disrupted the endothelial barrier. The administration of SHLI directly activated the RhoA/ROCK signaling pathway both in HUVECs and in the ears, lungs, and intestines of mice. Fasudil hydrochloride, a ROCK inhibitor, ameliorated the
3. Reactions of charged and neutral recoil particles following nuclear transformations. Progress report No. 13
International Nuclear Information System (INIS)
Ache, H.J.
1979-09-01
Research is reported on: caging and solvent effects in hot 38 Cl substitution reactions in chlorinated hydrocarbons (dichlorobenzene), excitation labelling of organic compounds using 80 Br, reactions of energetic tritium with graphite and SiC surfaces, and micellar systems and microemulsions studied by positron annihilation
4. Multiscale simulations of anisotropic particles combining molecular dynamics and Green's function reaction dynamics
NARCIS (Netherlands)
Vijaykumar, A.; Ouldridge, T.E.; ten Wolde, P.R.; Bolhuis, P.G.
2017-01-01
The modeling of complex reaction-diffusion processes in, for instance, cellular biochemical networks or self-assembling soft matter can be tremendously sped up by employing a multiscale algorithm which combines the mesoscopic Green's Function Reaction Dynamics (GFRD) method with explicit stochastic
5. Heterogeneity induces spatiotemporal oscillations in reaction-diffusion systems
Science.gov (United States)
Krause, Andrew L.; Klika, Václav; Woolley, Thomas E.; Gaffney, Eamonn A.
2018-05-01
We report on an instability arising in activator-inhibitor reaction-diffusion (RD) systems with a simple spatial heterogeneity. This instability gives rise to periodic creation, translation, and destruction of spike solutions that are commonly formed due to Turing instabilities. While this behavior is oscillatory in nature, it occurs purely within the Turing space such that no region of the domain would give rise to a Hopf bifurcation for the homogeneous equilibrium. We use the shadow limit of the Gierer-Meinhardt system to show that the speed of spike movement can be predicted from well-known asymptotic theory, but that this theory is unable to explain the emergence of these spatiotemporal oscillations. Instead, we numerically explore this system and show that the oscillatory behavior is caused by the destabilization of a steady spike pattern due to the creation of a new spike arising from endogeneous activator production. We demonstrate that on the edge of this instability, the period of the oscillations goes to infinity, although it does not fit the profile of any well-known bifurcation of a limit cycle. We show that nearby stationary states are either Turing unstable or undergo saddle-node bifurcations near the onset of the oscillatory instability, suggesting that the periodic motion does not emerge from a local equilibrium. We demonstrate the robustness of this spatiotemporal oscillation by exploring small localized heterogeneity and showing that this behavior also occurs in the Schnakenberg RD model. Our results suggest that this phenomenon is ubiquitous in spatially heterogeneous RD systems, but that current tools, such as stability of spike solutions and shadow-limit asymptotics, do not elucidate understanding. This opens several avenues for further mathematical analysis and highlights difficulties in explaining how robust patterning emerges from Turing's mechanism in the presence of even small spatial heterogeneity.
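A minimal numerical illustration of the kind of spatially heterogeneous reaction-diffusion computation discussed above is sketched here for the 1-D Schnakenberg system with a step heterogeneity in the source parameter a(x). The parameter values and the form of the heterogeneity are assumptions chosen to sit in a Turing-unstable regime, not the specific cases analysed in the paper.

```python
# Hedged sketch: explicit finite-difference integration of the 1-D Schnakenberg
# reaction-diffusion system with a step heterogeneity in the source parameter a(x).
# Parameter values and the heterogeneity are illustrative assumptions only.
import numpy as np

N, Lx = 100, 1.0
dx = Lx / N
x = np.linspace(0.0, Lx, N)
du, dv, gamma, b = 1.0, 10.0, 1000.0, 0.9
a = np.where(x < 0.5, 0.07, 0.11)                 # simple step heterogeneity in a(x)

rng = np.random.default_rng(1)
u = (a + b) * (1.0 + 0.02 * rng.standard_normal(N))          # perturbed homogeneous state
v = b / (a + b) ** 2 * (1.0 + 0.02 * rng.standard_normal(N))

dt = 0.4 * dx**2 / dv                              # explicit stability limit (fast diffuser)
steps = int(2.0 / dt)                              # integrate to t = 2

def lap(f):                                        # zero-flux (Neumann) 1-D Laplacian
    g = np.empty_like(f)
    g[1:-1] = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dx**2
    g[0] = 2.0 * (f[1] - f[0]) / dx**2
    g[-1] = 2.0 * (f[-2] - f[-1]) / dx**2
    return g

for _ in range(steps):
    fu = gamma * (a - u + u**2 * v)                # activator kinetics
    fv = gamma * (b - u**2 * v)                    # substrate kinetics
    u, v = u + dt * (du * lap(u) + fu), v + dt * (dv * lap(v) + fv)

peaks = int(np.sum((u[1:-1] > u[:-2]) & (u[1:-1] > u[2:]) & (u[1:-1] > u.mean())))
print(f"{peaks} activator peaks at t = 2; pattern differs across the a(x) step")
```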
6. Multiphoton dissociation and thermal unimolecular reactions induced by infrared lasers
International Nuclear Information System (INIS)
Dai, H.L.
1981-04-01
Multiphoton dissociation (MPD) of ethyl chloride was studied using a tunable 3.3 μm laser to excite CH stretches. The absorbed energy increases almost linearly with fluence, while for 10 μm excitation there is substantial saturation. Much higher dissociation yields were observed for 3.3 μm excitation than for 10 μm excitation, reflecting bottlenecking in the discrete region of 10 μm excitation. The resonant nature of the excitation allows the rate equations description for transitions in the quasicontinuum and continuum to be extended to the discrete levels. Absorption cross sections are estimated from ordinary ir spectra. A set of cross sections which is constant or slowly decreasing with increasing vibrational excitation gives good fits to both absorption and dissociation yield data. The rate equations model was also used to quantitatively calculate the pressure dependence of the MPD yield of SF 6 caused by vibrational self-quenching. Between 1000-3000 cm -1 of energy is removed from SF 6 excited to approx. > 60 kcal/mole by collision with a cold SF 6 molecule at gas kinetic rate. Calculation showed the fluence dependence of dissociation varies strongly with the gas pressure. Infrared multiphoton excitation was applied to study thermal unimolecular reactions. With SiF 4 as absorbing gas for the CO 2 laser pulse, transient high temperature pulses were generated in a gas mixture. IR fluorescence from the medium reflected the decay of the temperature. The activation energy and the preexponential factor of the reactant dissociation were obtained from a phenomenological model calculation. Results are presented in detail
7. Investigation on Mechanical Properties and Reaction Characteristics of Al-PTFE Composites with Different Al Particle Size
Directory of Open Access Journals (Sweden)
Jia-xiang Wu
2018-01-01
Full Text Available Al-PTFE (aluminum-polytetrafluoroethylene) is one of the most promising reactive materials (RMs). In this work, six types of Al-PTFE composites with different Al particle sizes (i.e., 50 nm, 1∼2 μm, 6∼7 μm, 12∼14 μm, 22∼24 μm, and 32∼34 μm) were prepared, and quasistatic compression and drop weight tests were conducted to characterize the mechanical properties and reaction characteristics of the composites. The reaction phenomena and stress-strain curves were recorded with a high-speed camera and a universal testing machine. The microstructure of selected specimens was examined with a scanning electron microscope (SEM) to correlate the mesoscale structural characteristics with the macroscopic properties. The results indicated that, under quasistatic compression, the strength of the composites decreased with increasing Al particle size (the yield strength falling from 22.7 MPa to 13.6 MPa and the hardening modulus declining from 33.3 MPa to 25 MPa). The toughness first rose and subsequently decreased, peaking at 116.42 MJ/m3 for 6∼7 μm particles. A reaction occurred only in composites with an Al particle size of less than 10 μm. In the drop weight tests, all six types of specimens reacted. As the Al particle size rose, the ignition energy of the composites increased and the composites became less sensitive to reaction. In the lower strain rate range (10−2·s−1∼102·s−1), Al-PTFE specimens show different mechanical properties and reaction characteristics at different strain rates. The formation of circumferential open cracks is deemed a prerequisite for Al-PTFE specimens to undergo a reaction.
8. Property investigations of proton-proton reaction in dependence of the transverse momentum of a single particle for a beam momentum of 24 GeV/c
International Nuclear Information System (INIS)
Geist, W.M.
1976-01-01
This study is based on data produced in an experiment for the investigation of proton-proton reactions at a beam momentum of 24 GeV/c. In particular, the dependence of final state properties on the transverse momentum of a chosen secondary particle (trigger particle) is considered. The study has four parts: First, experimental procedures of selection, cleaning and correction of the data are developed and applied for exclusive and inclusive reactions. Then the description of a model with minimum correlation between two particles is given. In the third section, the mean charged multiplicities of inclusive reactions are measured and interpreted as a function of the transverse momentum of the trigger particle. A complete event structure for quasi-inclusive reactions is given in the last section. Much emphasis is placed on the investigation of events comprising the production of a particle with high transverse momentum (more than 1 GeV/c). (orig./WL) [de
9. Charged-particle thermonuclear reaction rates: I. Monte Carlo method and statistical distributions
International Nuclear Information System (INIS)
Longland, R.; Iliadis, C.; Champagne, A.E.; Newton, J.R.; Ugalde, C.; Coc, A.; Fitzgerald, R.
2010-01-01
A method based on Monte Carlo techniques is presented for evaluating thermonuclear reaction rates. We begin by reviewing commonly applied procedures and point out that reaction rates that have been reported up to now in the literature have no rigorous statistical meaning. Subsequently, we associate each nuclear physics quantity entering in the calculation of reaction rates with a specific probability density function, including Gaussian, lognormal and chi-squared distributions. Based on these probability density functions the total reaction rate is randomly sampled many times until the required statistical precision is achieved. This procedure results in a median (Monte Carlo) rate which agrees under certain conditions with the commonly reported recommended 'classical' rate. In addition, we present at each temperature a low rate and a high rate, corresponding to the 0.16 and 0.84 quantiles of the cumulative reaction rate distribution. These quantities are in general different from the statistically meaningless 'minimum' (or 'lower limit') and 'maximum' (or 'upper limit') reaction rates which are commonly reported. Furthermore, we approximate the output reaction rate probability density function by a lognormal distribution and present, at each temperature, the lognormal parameters μ and σ. The values of these quantities will be crucial for future Monte Carlo nucleosynthesis studies. Our new reaction rates, appropriate for bare nuclei in the laboratory, are tabulated in the second paper of this issue (Paper II). The nuclear physics input used to derive our reaction rates is presented in the third paper of this issue (Paper III). In the fourth paper of this issue (Paper IV) we compare our new reaction rates to previous results.
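The sampling procedure described in this abstract can be illustrated with a short sketch: each nuclear input is drawn from its probability density function, the total rate is recomputed for every draw, and the 0.16/0.50/0.84 quantiles of the resulting rate distribution are reported together with the lognormal parameters μ and σ. The two-resonance toy rate and all numerical values below are assumptions for illustration, not rates from the paper.

```python
# Hedged sketch of the Monte Carlo rate-evaluation idea: sample the inputs (here,
# resonance strengths) from lognormal distributions, recompute the rate for each sample,
# and report the 0.16 / 0.50 / 0.84 quantiles plus a lognormal approximation.
# The toy narrow-resonance rate and all parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def narrow_resonance_rate(T9, energies_MeV, strengths_MeV):
    """Toy narrow-resonance rate ~ T9^-3/2 * sum(wg_i * exp(-11.605 * E_i / T9))."""
    return T9**-1.5 * np.sum(strengths_MeV * np.exp(-11.605 * energies_MeV / T9))

E_r = np.array([0.15, 0.40])                      # assumed resonance energies (MeV)
wg_median = np.array([1.0e-6, 5.0e-5])            # assumed median resonance strengths (MeV)
wg_factor_unc = 1.3                               # assumed lognormal factor uncertainty

T9 = 0.1
samples = np.empty(10000)
for k in range(samples.size):
    wg = wg_median * np.exp(np.log(wg_factor_unc) * rng.standard_normal(E_r.size))
    samples[k] = narrow_resonance_rate(T9, E_r, wg)

low, med, high = np.quantile(samples, [0.16, 0.50, 0.84])
mu, sigma = np.log(med), 0.5 * np.log(high / low)  # lognormal approximation parameters
print(f"rate at T9={T9}: low={low:.3e}, median={med:.3e}, high={high:.3e}")
print(f"lognormal parameters: mu={mu:.3f}, sigma={sigma:.3f}")
```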
10. Solar Particle Induced Upsets in the TDRS-1 Attitude Control System RAM During the October 1989 Solar Particle Events
Science.gov (United States)
Croley, D. R.; Garrett, H. B.; Murphy, G. B.; Garrard,T. L.
1995-01-01
The three large solar particle events, beginning on October 19, 1989 and lasting approximately six days, were characterized by high fluences of solar protons and heavy ions at 1 AU. During these events, an abnormally large number of upsets (243) were observed in the random access memory of the attitude control system (ACS) control processing electronics (CPE) on-board the geosynchronous TDRS-1 (Telemetry and Data Relay Satellite). The RAM unit affected was composed of eight Fairchild 93L422 memory chips. The Galileo spacecraft, launched on October 18, 1989 (one day prior to the solar particle events), observed the fluxes of heavy ions experienced by TDRS-1. Two solid-state detector telescopes on-board Galileo, designed to measure heavy ion species and energy, were turned on during time periods within each of the three separate events. The heavy ion data have been modeled and the time history of the events reconstructed to estimate heavy ion fluences. These fluences were converted to effective LET spectra after transport through the estimated shielding distribution around the TDRS-1 ACS system. The number of single event upsets (SEUs) expected was calculated by integrating the measured cross section for the Fairchild 93L422 memory chip with the average effective LET spectrum. The expected number of heavy-ion-induced SEUs calculated was 176. GOES-7 proton data, observed during the solar particle events, were used to estimate the number of proton-induced SEUs by integrating the proton fluence spectrum incident on the memory chips with the two-parameter Bendel cross section for proton SEUs. The proton fluence spectrum at the device level was obtained by transporting the protons through the estimated shielding distribution. The number of calculated proton-induced SEUs was 72, yielding a total of 248 predicted SEUs, very close to the 243 observed SEUs. These calculations uniquely demonstrate the roles that solar heavy ions and protons played in the production of SEUs.
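The heavy-ion part of the calculation described above is a folding integral: the expected number of upsets is the integral over LET of the differential LET fluence times the per-bit upset cross section, summed over all bits. The sketch below illustrates that folding; the fluence spectrum and the Weibull-shaped cross-section parameters are placeholders introduced here, not the TDRS-1/Galileo mission data or the actual 93L422 response.

```python
# Hedged sketch of the heavy-ion SEU folding: expected upsets = N_bits * integral over LET
# of (differential LET fluence) * (per-bit upset cross section). The power-law fluence and
# the Weibull-shaped cross-section parameters below are illustrative placeholders only.
import numpy as np

def upset_xs(let, L0=3.0, W=30.0, s=1.5, sigma_sat=1.0e-5):
    """Assumed per-bit upset cross section (cm^2/bit) vs effective LET (MeV*cm^2/mg)."""
    return np.where(let > L0, sigma_sat * (1.0 - np.exp(-((let - L0) / W) ** s)), 0.0)

let = np.logspace(0, 2, 400)                       # effective LET grid, MeV*cm^2/mg
diff_fluence = 1.0e6 * let**-3.0                   # assumed differential fluence behind shielding

integrand = upset_xs(let) * diff_fluence
integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(let))   # trapezoid rule

n_bits = 8 * 1024                                  # eight 1-kbit (93L422-class) chips, order of magnitude
print(f"expected heavy-ion upsets: {n_bits * integral:.1f}")
```

The proton contribution would be computed the same way, with the proton fluence spectrum folded with a proton upset cross-section curve such as the Bendel parameterization mentioned in the abstract.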
11. Photon- and pion-induced reactions in the few body systems
International Nuclear Information System (INIS)
Laget, J.M.
1985-05-01
The study of the interplay between the degrees of freedom of the many-body nuclear system and the internal degrees of freedom of its constituents is reviewed. First, nucleon-nucleon interaction mechanisms are recalled in relation to the interaction range. It appears that pion- and photon-induced reactions should provide two complementary ways to disentangle these various mechanisms. Most of the pion- and photon-induced reactions performed until now can be understood in terms of nucleons, pions and deltas. After a short description of the method of analysis of these reactions, however, it is shown that this agreement is achieved at the price of adjusting two parameters (the πNN form factor and the rho-nucleon coupling constant) which may simulate more subtle short-range effects. The relevance of analysing the same reactions in terms of quark degrees of freedom is then discussed briefly.
12. Neutron-induced cross sections of actinides via the surrogate-reaction method
Directory of Open Access Journals (Sweden)
Ducasse Q.
2013-12-01
Full Text Available The surrogate-reaction method is an indirect way of determining cross sections for reactions that proceed through a compound nucleus. This technique may enable neutron-induced cross sections to be extracted for short-lived nuclei that otherwise cannot be measured. However, the validity of the surrogate method has to be investigated. In particular, the possible absence of compound-nucleus formation and the Jπ dependence of the decay probabilities may call the method into question. In this work we study the reactions 238U(d,p)239U, 238U(3He,t)238Np and 238U(3He,4He)237U as surrogates for neutron-induced reactions on 238U, 237Np and 236U, respectively, for which good-quality data exist. The experimental set-up enabled the measurement of fission and gamma-decay probabilities. The first results are presented here.
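The relation underlying the surrogate method, in its simplest (Weisskopf-Ewing) limit, is σ(n,f)(E_n) ≈ σ_CN(E_n) · P_f(E*), with E* = E_n + S_n: a calculated compound-nucleus formation cross section is multiplied by the decay probability measured in the surrogate reaction. The sketch below illustrates this combination; all numerical values are placeholders introduced here, not data from this work, and the Jπ dependence that the abstract flags as a concern is deliberately ignored, as it is in the Weisskopf-Ewing limit.

```python
# Hedged sketch of the Weisskopf-Ewing limit behind the surrogate method:
# sigma_nf(E_n) ~ sigma_CN(E_n) * P_f(E*), with E* = E_n + S_n. sigma_CN would come from
# an optical-model calculation and P_f(E*) from the surrogate measurement.
# All numbers below are illustrative placeholders, not data from this work.
import numpy as np

S_N = 6.5                                        # assumed neutron separation energy (MeV)
e_star_meas = np.array([6.0, 7.0, 8.0, 9.0])     # excitation energies of measured P_f (MeV)
p_f_meas = np.array([0.05, 0.20, 0.30, 0.35])    # assumed measured fission probabilities

def sigma_nf(E_n_MeV, sigma_cn_mb):
    p_f = np.interp(E_n_MeV + S_N, e_star_meas, p_f_meas)   # P_f evaluated at E* = E_n + S_n
    return sigma_cn_mb * p_f

# e.g. E_n = 1 MeV with an assumed compound-formation cross section of ~3 b
print(f"estimated (n,f) cross section: {sigma_nf(1.0, 3000.0):.0f} mb")
```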
13. Reaction
African Journals Online (AJOL)
abp
19 oct. 2017 ... Reaction to Mohamed Said Nakhli et al. concerning the article: "When the axillary block remains the only alternative in a 5 year old child". .... Bertini L1, Savoia G, De Nicola A, Ivani G, Gravino E, Albani A et al ... 2010;7(2):101-.
14. Pre-hospital treatment of bee and wasp induced anaphylactic reactions
DEFF Research Database (Denmark)
Ruiz Oropeza, Athamaica; Mikkelsen, Søren; Bindslev-Jensen, Carsten
2017-01-01
BACKGROUND: Bee and wasp stings are among the most common triggers of anaphylaxis in adults representing around 20% of fatal anaphylaxis from any cause. Data of pre-hospital treatment of bee and wasp induced anaphylactic reactions are sparse. This study aimed to estimate the incidence of bee...... only for Odense and 2009-2014 for the whole region). Discharge summaries with diagnosis related to anaphylaxis according to the International Classification of Diseases 10 (ICD-10) were reviewed to identify bee and wasp induced anaphylactic reactions. The severity of the anaphylactic reaction...... was assessed according to Sampson's severity score and Mueller's severity score. Treatment was evaluated in relation to administration of adrenaline, glucocorticoids and antihistamine. RESULTS: We identified 273 cases (Odense 2008 n = 14 and Region of Southern Denmark 2009-2014 n = 259) of bee and wasp induced...
15. Investigation of nucleon-induced reactions in the Fermi energy domain within the microscopic DYWAN model
Energy Technology Data Exchange (ETDEWEB)
Sebille, F.; Bonilla, C. [SUBATECH, Universite de Nantes, CNRS/IN2P3, 44 - Nantes (France); Blideanu, V.; Lecolley, J.F. [Laboratoire de Physique Corpusculaire, ENSICAEN, Universite de Caen, IN2P3-CNRS, 14 - Caen (France)
2004-06-01
A microscopic investigation of nucleon-induced reactions is addressed within the DYWAN model, which is based on the projection methods of out-of-equilibrium statistical physics and on the mathematical theory of wavelets. Due to a strongly compressed representation of the fermionic wave-functions, the numerical simulations of the nucleon transport in the target are therefore able to preserve the quantum nature of the colliding system, as well as a least biased many-body information needed to keep track of the cluster formation. Special attention is devoted to the fingerprints of the phase space topology induced by the fluctuations of the self-consistent mean-field. Comparisons between theoretical results and experimental data point out that ETDHF type approaches are well suited to describe reaction mechanisms in the Fermi energy domain. The observed sensitivity to physical effects shows that the nucleon-induced reactions provide a valuable probe of the nuclear interaction in this range of energy. (authors)
16. Nonelastic nuclear reactions induced by light ions with the BRIEFF code
CERN Document Server
Duarte, H
2010-01-01
The intranuclear cascade (INC) code BRIC has been extended to compute nonelastic reactions induced by light ions on target nuclei. In our approach the nucleons of the incident light ion move freely inside the mean potential of the ion in its center-of-mass frame, while the center-of-mass of the ion obeys equations of motion dependent on the mean nuclear+Coulomb potential of the target nucleus. After transformation of the positions and momenta of the nucleons of the ion into the target nucleus frame, the collision term between the nucleons of the target and of the ion is computed taking into account the partial or total breakup of the ion. For reactions induced by low binding energy systems like the deuteron, the Coulomb breakup of the ion at the surface of the target nucleus is an important feature. Preliminary results of nucleon production in light ion induced reactions are presented and discussed.
17. Pressure induced reactions amongst calcium aluminate hydrate phases
KAUST Repository
Moon, Ju-hyuk
2011-06-01
The compressibilities of two AFm phases (strätlingite and calcium hemicarboaluminate hydrate) and hydrogarnet were obtained up to 5 GPa by using synchrotron high-pressure X-ray powder diffraction with a diamond anvil cell. The AFm phases show abrupt volume contraction regardless of the molecular size of the pressure-transmitting media. This volume discontinuity could be associated to a structural transition or to the movement of the weakly bound interlayer water molecules in the AFm structure. The experimental results seem to indicate that the pressure-induced dehydration is the dominant mechanism especially with hygroscopic pressure medium. The Birch-Murnaghan equation of state was used to compute the bulk modulus of the minerals. Due to the discontinuity in the pressure-volume diagram, a two stage bulk modulus of each AFm phase was calculated. The abnormal volume compressibility for the AFm phases caused a significant change to their bulk modulus. The reliability of this experiment is verified by comparing the bulk modulus of hydrogarnet with previous studies. © 2011 Elsevier Ltd. All rights reserved.
18. Evaluation of neutron-induced reactions in 48Ti and 238U
International Nuclear Information System (INIS)
Carlson, B.V.; Fiorentino, J.; Frederico, T.; Isidro Filho, M.P.; Mastroleo, R.C.; Rego, R.A.
1984-05-01
Preliminary results of the evaluation of neutron-induced reactions in 48 Ti and 238 U are presented. Calculated cross sections for the reactions (n,γ), (n,n'), (n, 2n) and (n,p) as well as for (n,f) in 238 U are given. Comparisons with available experimental data are made and possible changes in the parameters are discussed. (Author) [pt
19. Evaluation of reactor induced (n,p) reactions for activation analysis of titanium in geological materials
Energy Technology Data Exchange (ETDEWEB)
Espinosa Garcia, R; Cohen, I M [Comision Nacional de Energia Atomica, Buenos Aires (Argentina)
1984-05-01
The possibilities of reactor induced (n,p) reactions as a tool for neutron activation analysis of titanium in geological samples are discussed. The interference of calcium and scandium is experimentally evaluated. Results for Ti, Ca and Sc in GSP-1 and PCC-1 standard rocks are presented. Based on the experimental values, it is concluded that the /sup 47/Ti(n,p)/sup 47/Sc reaction is the most favourable for titanium determination. 11 refs.
20. Cross-sections of spallation residues produced in Proton –Induced reactions
International Nuclear Information System (INIS)
Al-Haydari, A.; Khan, A.A.; Abdul Ganai, A.; Hassan, G.S.
2013-01-01
The recently available GSI data for proton-induced spallation reactions, obtained using inverse kinematics at different energies, are analyzed for different reactions in terms of the percolation model together with the intranuclear cascade model (MCAS). The cross sections for production of light ions and isotopes are calculated as a function of mass and charge number. The results of the calculations are in good agreement with experiment
1. Al-based metal matrix composites reinforced with Al–Cu–Fe quasicrystalline particles: Strengthening by interfacial reaction
International Nuclear Information System (INIS)
Ali, F.; Scudino, S.; Anwar, M.S.; Shahid, R.N.; Srivastava, V.C.; Uhlenwinkel, V.; Stoica, M.; Vaughan, G.; Eckert, J.
2014-01-01
Highlights: • Strength of composites is enhanced as the QC-to-ω phase transformation advances. • Yield strength increases from 195 to 400 MPa with QC-to-ω interfacial reaction. • Reducing matrix ligament size explains most of the strengthening. • Improved interfacial bonding and nano ω phase explains divergence from model. - Abstract: The interfacial reaction between the Al matrix and the Al 62.5 Cu 25 Fe 12.5 quasicrystalline (QC) reinforcing particles to form the Al 7 Cu 2 Fe ω-phase has been used to further enhance the strength of the Al/QC composites. The QC-to-ω phase transformation during heating was studied by in situ X-ray diffraction using a high-energy monochromatic synchrotron beam, which permits to follow the structural evolution and to correlate it with the mechanical properties of the composites. The mechanical behavior of these transformation-strengthened composites is remarkably improved as the QC-to-ω phase transformation progresses: the yield strength increases from 195 MPa for the starting material reinforced exclusively with QC particles to 400 MPa for the material where the QC-to-ω reaction is complete. The reduction of the matrix ligament size resulting from the increased volume fraction of the reinforcing phase during the transformation can account for most of the observed improvement in strength, whereas the additional strengthening can be ascribed to the possible presence of nanosized ω-phase particles as well as to the improved interfacial bonding between matrix and particles caused by the compressive stresses arising in the matrix
2. Al-based metal matrix composites reinforced with Al–Cu–Fe quasicrystalline particles: Strengthening by interfacial reaction
Energy Technology Data Exchange (ETDEWEB)
Ali, F. [IFW Dresden, Institut für Komplexe Materialien, Postfach 27 01 16, D-01171 Dresden (Germany); Materials Processing Group, DMME, Pakistan Institute of Engineering and Applied Sciences, P.O. Nilore, Islamabad (Pakistan); Scudino, S., E-mail: [email protected] [IFW Dresden, Institut für Komplexe Materialien, Postfach 27 01 16, D-01171 Dresden (Germany); Anwar, M.S.; Shahid, R.N. [Materials Processing Group, DMME, Pakistan Institute of Engineering and Applied Sciences, P.O. Nilore, Islamabad (Pakistan); Srivastava, V.C. [Metal Extraction and Forming Division, National Metallurgical Laboratory, Jamshedpur 831007 (India); Uhlenwinkel, V. [Institut für Werkstofftechnik, Universität Bremen, D-28359 Bremen (Germany); Stoica, M. [IFW Dresden, Institut für Komplexe Materialien, Postfach 27 01 16, D-01171 Dresden (Germany); Vaughan, G. [European Synchrotron Radiation Facilities ESRF, BP 220, 38043 Grenoble (France); Eckert, J. [IFW Dresden, Institut für Komplexe Materialien, Postfach 27 01 16, D-01171 Dresden (Germany); TU Dresden, Institut für Werkstoffwissenschaft, D-01062 Dresden (Germany)
2014-09-01
Highlights: • Strength of composites is enhanced as the QC-to-ω phase transformation advances. • Yield strength increases from 195 to 400 MPa with QC-to-ω interfacial reaction. • Reducing matrix ligament size explains most of the strengthening. • Improved interfacial bonding and nano ω phase explains divergence from model. - Abstract: The interfacial reaction between the Al matrix and the Al{sub 62.5}Cu{sub 25}Fe{sub 12.5} quasicrystalline (QC) reinforcing particles to form the Al{sub 7}Cu{sub 2}Fe ω-phase has been used to further enhance the strength of the Al/QC composites. The QC-to-ω phase transformation during heating was studied by in situ X-ray diffraction using a high-energy monochromatic synchrotron beam, which permits to follow the structural evolution and to correlate it with the mechanical properties of the composites. The mechanical behavior of these transformation-strengthened composites is remarkably improved as the QC-to-ω phase transformation progresses: the yield strength increases from 195 MPa for the starting material reinforced exclusively with QC particles to 400 MPa for the material where the QC-to-ω reaction is complete. The reduction of the matrix ligament size resulting from the increased volume fraction of the reinforcing phase during the transformation can account for most of the observed improvement in strength, whereas the additional strengthening can be ascribed to the possible presence of nanosized ω-phase particles as well as to the improved interfacial bonding between matrix and particles caused by the compressive stresses arising in the matrix.
3. Heterogeneous reaction of HO2 with airborne TiO2 particles and its implication for climate change mitigation strategies
Science.gov (United States)
Moon, Daniel R.; Taverna, Giorgio S.; Anduix-Canto, Clara; Ingham, Trevor; Chipperfield, Martyn P.; Seakins, Paul W.; Baeza-Romero, Maria-Teresa; Heard, Dwayne E.
2018-01-01
One geoengineering mitigation strategy for global temperature rises resulting from the increased concentrations of greenhouse gases is to inject particles into the stratosphere to scatter solar radiation back to space, with TiO2 particles emerging as a possible candidate. Uptake coefficients of HO2, γ(HO2), onto sub-micrometre TiO2 particles were measured at room temperature and different relative humidities (RHs) using an atmospheric pressure aerosol flow tube coupled to a sensitive HO2 detector. Values of γ(HO2) increased from 0.021 ± 0.001 to 0.036 ± 0.007 as the RH was increased from 11 to 66 %, and the increase in γ(HO2) correlated with the number of monolayers of water surrounding the TiO2 particles. The impact of the uptake of HO2 onto TiO2 particles on stratospheric concentrations of HO2 and O3 was simulated using the TOMCAT three-dimensional chemical transport model. The model showed that, when injecting the amount of TiO2 required to achieve the same cooling effect as the Mt Pinatubo eruption, heterogeneous reactions between HO2 and TiO2 would have a negligible effect on stratospheric concentrations of HO2 and O3.
4. Heterogeneous reaction of HO2 with airborne TiO2 particles and its implication for climate change mitigation strategies
Directory of Open Access Journals (Sweden)
D. R. Moon
2018-01-01
Full Text Available One geoengineering mitigation strategy for global temperature rises resulting from the increased concentrations of greenhouse gases is to inject particles into the stratosphere to scatter solar radiation back to space, with TiO2 particles emerging as a possible candidate. Uptake coefficients of HO2, γ(HO2, onto sub-micrometre TiO2 particles were measured at room temperature and different relative humidities (RHs using an atmospheric pressure aerosol flow tube coupled to a sensitive HO2 detector. Values of γ(HO2 increased from 0.021 ± 0.001 to 0.036 ± 0.007 as the RH was increased from 11 to 66 %, and the increase in γ(HO2 correlated with the number of monolayers of water surrounding the TiO2 particles. The impact of the uptake of HO2 onto TiO2 particles on stratospheric concentrations of HO2 and O3 was simulated using the TOMCAT three-dimensional chemical transport model. The model showed that, when injecting the amount of TiO2 required to achieve the same cooling effect as the Mt Pinatubo eruption, heterogeneous reactions between HO2 and TiO2 would have a negligible effect on stratospheric concentrations of HO2 and O3.
5. Pneumocystis carinii in bronchoalveolar lavage and induced sputum: detection with a nested polymerase chain reaction
DEFF Research Database (Denmark)
Skøt, J; Lerche, A G; Kolmos, H J
1995-01-01
To evaluate polymerase chain reaction (PCR) for detection of Pneumocystis carinii, 117 bronchoalveolar lavage (BAL) specimens, from HIV-infected patients undergoing a diagnostic bronchoscopy, were processed and a nested PCR, followed by Southern blot and hybridization with a P32-labelled probe......, but sensitivity dropped markedly with this system. A further 33 patients had both induced sputum and bronchoalveolar lavage performed and the induced sputum was analysed using PCR and routine microbiological methods. The PCR sensitivity on induced sputum was equal to that of routine methods. At present...... the evaluated PCR cannot replace routine microbiological methods for detection of Pneumocystis carinii, on either BAL fluid or induced sputum....
6. Systematic trends in photonic reagent induced reactions in a homologous chemical family.
Science.gov (United States)
Tibbetts, Katharine Moore; Xing, Xi; Rabitz, Herschel
2013-08-29
The growing use of ultrafast laser pulses to induce chemical reactions prompts consideration of these pulses as "photonic reagents" in analogy to chemical reagents. This work explores the prospect that photonic reagents may affect systematic trends in dissociative ionization reactions of a homologous family of halomethanes, much as systematic outcomes are often observed for reactions between homologous families of chemical reagents and chemical substrates. The experiments in this work with photonic reagents of varying pulse energy and linear spectral chirp reveal systematic correlations between observable ion yields and the following set of natural variables describing the substrate molecules: the ionization energy of the parent molecule, the appearance energy of each fragment ion, and the relative strength of carbon-halogen bonds in molecules containing two different halogens. The results suggest that reactions induced by photonic reagents exhibit systematic behavior analogous to that observed in reactions driven by chemical reagents, which provides a basis to consider empirical "rules" for predicting the outcomes of photonic reagent induced reactions.
7. Analysis of reaction cross-section production in neutron induced fission reactions on uranium isotope using computer code COMPLET.
Science.gov (United States)
Asres, Yihunie Hibstie; Mathuthu, Manny; Birhane, Marelgn Derso
2018-04-22
This study provides current evidence about cross-section production processes in the theoretical and experimental results of neutron induced reaction of uranium isotope on projectile energy range of 1-100 MeV in order to improve the reliability of nuclear stimulation. In such fission reactions of 235 U within nuclear reactors, much amount of energy would be released as a product that able to satisfy the needs of energy to the world wide without polluting processes as compared to other sources. The main objective of this work is to transform a related knowledge in the neutron-induced fission reactions on 235 U through describing, analyzing and interpreting the theoretical results of the cross sections obtained from computer code COMPLET by comparing with the experimental data obtained from EXFOR. The cross section value of 235 U(n,2n) 234 U, 235 U(n,3n) 233 U, 235 U(n,γ) 236 U, 235 U(n,f) are obtained using computer code COMPLET and the corresponding experimental values were browsed by EXFOR, IAEA. The theoretical results are compared with the experimental data taken from EXFOR Data Bank. Computer code COMPLET has been used for the analysis with the same set of input parameters and the graphs were plotted by the help of spreadsheet & Origin-8 software. The quantification of uncertainties stemming from both experimental data and computer code calculation plays a significant role in the final evaluated results. The calculated results for total cross sections were compared with the experimental data taken from EXFOR in the literature, and good agreement was found between the experimental and theoretical data. This comparison of the calculated data was analyzed and interpreted with tabulation and graphical descriptions, and the results were briefly discussed within the text of this research work. Copyright © 2018 The Authors. Published by Elsevier Ltd.. All rights reserved.
8. The decay of hot nuclei formed in La-induced reactions at intermediate energies
International Nuclear Information System (INIS)
Libby, B.; Mignerey, A.C.; Madani, H.; Marchetti, A.A.; Colonna, M.; DiToro, M.
1992-01-01
The decay of hot nuclei formed in lanthanum-induced reactions utilizing inverse kinematics has been studied from E/A = 35 to 55 MeV. At each bombarding energy studied, the probability for the multiple emission of complex fragments has been found to be independent of target. Global features (total charge, source velocity) of the reaction La + Al at E/A = 45 MeV have been reproduced by coupling a dynamical model to study the collision stage of the reaction to a statistical model of nuclear decay
9. Excitation functions for quasi-elastic transfer reactions induced with heavy ions in bismuth
International Nuclear Information System (INIS)
Gardes, D.; Bimbot, R.; Maison, J.; Reilhac, L. de; Rivet, M.F.; Fleury, A.; Hubert, F.; Llabador, Y.
1977-01-01
The excitation functions for the production of 210 Bi, 210 Po, sup(207-211)At and 211 Rn through quasi-elastic transfer reactions induced with heavy ions in 209 Bi have been measured. The corresponding reactions involved the transfer of one neutron, one proton, two and three charges from projectile to target. The projectiles used were 12 C, 14 N, 16 O, 19 F, 20 Ne, 40 Ca, 56 Fe and 63 Cu. The experimental techniques involved target irradiations and off-line α and γ activity measurements. Chemical separations were used to solve specific problems. Careful measurements of incident energies and cross sections were performed close to the reaction thresholds
10. Skin reactions to histamine of healthy subjects after hypnotically induced emotions of sadness, anger, and happiness.
Science.gov (United States)
Zachariae, R; Jørgensen, M M; Egekvist, H; Bjerring, P
2001-08-01
The severity of symptoms in asthma and other hypersensitivity-related disorders has been associated with changes in mood but little is known about the mechanisms possibly mediating such a relationship. The purpose of this study was to examine the influence of mood on skin reactivity to histamine by comparing the effects of hypnotically induced emotions on flare and wheal reactions to cutaneous histamine prick tests. Fifteen highly hypnotically susceptible volunteers had their cutaneous reactivity to histamine measured before hypnosis at 1, 2, 3, 4, 5, 10, and 15 min after the histamine prick. These measurements were repeated under three hypnotically induced emotions of sadness, anger, and happiness presented in a counterbalanced order. Skin reactions were measured as change in histamine flare and wheal area in mm2 per minute. The increase in flare reaction in the time interval from 1 to 3 min during happiness and anger was significantly smaller than flare reactions during sadness (P<0.05). No effect of emotion was found for wheal reactions. Hypnotic susceptibility scores were associated with increased flare reactions at baseline (r=0.56; P<0.05) and during the condition of happiness (r=0.56; P<0.05). Our results agree with previous studies showing mood to be a predictor of cutaneous immediate-type hypersensitivity and histamine skin reactions. The results are also in concordance with earlier findings of an association between hypnotic susceptibility and increased reactivity to an allergen.
11. Structural characterizaiton and gas reactions of small metal particles by high-resolution, in-situ TEM and TED
Science.gov (United States)
1984-01-01
The existing in-situ transmission electron microscopy (TEM) facility was improved by adding a separately pumped mini-specimen chamber. The chamber contains wire-evaporation sources for three metals and a specimen heater for moderate substrate temperatures. A sample introduction device was constructed, installed, and tested, facilitating rapid introduction of a specimen into the mini-chamber while maintaining the background pressure in that chamber in the 10(-9) millibar range. Small particles and clusters of Pd, grown by deposition from the vapor phase in an in-situ TEM facility on amorphous and crystalline support films of alumina and on ultra-thin carbon films, were analyzed by conventional high-resolution TEM and image analysis in terms of detectability, number density, and size distribution. The smallest particles that could be detected and counted contained no more than 6 atoms; size determinations could be made for particles 1 nm in diameter. The influence of various oxygen plasma treatments, annealing treatments, and of increasing the substrate temperature during deposition was investigated. The TEM technique was employed to demonstrate that under otherwise identical conditions the lattice parameter of Pd particles in the 1 to 2 nm size range and supported in random orientation on ex-situ prepared mica films is expanded by some 3% when compared to 5 nm size particles. It is believed that this expansion is neither a small-particle diffraction effect nor due to pseudomorphism, but that it is due to an annealing-induced transformation of the small as-deposited particles with predominantly composite crystal structures into larger particles with true f.c.c. structure and thus inherently smaller lattice parameter.
12. Fusion reactions initiated by laser-accelerated particle beams in a laser-produced plasma
International Nuclear Information System (INIS)
Labaune, C.; Baccou, C.; Loisel, G.; Yahia, V.; Depierreux, S.; Goyon, C.; Rafelski, J.
2013-01-01
The advent of high-intensity-pulsed laser technology enables the generation of extreme states of matter under conditions that are far from thermal equilibrium. This in turn could enable different approaches to generating energy from nuclear fusion. Relaxing the equilibrium requirement could widen the range of isotopes used in fusion fuels permitting cleaner and less hazardous reactions that do not produce high-energy neutrons. Here we propose and implement a means to drive fusion reactions between protons and boron-11 nuclei by colliding a laser-accelerated proton beam with a laser-generated boron plasma. We report proton-boron reaction rates that are orders of magnitude higher than those reported previously. Beyond fusion, our approach demonstrates a new means for exploring low-energy nuclear reactions such as those that occur in astrophysical plasmas and related environments. (authors)
13. Approximation of the cross-sections for charged-particle emission reactions near the threshold
International Nuclear Information System (INIS)
1990-01-01
We perform an analytical approximation of the energy dependence of the cross-sections for the reactions (n,p) and (n,γ) from the BOSPOR library, correct them for the latest differential and integral experimental data using the common features, characteristic of the energy dependence of the threshold reaction cross-section and making some physical assumptions. 19 refs, 1 fig., 1 tab
14. Light induced heterogeneous ozone processing on the pesticides adsorbed on silica particles
Science.gov (United States)
Socorro, J.; Désert, M.; Quivet, E.; Gligorovski, S.; Wortham, H.
2013-12-01
In France, in 2010, the sales of pesticides reached 1.8 billion euros for 61 900 tons of active ingredients, positioning France as the first European consumer of pesticides, as reported by the European Crop Protection Association. About 19 million hectares of crops are sprayed annually with pesticides, i.e., 35% of the total surface area of France. This corresponds to an average pesticide dose of 3.2 kg ha-1. The consumption of herbicide and fungicide is favoured in comparison to the use of insecticides in France and the other European countries, as well. The partitioning of pesticides between the gas and particulate phases influences the atmospheric fate of these compounds such as their photo-chemical degradation. There is much uncertainty concerning the behavior of the pesticides in the atmosphere. Especially, there is a gap of knowledge concerning the degradation of the pesticides induced by heterogeneous reactions in the absence and especially in the presence of solar light. Considering that most of the pesticides currently used are semi-volatile, it is of crucial importance to investigate the heterogeneous reactivity of particulate pesticides with light and with atmospheric oxidants such as ozone and the OH radical. The aim of the present work is to evaluate the light-induced heterogeneous ozonation of suspended pesticide particles. 8 pesticides (cyprodinil, deltamethrin, difenoconazole, fipronil, oxadiazon, pendimethalin, permethrin and tetraconazole) were chosen for their physico-chemical properties and their concentration levels in the PACA (Région Provence-Alpes-Côte d'Azur) region, France. Silica particles with well-known properties were chosen as model particles of atmospheric relevance. Kinetic rate constants were determined to allow estimation of the atmospheric lifetimes with respect to ozone. The rate constants were determined as follows: k = (6.6 ± 0.2) × 10^-19, (7.2 ± 0.3) × 10^-19, (5.1 ± 0.5) × 10^-19, (3.9 ± 0.3) × 10^-19 [cm3 molecule-1 s-1] for Cyprodinil
15. Multifragmentation in 30 MeV/u [sup 129]Xe induced reactions
Energy Technology Data Exchange (ETDEWEB)
Colonna, N. (INFN, Bari (Italy)); Bowman, D.R. (Chalk River Lab. (Canada)); Brambilla, S. (Dipt. di Fisica and INFN, Milan (Italy)); Bruno, M. (Dipt. di Fisica and INFN, Bologna (Italy)); Buttazzo, P. (Dipt. di Fisica and INFN, Trieste (Italy)); Celano, L. (INFN, Bari (Italy)); Chiodini, S. (Dipt. di Fisica and INFN, Milan (Italy)); D' Agostino, M. (Dipt. di Fisica and INFN, Bologna (Italy)); Dinius, J.D. (Michigan State Univ., East Lansing, MI (United States). National Superconducting Cyclotron Lab.); Ferrero, A. (Dipt. di Fisica and INFN, Milan (Italy)); Gelbke, K. (Michigan State Univ., East Lansing, MI (United States). National Superconducting Cyclotron Lab.); Glasmacher, T. (Michigan State Univ., East Lansing, MI (United States). National Superconducting Cyclotron Lab.); Gramegna, F. (INFN, Lab. Nazionali, Legnaro (Italy)); Handzy, D.O. (Michigan State Univ., East Lansing, MI (United States). National Superconducting Cyclotron Lab.); Horn, D. (Chalk River Lab. (Canad
1994-01-01
Multifragmentation processes in the 30 MeV/u [sup 129]Xe + [sup nat]Cu and [sup 129]Xe + [sup 197]Au reactions were studied with a low threshold, high-granularity 4[pi] detection system. Fragment velocity distributions have been measured at forward angles as a function of the total charged particle multiplicity. While a two-source pattern is observed for the peripheral collisions, no evidence of a dinuclear emission pattern is found for the most central collisions. Kinematical observables, such as fragment relative velocity, relative angle and reduced velocity correlation functions indicate a fast timescale of the multifragmentation process in these reactions. (orig.)
16. α-cluster model for the multiple emission of particles in the reaction 90Zr (e, α)
International Nuclear Information System (INIS)
Guevara, Y.M.; Garcia, C.; Hoyos, O.E.R.; Rodriguez, T. E.; Arruda-Neto, J.D.T.
2011-01-01
We present a methodology based on the model of photoabsorption by an N-α cluster for a better understanding of the puzzling steady-increase behavior of the 90 Zr (e, α) yield obtained experimentally in the energy range of the giant dipole resonance (GDR) and the quasi-deuteron (QD). The calculation takes into account the emission of protons, neutrons and alpha particles in the framework of the reaction (which was used for the Intranuclear Cascade model (MCMC)). The statistical decay of the compound nucleus is described by Monte Carlo techniques in terms of competition between evaporation of particles (p, n, d, α, 3 He, t) and nuclear fission, but for our specific case (the reaction e + 90 Zr in an energy range between 20 and 140 MeV) the fission channel does not have a high probability of occurrence. The results reproduce quite successfully the experimental data, suggesting that pre-equilibrium emission of alpha particles is essential for the interpretation of this exotic increase of the cross sections. (Author)
17. Estimation of the α particles and neutron distribution generated during a fusion reaction; Evaluation de la distribution des particules α et des neutrons issus de la reaction de fusion
Energy Technology Data Exchange (ETDEWEB)
Dellacherie, S.
1997-12-01
The respective distributions (or probability densities) of α particles and neutrons have been modeled using a Monte-Carlo method for the thermonuclear fusion reaction D + T → α + n + 17.6 MeV. (N.T.).
18. Particle-induced amorphization of complex ceramics. Final report
International Nuclear Information System (INIS)
Ewing, R.C.; Wang, L.M.
1998-01-01
The crystalline-to-amorphous (c-a) phase transition is of fundamental importance. Particle irradiations provide an important, highly controlled means of investigating this phase transformation and the structure of the amorphous state. The interaction of heavy-particles with ceramics is complex because these materials have a wide range of structure types, complex compositions, and because chemical bonding is variable. Radiation damage and annealing can produce diverse results, but most commonly, single crystals become aperiodic or break down into a polycrystalline aggregate. The authors continued the studies of the transition from the periodic-to-aperiodic state in natural materials that have been damaged by α-recoil nuclei in the uranium and thorium decay series and in synthetic, analogous structures. The transition from the periodic to aperiodic state was followed by detailed x-ray diffraction analysis, in-situ irradiation/transmission electron microscopy, high resolution transmission electron microscopy, extended x-ray absorption fine structure spectroscopy/x-ray absorption near edge spectroscopy and other spectroscopic techniques. These studies were completed in conjunction with bulk irradiations that can be completed at Los Alamos National Laboratory or Sandia National Laboratories. Principal questions addressed in this research program included: (1) What is the process at the atomic level by which a ceramic material is transformed into a disordered or aperiodic state? (2) What are the controlling effects of structural topology, bond-type, dose rate, and irradiation temperature on the final state of the irradiated material? (3) What is the structure of the damaged material? (4) What are the mechanisms and kinetics for the annealing of interstitial and aggregate defects in these irradiated ceramic materials? (5) What general criteria may be applied to the prediction of amorphization in complex ceramics?
19. Stopping power and polarization induced in a plasma by a fast charged particle in circular motion
Energy Technology Data Exchange (ETDEWEB)
Villo-Perez, Isidro [Departamento de Electronica, Tecnologia de las Computadoras y Proyectos, Universidad Politecnica de Cartagena, Cartagena (Spain); Arista, Nestor R. [Division Colisiones Atomicas, Centro Atomico Bariloche and Instituto Balseiro, Comision Nacional de Energia Atomica, Bariloche (Argentina); Garcia-Molina, Rafael [Departamento de Fisica, Universidad de Murcia, Murcia (Spain)
2002-03-28
We describe the perturbation induced in a plasma by a charged particle in circular motion, analysing in detail the evolution of the induced charge, the electrostatic potential and the energy loss of the particle. We describe the initial transitory behaviour and the different ways in which convergence to final stationary solutions may be obtained depending on the basic parameters of the problem. The results for the stopping power show a resonant behaviour which may give place to large stopping enhancement values as compared with the case of particles in straight-line motion with the same linear velocity. The results also explain a resonant effect recently obtained for particles in circular motion in magnetized plasmas. (author)
20. Noise-and delay-induced phase transitions of the dimer–monomer surface reaction model
International Nuclear Information System (INIS)
Zeng Chunhua; Wang Hua
2012-01-01
Highlights: ► We study the dimer–monomer surface reaction model. ► We show that noise induces first-order irreversible phase transition (IPT). ► Combination of noise and time-delayed feedback induce first- and second-order IPT. ► First- and second-order IPT is viewed as noise-and delay-induced phase transitions. - Abstract: The effects of noise and time-delayed feedback in the dimer–monomer (DM) surface reaction model are investigated. Applying small delay approximation, we construct a stochastic delayed differential equation and its Fokker–Planck equation to describe the state evolution of the DM reaction model. We show that the noise can only induce first-order irreversible phase transition (IPT) characteristic of the DM model, however the combination of the noise and time-delayed feedback can simultaneously induce first- and second-order IPT characteristics of the DM model. Therefore, it is shown that the well-known first- and second-order IPT characteristics of the DM model may be viewed as noise-and delay-induced phase transitions.
http://mathoverflow.net/questions/137375/topology-generated-by-irreducible-componets-of-gamma-invariant-closed-sets | # topology generated by irreducible componets of $\Gamma$-invariant closed sets
For an analytic space $U$ equipped with an action of a group $\Gamma$, call a subset $Z\subseteq U$ $\Gamma$-closed iff it is a closed analytic subset and each of its irreducible components is an irreducible component of a $\Gamma$-invariant closed analytic set.
When do $\Gamma$-closed sets form a topology? Does it have a name and are there any references discussing it?
What I want to know is the following: under what conditions is the following true:
(i) $\Gamma$-closed sets form a topology. (ii) for an analytic map $f:U\rightarrow U'$, the map $f:U/\Gamma \rightarrow U'/\Gamma'$ being closed and well-defined implies $f_*:U \rightarrow U'$ is closed as well in the topology of $\Gamma$- and $\Gamma'$-closed sets ?
For (i), it is enough to prove that the intersection of two $\Gamma$-closed analytically irreducible closed sets is $\Gamma$-closed.
I am mostly interested in the case when $\Gamma$ is the topological fundamental group of a complex algebraic variety acting on its universal covering space $U_A$. Informally, the idea is to define a somewhat "finer" variant of the etale topology (induced on the inverse limit of finite etale covers of $A(C)$). For $\Gamma$ the algebraic fundamental group and $U$ the inverse limit as above, both (i) and (ii) are easy.
There is a proof of this under the assumption that $U$ and $U'$ are universal covering spaces of projective algebraic varieties with subgroup separable (aka LERF) fundamental groups, by a rather ugly technical argument and not quite in this form (reference, p.11 and p.20). When $A(C)$ is a group, it probably also follows from a purely algebraic proof in a model theory paper (which also generalises to prime characteristic).
https://www.physicsforums.com/threads/centripital-acceleration-question.51531/ | # Centripital Acceleration Question
1. Nov 5, 2004
### pinky2468
I can get the first part, but I am getting tripped up on the second part.
A 220 kg boat is negotiating a circular turn (radius = 32 m) around a buoy. During the turn, the engine causes a net tangential force of magnitude 550 N to be applied to the boat. The initial tangential speed of the boat going around the turn is 5.0 m/s. a) Find the tangential acceleration. b) After the boat is 2.0 s into the turn, find the centripetal acceleration.
So for part a)
F=mr(alpha) (alpha=angular acceleration, I can't make the symbol) alpha= .078 rad/s^2
Tang. acceleration= r(alpha)= 2.5 m/s^2
I am stuck on part b, I think the fact that I am given the initial tangential speed. So do I find the final and take the average?
2. Nov 5, 2004
### doriang101
part b
Since you are given the force of the engine, and the intial speed, you know the acceleration of the boat. From that you can deduce the final speed of the boat after 2 seconds from v2 = v1 + at(t = 2,a=550/massofboat,v1=5m/s). Then the acceleration you seek is simply
a = v^2/R m/s^2
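For anyone who wants to check the arithmetic, here is a small Python sketch of the method doriang101 describes above (the variable names and the script itself are my own illustration, not something from the thread):

```python
# Tangential acceleration from Newton's second law, then the speed after 2 s,
# then the centripetal acceleration at that instant.
m, F_t, v1, r, t = 220.0, 550.0, 5.0, 32.0, 2.0   # kg, N, m/s, m, s

a_t = F_t / m          # tangential acceleration = 2.5 m/s^2 (part a)
v2 = v1 + a_t * t      # tangential speed after 2 s = 10 m/s
a_c = v2**2 / r        # centripetal acceleration ~ 3.1 m/s^2 (part b)

print(a_t, v2, a_c)
```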
https://www.physicsforums.com/threads/linalg-describe-ker-t-using-set-notation-for-t-p-1-r.948524/ | # [LinAlg] Describe Ker(T) using set notation for T:P^1 -> R
• #1
## Homework Statement
2. Let $T: \mathbb{P}^1 \rightarrow \mathbb{R} \text{ be given by } T(p(x)) = \int_0^1 p(x)~dx$ Describe $Ker(T)$ using set notation.
## Homework Equations
$p(x) \in \mathbb{P}^1~|~ p(x) = a_0 + a_1x$
## The Attempt at a Solution
$T: \mathbb{P}^1 \rightarrow \mathbb{R}$ is a mapping/transformation from $\mathbb{P}^1 \text{ to } \mathbb{R}$.
For $p(x)$ in $\mathbb{P}^1 \rm , T(p(x))$ is the image of $p(x)$ under the action of T. For each p(x) in $\mathbb{P}^1 \rm , \int_0^1 p(x)~dx$ is computed as $T(p(x)) = Ax = \int_0^1 p(x)~dx = 0$
For p(x) to be mapped to the null space then T(p(x)) must be 0 which means that $\int_0^1 p(x)~dx=0$ which is really $\int_0^1 a_0 + a_1x~dx = 0$.
Then by integrating we get $a_0x + \frac 1 2 a_1 x^2 ~|_0^1$.
$a_0 + \frac 1 2 a_1 = 0$
$a_0 = -\frac 1 2 a_1$
Then
$Ker T = \{p(x) = a_0 + a_1x ~|~ a_0 = -\frac 1 2 a_1\}$
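As a quick cross-check of this kernel condition (an editorial sketch using sympy, which is not used anywhere in the original thread), integrating a general element of $\mathbb{P}^1$ over $[0,1]$ and solving for $a_0$ reproduces the same constraint:

```python
# T(p) = integral of a0 + a1*x over [0, 1] = a0 + a1/2, so T(p) = 0 iff a0 = -a1/2.
from sympy import symbols, integrate, solve

x, a0, a1 = symbols('x a0 a1')
p = a0 + a1 * x
Tp = integrate(p, (x, 0, 1))   # a0 + a1/2
print(solve(Tp, a0))           # [-a1/2]
```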
• #2
fresh_42
Mentor
## Homework Statement
2. Let $T: \mathbb{P}^1 \rightarrow \mathbb{R} \text{ be given by } T(p(x)) = \int_0^1 p(x)~dx$ Describe $Ker(T)$ using set notation.
## Homework Equations
$p(x) \in \mathbb{P}^1~|~ p(x) = a_0 + a_1x$
## The Attempt at a Solution
$T: \mathbb{P}^1 \rightarrow \mathbb{R}$ is a mapping/transformation from $\mathbb{P}^1 \text{ to } \mathbb{R}$.
For $p(x)$ in $\mathbb{P}^1 \rm , T(p(x))$ is the image of $p(x)$ under the action of T. For each p(x) in $\mathbb{P}^1 \rm , \int_0^1 p(x)~dx$ is computed as $T(p(x)) = Ax = \int_0^1 p(x)~dx = 0$
For p(x) to be mapped to the null space then T(p(x)) must be 0 which means that $\int_0^1 p(x)~dx=0$ which is really $\int_0^1 a_0 + a_1x~dx = 0$.
Then by integrating we get $a_0x + \frac 1 2 a_1 x^2 ~|_0^1$.
$a_0 + \frac 1 2 a_1 = 0$
$a_0 = -\frac 1 2 a_1$
Then
$Ker T = \{p(x) = a_0 + a_1x ~|~ a_0 = -\frac 1 2 a_1\}$
That's correct.
• #3
Ray Vickson
Homework Helper
Dearly Missed
## Homework Statement
2. Let $T: \mathbb{P}^1 \rightarrow \mathbb{R} \text{ be given by } T(p(x)) = \int_0^1 p(x)~dx$ Describe $Ker(T)$ using set notation.
## Homework Equations
$p(x) \in \mathbb{P}^1~|~ p(x) = a_0 + a_1x$
## The Attempt at a Solution
$T: \mathbb{P}^1 \rightarrow \mathbb{R}$ is a mapping/transformation from $\mathbb{P}^1 \text{ to } \mathbb{R}$.
For $p(x)$ in $\mathbb{P}^1 \rm , T(p(x))$ is the image of $p(x)$ under the action of T. For each p(x) in $\mathbb{P}^1 \rm , \int_0^1 p(x)~dx$ is computed as $T(p(x)) = Ax = \int_0^1 p(x)~dx = 0$
For p(x) to be mapped to the null space then T(p(x)) must be 0 which means that $\int_0^1 p(x)~dx=0$ which is really $\int_0^1 a_0 + a_1x~dx = 0$.
Then by integrating we get $a_0x + \frac 1 2 a_1 x^2 ~|_0^1$.
$a_0 + \frac 1 2 a_1 = 0$
$a_0 = -\frac 1 2 a_1$
Then
$Ker T = \{p(x) = a_0 + a_1x ~|~ a_0 = -\frac 1 2 a_1\}$
By $Ker(T)$, do you really mean $\text{Ker} (T)$? They look very different! You get the second one by typing "\text{Ker}" instead of "Ker", as required by LaTeX when used properly.
Anyway, you could write $\text{Ker}(T)$ as
$$\bigcup_{a \in \mathbf{R}} \{ p \in \mathbf{P}^1 | p(x) = a - 2ax \}$$
• #4
Mark44
Mentor
By $Ker(T)$, do you really mean $\text{Ker} (T)$?
Now, now, it's pretty clear what he meant. IMO, this is really nit-picking.
• #5
That's correct.
Great! Thanks! I wasn't quite sure about it. It seemed really simple but really complex at the same time.
Is there a standard style for Ker? $Ker(T)$ is how it's been presented to me.
• #6
Ray Vickson
Homework Helper
Dearly Missed
Now, now, it's pretty clear what he meant. IMO, this is really nit-picking.
I regard it as a contribution to his education. Maybe he did not know about that before, but now he probably does. Certainly it has done him no harm!
• #7
Ray Vickson
Homework Helper
Dearly Missed
Great! Thanks! I wasn't quite sure about it. It seemed really simple but really complex at the same time.
Is there a standard style for Ker? $Ker(T)$ is how it's been presented to me.
Probably mistakenly; LaTeX manuals are pretty explicit about such matters, but sometimes people who write notes and lectures may be c
Great! Thanks! I wasn't quite sure about it. It seemed really simple but really complex at the same time.
Is there a standard style for Ker? $Ker(T)$ is how it's been presented to me.
In the AMSMath package there is a command "\ker" that typesets it in the "approved" way, but I don't think the version of LaTeX available in this forum has it. I think the AMSMath version gives $\text{ker} (T)$ instead of $\text{Ker} (T)$.
• #8
Probably mistakenly; LaTeX manuals are pretty explicit about such matters, but sometimes people who write notes and lectures may be c
In the AMSMath package there is a command "\ker" that typesets it in the "approved" way, but I don't think the version of LaTeX available in this forum has it. I think the AMSMath version gives $\text{ker} (T)$ instead of $\text{Ker} (T)$.
Alright. I'll try to keep that in mind then.
• #9
fresh_42
Mentor
Great! Thanks! I wasn't quite sure about it. It seemed really simple but really complex at the same time.
Is there a standard style for Ker? $Ker(T)$ is how it's been presented to me.
There are standard functions like \lim, \sin, \log which result in $\lim, \sin, \log$ instead of $lim, sin, log$ which I've written without backslash. However, there are many such conventions which don't have a backslash version. Not sure, whether the kernel is among them. Let's test it:
\ker $\longrightarrow \ker$
\Ker $\longrightarrow \Ker$
\im $\longrightarrow \im$
In case of doubt, i.e. if you don't want to try or look it up, there is a version which always works:
\operatorname{works} results in $\operatorname{works}$ instead of simply $works$.
• #12
Mark44
Mentor
Ray Vickson said:
By $Ker(T)$, do you really mean $\text{Ker} (T)$? They look very different!
I wouldn't go so far as to say they look "very different."
Is there a standard style for Ker? $Ker(T)$ is how it's been presented to me.
I regard it as a contribution to his education. Maybe he did not know about that before, but now he probably does. Certainly it has done him no harm!
What @bornofflame wrote was clear and unambiguous LaTeX, quite an accomplishment for a fairly new member here. A critique on such a nit as $Ker(T)$ versus $\text{Ker}(T)$ (the latter of which requires twice the typing for a very marginal difference) could at least have been more diplomatically presented, in my view. By writing "do you really mean ..." there's the suggestion that the one form is incorrect and the other is correct.
• #13
StoneTemplePython
Gold Member
2019 Award
For what it's worth, I can assure you that others have learned some ideas / techniques from Ray's past formatting pointers. Putting the leading slash in front for things like cosine $cos(x) \to \cos(x)$ is the low hanging fruit.
For non-qualifying ones like $\text{Ker}(T)$ what I've started doing is just writing it as $Ker(T)$ and then at the very end, if I remember,
do a find and replace "Ker(" with "\text{Ker}("
in my editor -- Jupyter notebooks in this case
It really is twice as much work if you do it on your own each time you type in the term Ker, but almost no more work if you do the find and replace at the very end before submitting your post.
• #14
fresh_42
Mentor
I agree with @Mark44. The OP used the kernel twice and both cases could easily be recognized as such. We could as well debate whether the kernel could be denoted with a capital K or not, or if an image should be abbreviated by $\operatorname{im}$ or $\operatorname{img}$, or whether $21\,°C$ room temperature is better than $20.5°\,C$.
Edit: Good tip with the replacement, but inconvenient if a) my browser doesn't support it and I don't use an extra editor to write here, or b) there are various locations with different operators. E.g. I regularly have $\operatorname{ad}$, $\operatorname{Ad}$, $\operatorname{GL}(V)$, $\operatorname{ker}$, $\operatorname{rk}$, etc. Ooops, sorry, I am guilty! I write $GL(V)$ and not $\operatorname{GL}(V)$. Mea culpa.
• #15
StoneTemplePython
Gold Member
2019 Award
Edit: Good tip with the replacement, but inconvenient if a) my browser doesn't support it and I don't use an extra editor to write here, or b) there are various locations with different operators. E.g. I regularly have $\operatorname{ad}$, $\operatorname{Ad}$, $\operatorname{GL}(V)$, $\operatorname{ker}$, $\operatorname{rk}$, etc. Ooops, sorry, I am guilty! I write $GL(V)$ and not $\operatorname{GL}(V)$. Mea culpa.
I believe you can copy paste your writeup into a regular text editor like Word and do a find and replace all from there. I think any decent one can do that, though the one I use in Ubuntu messes up spacing for some reason?
I don't have any great ideas when there are multiple problem ones -- though a simple python script that runs through a list of the ones commonly used that aren't "backslashable" comes to mind. If I get around to making such a script maybe I'll share it in the "MATLAB, Maple, Mathematica, LaTeX, Etc" folder...
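As an editorial aside, a minimal sketch of the kind of script mentioned above might look like this (the operator list and function name are my own, purely illustrative choices, not anything posted in the thread):

```python
# Wrap "bare" operator names such as Ker( ... ) in \operatorname{...} so they
# typeset upright; only names followed by an opening parenthesis are touched.
import re

OPS = ["Ker", "GL", "rk"]   # operators with no built-in LaTeX command

def wrap_operators(tex: str) -> str:
    for name in OPS:
        tex = re.sub(r"\b%s\(" % name, r"\\operatorname{%s}(" % name, tex)
    return tex

print(wrap_operators("Ker(T) and GL(V)"))
# -> \operatorname{Ker}(T) and \operatorname{GL}(V)
```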
• #16
fresh_42
Mentor
I believe you can copy paste your writeup into a regular text editor like Word and do a find and replace all from there. I think any decent one can do that, though the one I use in Ubuntu messes up spacing for some reason?
Sure, but a change of platforms isn't very convenient.
I don't have any great ideas when there are multiple problem ones -- though a simple python script that runs through a list of the ones commonly used that aren't "backslashable" comes to mind. If I get around to making such a script maybe I'll share it in the "MATLAB, Maple, Mathematica, LaTeX, Etc" folder...
I installed a little tool which allows me to define my own shortcuts. E.g. $\operatorname{Ker}$ is on my keyboard: Ctrl+o, arrow left, Ker, arrow right. The arrows are there because I encoded \operatorname{} with the brackets included, and the arrows are much faster to type than the brackets. It's enormously helpful, e.g. for expressions like \left. \dfrac{d}{d}\right|_{}
https://www.gradesaver.com/textbooks/math/calculus/calculus-3rd-edition/chapter-11-infinite-series-11-1-sequences-exercises-page-538/44 | ## Calculus (3rd Edition)
$1$
We have $$\lim_{n\to \infty}a_n=\lim_{n\to \infty}\frac{\sqrt{n}}{\sqrt{n}+4}=\lim_{n\to \infty}\frac{1}{1+(4/\sqrt{n})}=1.$$ Hence, by Theorem 1, the sequence $a_n$ converges to $1$.
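A quick numerical sanity check of the same limit (my own illustrative sketch, not part of the textbook solution):

```python
# The ratio sqrt(n)/(sqrt(n)+4) should approach 1 as n grows.
from math import sqrt

for n in (10, 1_000, 100_000):
    print(n, sqrt(n) / (sqrt(n) + 4))
# 10      -> ~0.44
# 1000    -> ~0.89
# 100000  -> ~0.99
```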
http://clay6.com/qa/826/examine-the-consistency-of-the-system-of-equations- | Browse Questions
Examine the consistency of the system of equations: $\quad 2x -y = 5$ $\quad x + y = 4$
Toolbox:
• (i) A matrix is said to be invertible if its inverse exists.
• (ii) If A is a non-singular matrix such that AX=B, then X=$A^{-1}B$.
• Using this we can solve a system of equations which has a unique solution.
The given system of equations is
2x-y=5
x+y=4
This can be written in the form AX=B,
$A=\begin{bmatrix}2 & -1\\1 & 1\end{bmatrix}\;X=\begin{bmatrix}x\\y\end{bmatrix}\;B=\begin{bmatrix}5\\4\end{bmatrix}$
Hence $\begin{bmatrix}2 & -1\\1 & 1\end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix}=\begin{bmatrix}5\\4\end{bmatrix}$
Now let us find the value of the determinant
|A|=2-(-1)=2+1=3$\neq 0$
Hence A is non-singular.
Therefore $A^{-1}$ exists.
Hence the given system is consistent.
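As a quick numerical cross-check (an editorial addition, not part of the original worked solution), the same conclusion follows with numpy:

```python
# det(A) = 3, so A is invertible and the system has the unique solution x = 3, y = 1.
import numpy as np

A = np.array([[2.0, -1.0], [1.0, 1.0]])
B = np.array([5.0, 4.0])
print(np.linalg.det(A))        # 3.0 (up to floating-point rounding)
print(np.linalg.solve(A, B))   # [3. 1.]
```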
https://quizlet.com/subject/equations/?imagesOnly=1 | Study sets
Study sets matching "equations"
21 terms
Equations
variable
equation
substitution
increased
a letter used to represent a quantity that can change.
a mathematical sentence that shows that two expressions are e…
to replace a variable with another number
variable
a letter used to represent a quantity that can change.
equation
a mathematical sentence that shows that two expressions are e…
20 terms
EQUATIONS
x=-11
x=-7
x=3
x=5/3
4+x=-7
6-x=13
4x-2=10
5x+2=8x-3
x=-11
4+x=-7
x=-7
6-x=13
8 terms
Equations
linear equation
algebraic expression
variable
pronumeral
An equation whose graph is a line
an expression that contains at least one variable
A symbol used to represent a quantity that can change
A symbol (usually a letter) that takes the place of a numeral.
linear equation
An equation whose graph is a line
algebraic expression
an expression that contains at least one variable
8 terms
Equations
like terms
coefficient
exponent
equals
have identical variables, with same powers that can be groupe…
A number multiplied by a variable in an algebraic expression.
A mathematical notation indicating the number of times a quan…
to meet or match (i.e. to be the same in size, value or amount)
like terms
have identical variables, with same powers that can be groupe…
coefficient
A number multiplied by a variable in an algebraic expression.
50 terms
Equations
The Big Five Equations
Force of Kinetic Friction
Newton's Law of Gravitation
Newton's Law of Gravitation
w=mg
The Big Five Equations
Force of Kinetic Friction
10 terms
Equations
Equation
Solution
Equivalent Equations
Coefficient
When two things are equal.
A value we can put in place of a variable (such as x) that ma…
When two solutions have the same value.
A number used to multiply a variable. Sometimes a letter stan…
Equation
When two things are equal.
Solution
A value we can put in place of a variable (such as x) that ma…
11 terms
Equations
Variable
One solution
Infinitely many solutions
No solution
A letter or symbol used to represent the value of an unknown.
A solution to a mathematical statement that says there is onl…
A solution to a mathematical statement that says any value fo…
A solution to a mathematical statement that says there is no…
Variable
A letter or symbol used to represent the value of an unknown.
One solution
A solution to a mathematical statement that says there is onl…
33 terms
Equations
x ÷ 7 = 5
11 = x + 7
8 + x = 19
4 = x + 2
x = 35
x = 4
x = 11
x = 2
x ÷ 7 = 5
x = 35
11 = x + 7
x = 4
27 terms
Equations
Variable
Constant
Expression
Equivalent expressions
A letter used to represent a quantity that can change.
A value that does not change.
A mathematical phrase that contains operations, numbers, and/…
Expressions that have the same value.
Variable
A letter used to represent a quantity that can change.
Constant
A value that does not change.
61 terms
Equations
Total Lung Capacity
Functional Residual Capacity
Inspiratory capacity
Elasticity (E)
=IRV+Vt+ERV+RV... =Vital capacity+ Residual volume
= ERV+RV... - The amount of volume that remains in our lung at…
IRV+Vt
- the property by virtue of which an object resists and recov…
Total Lung Capacity
=IRV+Vt+ERV+RV... =Vital capacity+ Residual volume
Functional Residual Capacity
= ERV+RV... - The amount of volume that remains in our lung at…
15 terms
Equations
area of a rectangle
area of a triangle
circumference of a circle
circumference of a semicircle
A=lw
A=(1/2)bh
C= 2πr... C = π d
C=1/2 × pi × diameter... C= 2π*r/2
area of a rectangle
A=lw
area of a triangle
A=(1/2)bh
71 terms
Equations
Torque
Work-Energy Theorem
Completely Inelastic Collision
Hooke's Law
θ = angle between F and the lever arm. (Max force at sin90° =…
F = restoring force, k = spring constant, x = displacement fr…
Torque
θ = angle between F and the lever arm. (Max force at sin90° =…
Work-Energy Theorem
54 terms
Equations
Resistance
Current
Voltage
Power
R = V/I
I = V/R
V = I x R
P = I x V
Resistance
R = V/I
Current
I = V/R
11 terms
Equations
Rational Number
Irrational Number
Coefficient
Constant
Real number that can be written as a ratio of two integers (i…
Real number that cannot be written as a ratio of two integers…
Number in front of a variable
number without a variable in an equation
Rational Number
Real number that can be written as a ratio of two integers (i…
Irrational Number
Real number that cannot be written as a ratio of two integers…
5 terms
Equations
Equivalent Expression
Properties of Operations
Solution
Inequality
Expressions that have the same value.
Rules that help us add, subtract, multiply and divide effecti…
A value that makes an equation true.
A statement that compares two quantities using <, >, ≤, or ≥.
Equivalent Expression
Expressions that have the same value.
Properties of Operations
Rules that help us add, subtract, multiply and divide effecti…
10 terms
Equations
variable
equation
constant
numerator
a letter used to represent a quantity that can change.
a mathematical sentence that shows that two expressions are e…
a value that does not change
the top number in a fraction
variable
a letter used to represent a quantity that can change.
equation
a mathematical sentence that shows that two expressions are e…
7 terms
Equations
like terms
exponent
equals
simplify
have identical variables, with same powers that can be groupe…
A mathematical notation indicating the number of times a quan…
to meet or match (i.e. to be the same in size, value or amount)
to write an expression in a simpler form
like terms
have identical variables, with same powers that can be groupe…
exponent
A mathematical notation indicating the number of times a quan…
11 terms
Equations
Equation
Solution
Coefficient
Division Property of Equality
When two things are equal.
A value we can put in place of a variable (such as x) that ma…
A number in front of the variable
When you divide both sides of an equation by the same nonzero…
Equation
When two things are equal.
Solution
A value we can put in place of a variable (such as x) that ma…
13 terms
Equations
Equation
Expression
Variable
Infinitely many solutions / All real n…
Always has an =. Shows that the left side of the "=" is equal…
Never has an =. Can have variables, numbers, and operators (+…
A letter or symbol used to represent the value of an unknown.
A solution to a mathematical statement that says any value fo…
Equation
Always has an =. Shows that the left side of the "=" is equal…
Expression
Never has an =. Can have variables, numbers, and operators (+…
9 terms
Equations
equation
substitution
expression
evaluate
a mathematical sentence that shows that two expressions are e…
to replace a variable with another number
A mathematical phrase that contains operations, numbers, and/…
find the value of, review
equation
a mathematical sentence that shows that two expressions are e…
substitution
to replace a variable with another number
http://mathoverflow.net/questions/61296/sobolev-slobodeckij-spaces-for-p-infinity | # Sobolev-Slobodeckij spaces for p=infinity
For $1\leq p<\infty$, one approach to defining fractional Sobolev spaces is via Sobolev–Slobodeckij spaces, a generalisation of Hölder continuity. For example, letting $U\subset\mathbb{R}^n$, one sets
$\left\|u\right\|^p_{W^\mu_p(U)} = \left\|u\right\|^p_{W^{\lfloor\mu\rfloor}_p(U)} + \sum \int_U \int_U \frac{|D^\alpha u(x)-D^\alpha u(y)|^p}{|x-y|^{n+p[\mu]}}dxdy$
where $[\mu]=\mu-\lfloor\mu\rfloor$ and the sum is taken over all multi-indices $\alpha$ with $|\alpha|=\lfloor\mu\rfloor$
This is from Chapter 14 of The mathematical theory of finite element methods By Susanne C. Brenner, L. Ridgway Scott.
Does the above hold for $p=\infty$? For example, for $p=\infty$ do we have (or something similar),
$\left\|u\right\|_{W^\mu_\infty(U)} = \left\|u\right\|_{W^{\lfloor\mu\rfloor}_\infty(U)} + \sup_{\alpha}\, \sup_{x,y\in U,\, x\ne y} \frac{|D^\alpha u(x)-D^\alpha u(y)|}{|x-y|^{[\mu]}}$
where the $\sup$ is taken over all multi-indices $\alpha$ with $|\alpha|=\lfloor\mu\rfloor$. Can this be shown by considering the limit of the case $p<\infty$ as $p\rightarrow\infty$?
Yes, these are the Hölder spaces, and they can be regarded as Sobolev–Slobodeckij spaces for $p=\infty$. Actually, for natural (integer) $\mu$ the spaces better suited to functional analysis are the Zygmund spaces (defined with second-order differences). They are special cases of Besov spaces, which are defined for $0 < p\le \infty$, $\mu\in \mathbb R$, and they coincide with the corresponding Sobolev–Slobodeckij spaces in some cases. But there are some differences between the cases $p<\infty$ and $p=\infty$; for example, Hölder spaces are not separable.
On the second question, shown what? It is a definition for $p=\infty$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9408376216888428, "perplexity": 199.09805152445185}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701148402.62/warc/CC-MAIN-20160205193908-00276-ip-10-236-182-209.ec2.internal.warc.gz"} |
http://www.nag.com/numeric/FL/nagdoc_fl24/html/F08/f08paf.html | F08 Chapter Contents
F08 Chapter Introduction
NAG Library Manual
# NAG Library Routine DocumentF08PAF (DGEES)
Note: before using this routine, please read the Users' Note for your implementation to check the interpretation of bold italicised terms and other implementation-dependent details.
## 1 Purpose
F08PAF (DGEES) computes the eigenvalues, the real Schur form $T$, and, optionally, the matrix of Schur vectors $Z$ for an $n$ by $n$ real nonsymmetric matrix $A$.
## 2 Specification
SUBROUTINE F08PAF ( JOBVS, SORT, SELECT, N, A, LDA, SDIM, WR, WI, VS, LDVS, WORK, LWORK, BWORK, INFO)
INTEGER            N, LDA, SDIM, LDVS, LWORK, INFO
REAL (KIND=nag_wp) A(LDA,*), WR(*), WI(*), VS(LDVS,*), WORK(max(1,LWORK))
LOGICAL            SELECT, BWORK(*)
CHARACTER(1)       JOBVS, SORT
EXTERNAL           SELECT
The routine may be called by its LAPACK name dgees.
## 3 Description
The real Schur factorization of $A$ is given by
$A = Z T Z^{\mathrm{T}} ,$
where $Z$, the matrix of Schur vectors, is orthogonal and $T$ is the real Schur form. A matrix is in real Schur form if it is upper quasi-triangular with $1$ by $1$ and $2$ by $2$ blocks. $2$ by $2$ blocks will be standardized in the form
$\begin{pmatrix} a & b \\ c & a \end{pmatrix}$
where $bc<0$. The eigenvalues of such a block are $a\pm\sqrt{bc}$.
Optionally, F08PAF (DGEES) also orders the eigenvalues on the diagonal of the real Schur form so that selected eigenvalues are at the top left. The leading columns of $Z$ form an orthonormal basis for the invariant subspace corresponding to the selected eigenvalues.
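For illustration only, an equivalent computation (real Schur form with selected eigenvalues ordered to the top left) can be reproduced with SciPy's wrapper of the same underlying LAPACK routine (DGEES). This sketch is not the NAG Fortran interface, and the test matrix is arbitrary:

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

# Real Schur form T and orthogonal Schur vectors Z, with eigenvalues in the
# left half-plane sorted to the top left; sdim counts the selected eigenvalues.
T, Z, sdim = schur(A, output='real', sort='lhp')

print(sdim)
print(np.allclose(A, Z @ T @ Z.T))  # True: A = Z T Z^T
```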
## 4 References
Anderson E, Bai Z, Bischof C, Blackford S, Demmel J, Dongarra J J, Du Croz J J, Greenbaum A, Hammarling S, McKenney A and Sorensen D (1999) LAPACK Users' Guide (3rd Edition) SIAM, Philadelphia http://www.netlib.org/lapack/lug
Golub G H and Van Loan C F (1996) Matrix Computations (3rd Edition) Johns Hopkins University Press, Baltimore
## 5 Parameters
1: JOBVS – CHARACTER(1)Input
On entry: if ${\mathbf{JOBVS}}=\text{'N'}$, Schur vectors are not computed.
If ${\mathbf{JOBVS}}=\text{'V'}$, Schur vectors are computed.
Constraint: ${\mathbf{JOBVS}}=\text{'N'}$ or $\text{'V'}$.
2: SORT – CHARACTER(1)Input
On entry: specifies whether or not to order the eigenvalues on the diagonal of the Schur form.
${\mathbf{SORT}}=\text{'N'}$
Eigenvalues are not ordered.
${\mathbf{SORT}}=\text{'S'}$
Eigenvalues are ordered (see SELECT).
Constraint: ${\mathbf{SORT}}=\text{'N'}$ or $\text{'S'}$.
3: SELECT – LOGICAL FUNCTION, supplied by the user.External Procedure
If ${\mathbf{SORT}}=\text{'S'}$, SELECT is used to select eigenvalues to sort to the top left of the Schur form.
If ${\mathbf{SORT}}=\text{'N'}$, SELECT is not referenced and F08PAF (DGEES) may be called with the dummy function F08PAZ.
An eigenvalue ${\mathbf{WR}}\left(j\right)+\sqrt{-1}×{\mathbf{WI}}\left(j\right)$ is selected if ${\mathbf{SELECT}}\left({\mathbf{WR}}\left(j\right),{\mathbf{WI}}\left(j\right)\right)$ is .TRUE.. If either one of a complex conjugate pair of eigenvalues is selected, then both are. Note that a selected complex eigenvalue may no longer satisfy ${\mathbf{SELECT}}\left({\mathbf{WR}}\left(j\right),{\mathbf{WI}}\left(j\right)\right)=\mathrm{.TRUE.}$ after ordering, since ordering may change the value of complex eigenvalues (especially if the eigenvalue is ill-conditioned); in this case INFO is set to ${\mathbf{N}}+2$ (see INFO below).
The specification of SELECT is:
FUNCTION SELECT ( WR, WI)
LOGICAL SELECT
REAL (KIND=nag_wp) WR, WI
1: WR – REAL (KIND=nag_wp)Input
2: WI – REAL (KIND=nag_wp)Input
On entry: the real and imaginary parts of the eigenvalue.
SELECT must either be a module subprogram USEd by, or declared as EXTERNAL in, the (sub)program from which F08PAF (DGEES) is called. Parameters denoted as Input must not be changed by this procedure.
4: N – INTEGERInput
On entry: $n$, the order of the matrix $A$.
Constraint: ${\mathbf{N}}\ge 0$.
5: A(LDA,$*$) – REAL (KIND=nag_wp) arrayInput/Output
Note: the second dimension of the array A must be at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{N}}\right)$.
On entry: the $n$ by $n$ matrix $A$.
On exit: A is overwritten by its real Schur form $T$.
6: LDA – INTEGERInput
On entry: the first dimension of the array A as declared in the (sub)program from which F08PAF (DGEES) is called.
Constraint: ${\mathbf{LDA}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{N}}\right)$.
7: SDIM – INTEGEROutput
On exit: if ${\mathbf{SORT}}=\text{'N'}$, ${\mathbf{SDIM}}=0$.
If ${\mathbf{SORT}}=\text{'S'}$, ${\mathbf{SDIM}}=\text{}$ number of eigenvalues (after sorting) for which SELECT is .TRUE.. (Complex conjugate pairs for which SELECT is .TRUE. for either eigenvalue count as $2$.)
8: WR($*$) – REAL (KIND=nag_wp) arrayOutput
Note: the dimension of the array WR must be at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{N}}\right)$.
On exit: see the description of WI.
9: WI($*$) – REAL (KIND=nag_wp) arrayOutput
Note: the dimension of the array WI must be at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{N}}\right)$.
On exit: WR and WI contain the real and imaginary parts, respectively, of the computed eigenvalues in the same order that they appear on the diagonal of the output Schur form $T$. Complex conjugate pairs of eigenvalues will appear consecutively with the eigenvalue having the positive imaginary part first.
10: VS(LDVS,$*$) – REAL (KIND=nag_wp) arrayOutput
Note: the second dimension of the array VS must be at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{N}}\right)$ if ${\mathbf{JOBVS}}=\text{'V'}$, and at least $1$ otherwise.
On exit: if ${\mathbf{JOBVS}}=\text{'V'}$, VS contains the orthogonal matrix $Z$ of Schur vectors.
If ${\mathbf{JOBVS}}=\text{'N'}$, VS is not referenced.
11: LDVS – INTEGERInput
On entry: the first dimension of the array VS as declared in the (sub)program from which F08PAF (DGEES) is called.
Constraints:
• if ${\mathbf{JOBVS}}=\text{'V'}$, ${\mathbf{LDVS}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{N}}\right)$;
• otherwise ${\mathbf{LDVS}}\ge 1$.
12: WORK($\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{LWORK}}\right)$) – REAL (KIND=nag_wp) arrayWorkspace
On exit: if ${\mathbf{INFO}}={\mathbf{0}}$, ${\mathbf{WORK}}\left(1\right)$ contains the minimum value of LWORK required for optimal performance.
13: LWORK – INTEGERInput
On entry: the dimension of the array WORK as declared in the (sub)program from which F08PAF (DGEES) is called.
If ${\mathbf{LWORK}}=-1$, a workspace query is assumed; the routine only calculates the optimal size of the WORK array, returns this value as the first entry of the WORK array, and no error message related to LWORK is issued.
Suggested value: for optimal performance, LWORK must generally be larger than the minimum, say $3×{\mathbf{N}}+\mathit{nb}×{\mathbf{N}}$, where $\mathit{nb}$ is the optimal block size for F08NEF (DGEHRD).
Constraint: ${\mathbf{LWORK}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,3×{\mathbf{N}}\right)$.
14: BWORK($*$) – LOGICAL arrayWorkspace
Note: the dimension of the array BWORK must be at least $1$ if ${\mathbf{SORT}}=\text{'N'}$, and at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{N}}\right)$ otherwise.
If ${\mathbf{SORT}}=\text{'N'}$, BWORK is not referenced.
15: INFO – INTEGEROutput
On exit: ${\mathbf{INFO}}=0$ unless the routine detects an error (see Section 6).
## 6 Error Indicators and Warnings
Errors or warnings detected by the routine:
${\mathbf{INFO}}<0$
If ${\mathbf{INFO}}=-i$, argument $i$ had an illegal value. An explanatory message is output, and execution of the program is terminated.
${\mathbf{INFO}}>0$
If ${\mathbf{INFO}}=i$ and $i\le {\mathbf{N}}$, the $QR$ algorithm failed to compute all the eigenvalues.
${\mathbf{INFO}}={\mathbf{N}}+1$
The eigenvalues could not be reordered because some eigenvalues were too close to separate (the problem is very ill-conditioned).
${\mathbf{INFO}}={\mathbf{N}}+2$
After reordering, roundoff changed values of some complex eigenvalues so that leading eigenvalues in the Schur form no longer satisfy ${\mathbf{SELECT}}=\mathrm{.TRUE.}$. This could also be caused by underflow due to scaling.
## 7 Accuracy
The computed Schur factorization satisfies
$A+E=Z T Z^{\mathrm{T}} ,$
where
$\left\|E\right\|_2 = O(\epsilon) \left\|A\right\|_2 ,$
and $\epsilon$ is the machine precision. See Section 4.8 of Anderson et al. (1999) for further details.
## 8 Further Comments
The total number of floating point operations is proportional to ${n}^{3}$.
The complex analogue of this routine is F08PNF (ZGEES).
## 9 Example
This example finds the Schur factorization of the matrix
$A = \begin{pmatrix} 0.35 & 0.45 & -0.14 & -0.17 \\ 0.09 & 0.07 & -0.54 & 0.35 \\ -0.44 & -0.33 & -0.03 & 0.17 \\ 0.25 & -0.32 & -0.13 & 0.11 \end{pmatrix} ,$
such that the real eigenvalues of $A$ are the top left diagonal elements of the Schur form, $T$.
Note that the block size (NB) of $64$ assumed in this example is not realistic for such a small problem, but should be suitable for large problems.
### 9.1 Program Text
Program Text (f08pafe.f90)
### 9.2 Program Data
Program Data (f08pafe.d)
### 9.3 Program Results
Program Results (f08pafe.r) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 93, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9924153685569763, "perplexity": 3921.428188027585}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375099105.15/warc/CC-MAIN-20150627031819-00072-ip-10-179-60-89.ec2.internal.warc.gz"} |
https://rd.springer.com/article/10.1007%2Fs00382-019-04814-0 | # Investigating the predictability of North Atlantic sea surface height
• Robert Fraser
• Matthew Palmer
• Christopher Roberts
• Chris Wilson
• Dan Copsey
• Laure Zanna
Open Access
Article
## Abstract
Interannual sea surface height (SSH) forecasts are subject to several sources of uncertainty. Methods relying on statistical forecasts have proven useful in assessing predictability and associated uncertainty due to both initial conditions and boundary conditions. In this study, the interannual predictability of SSH dynamics in the North Atlantic is investigated using the output from a 150 year long control simulation based on HadGEM3, a coupled climate model at eddy-permitting resolution. Linear inverse modeling (LIM) is used to create a statistical model for the evolution of monthly-mean SSH anomalies. The forecasts based on the LIM model demonstrate skill on interannual timescales $$\mathcal {O}$$(1–2 years). Forecast skill is found to be largest in both the subtropical and subpolar gyres, with decreased skill in the Gulf Stream extension region. The SSH initial conditions involving a tripolar anomaly off Cape Hatteras lead to a maximum growth in SSH about 20 months later. At this time, there is a meridional shift in the 0 m-SSH contour on the order of $$0.5{^{\circ }}$$–$$1.5{^{\circ }}$$ latitude, coupled with a change in SSH along the US East Coast. To complement the LIM-based study, interannual SSH predictability is also quantified using the system’s average predictability time (APT). The APT analysis extracted large-scale SSH patterns which displayed predictability on timescales longer than 2 years. These patterns are responsible for changes in SSH on the order of 10 cm along the US East Coast, driven by variations in Ekman velocity. Our results shed light on the timescales of SSH predictability in the North Atlantic. In addition, the diagnosed optimal initial conditions and predictable patterns could improve interannual forecasts of the Gulf Stream’s characteristics and coastal SSH.
## Keywords
North Atlantic Ocean Sea surface height Internal variability Predictability Optimal initial conditions Statistical forecasting
## 1 Introduction
Forecasts of sea surface height (SSH) on interannual timescales are affected by several sources of uncertainty. Assessing such uncertainty and the associated mechanisms is crucial to provide reliable forecasts and potentially mitigate the effects of coastal flooding in certain regions. Furthermore, this can have implications for devising strategies on how best to design ocean observing systems and initialise climate models (Zanna et al. 2018). Finally, more skilful SSH predictions could help improve predictions of other parts of the climate system; for example, increased skill in predicting the Gulf Stream’s meridional position could impact predictions of air–sea heat fluxes and atmospheric blocking (Scaife et al. 2011).
Increased understanding of the ocean–atmosphere processes which modulate interannual sea level variability could aid in improving SSH forecasts. Cabanes et al. (2006) found that the mechanisms which contribute to observed interannual SSH variability are regionally dependent, with the majority of the interannual variability controlled by both the local steric response to heat fluxes and the large-scale oceanic adjustments to variations in the wind stress. In modelling studies (e.g., Roberts et al. 2016), the SSH interannual variability in the subpolar gyre has been found to be mostly buoyancy driven, through variations in both the thermosteric and halosteric SSH components. In the subtropical gyre, by contrast, variability is primarily driven by the momentum forcing and variations in the thermosteric SSH component (e.g., Roberts et al. 2016). In the Gulf Stream region, intrinsic ocean processes (which are defined as ocean processes generated in the absence of atmospheric forcing variability) are responsible for the majority of the SSH interannual variability (Penduff et al. 2011). Such intrinsic ocean processes include mesoscale eddies.
Several studies have investigated seasonal to interannual SSH predictability (Chowdhury et al. 2007; Wang et al. 2013; Qiu et al. 2014) with some key mechanisms identified in General circulation models (GCMs) (e.g., Roberts et al. 2016). Nonaka et al. (2016) and Roberts et al. (2016) focused on interannual SSH variability in eddy-resolving and eddy-permitting ocean models, respectively. Nonaka et al. (2016) investigated the predictability of mid-latitude ocean currents, using an ensemble of eddy-resolving ocean GCM (OGCM) experiments with horizontal resolution of $$\sim 1/10{^{\circ }}$$. They find a lack of predictability in the jet regions due to the contribution from the mesoscale eddy field. Roberts et al. (2016) examined interannual SSH predictability using the eddy-permitting ($$\sim 1/4{^{\circ }}$$) Hadley Centre Global Environment Model version 3 (HadGEM3) and found predictive skill in the Tropics on timescales of several years with a lack of skill in jet regions. Ensemble forecasts of SSH did not exhibit skill on 2–5 years that could beat persistence forecasts.
Other predictability studies have used low resolution OGCMs ($$1{^{\circ }}$$–$$2{^{\circ }}$$) to investigate dynamic and steric SSH predictability (Schneider and Griffies 1999; Miles et al. 2014; Polkova et al. 2015). Polkova et al. (2015) found predictive skill in interannual steric SSH predictions in the subtropics on timescales of 2–5 years. Such skill was related to adjustments due to baroclinic Rossby waves. Skill was also found in the North Atlantic subpolar gyre on timescales of 2–5 years, which was related to changes in spiciness along isopycnals. Schneider and Griffies (1999) found SSH predictability in the North Atlantic on timescales of up to 17 years, using an ensemble of coupled climate model runs, related to the large-scale ocean circulation.
The work presented here focuses on the predictability of dynamic sea level, i.e. variations which arise from ocean processes. These variations are linked to changes in ocean circulation, large scale heat transports, position of gyre boundaries and thus air–sea interactions.
The aims of this study are to
• Establish the timescales of predictability due to internal variability of SSH in the North Atlantic;
• Identify the regions where forecasts are most sensitive to perturbations in the initial conditions;
• Evaluate the spatio-temporal characteristics relevant to assessing North Atlantic SSH predictability.
• Relate any internally generated predictability to large-scale ocean characteristics, with an emphasis on both mid-latitude jets and the gyre circulations;
We focus our analysis on statistical methods, which are used to evaluate predictability generated via internal variability in a fully coupled climate model. We will make use of an extended model control run, without interannual variations in the model’s boundary conditions. This should enable us to isolate any interannual predictability related to the model’s internal variability. A perfect model approach and a combination of Linear Inverse Modeling (LIM), non-normal mode analysis, and average predictability time (APT) are used to evaluate how initial conditions influence error growth (Penland 1989; Farrell and Ioannou 1996; DelSole and Tippett 2009a).
This paper is organised as follows. Sect. 2 details the model set up and the interannual SSH variability present. Section 3 contains information on the statistical methods used to evaluate predictability and an investigation into the influence of eddy-mean flow interactions on forecast skill. Section 4 examines the predictability related to the initial conditions of the ocean model, through both the optimal initial conditions of SSH and the predictable components diagnosed by evaluation of the APT of the system. The final section contains a discussion of the results.
## 2 Characterising interannual sea surface height variability in the North Atlantic in HadGEM3
We use the output from a 150-year free-running control simulation of a coupled climate model, HadGEM3 GC2.0 (Williams et al. 2015). This control simulation has repeated-year radiative forcings (e.g., aerosols and greenhouse gases) taken from the year 2000 (identical to experiment 2 in the Coupled Model Inter-comparison Project Phase 3, CMIP3). The ocean component of HadGEM3, GO5.0 is based on version 3.4 of NEMO (Nucleus for European Modelling of the Ocean) and is described in detail in Megann et al. (2014). The model is on an ORCA025 horizontal grid, which uses an eddy-permitting $$1/4{^{\circ }}$$ horizontal resolution and 75 vertical levels, with the vertical level spacings increasing from 1 m at the surface to 200 m at depth. The vertical level spacing provides high resolution near the surface for short to mid-range forecasting. It uses both a linear free surface and an energy conserving momentum advection scheme. The vertical mixing of tracers and momentum is parameterised by a turbulent closure scheme (Gaspar et al. 1990; Madec 2008). The horizontal viscosity used is bi-Laplacian and the bottom friction is quadratic (Megann et al. 2014).
Monthly mean SSH anomalies are defined as departures from a time-mean, constructed from the entire model run. These anomalies are then used to create the statistical forecast models in Sects. 3 and 4. In addition, monthly mean fields of surface net heat fluxes and both the zonal and meridional wind stress are used in Sect. 4.4 in the analysis of the mechanisms responsible for the predictable components. All the fields are linearly-detrended and have their seasonal cycles removed. Two independent methods were trialed to remove the seasonal variability. In the first method, the seasonal cycle is removed through Fourier filtering, by fitting cosine and sine waves to the annual and semiannual harmonics and removing variability at these frequencies. The second method deseasons the model output by subtracting the monthly climatology from each month. The results are insensitive to the method used and so the second method is chosen for simplicity.
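A minimal sketch of the second (monthly-climatology) approach, assuming the anomalies are held in a NumPy array with a leading monthly time axis; array names and shapes are illustrative and not taken from the paper:

```python
import numpy as np

def deseason_and_detrend(ssh, months):
    """Remove the monthly climatology and a linear trend from monthly SSH fields.

    ssh    : array of shape (ntime, ny, nx), monthly means
    months : array of shape (ntime,), calendar month (1-12) of each field
    """
    out = ssh.astype(float).copy()
    # Subtract the climatological mean of each calendar month.
    for m in range(1, 13):
        idx = months == m
        out[idx] -= out[idx].mean(axis=0)
    # Remove a linear trend at every grid point.
    t = np.arange(out.shape[0])
    flat = out.reshape(out.shape[0], -1)
    slope, intercept = np.polyfit(t, flat, deg=1)   # each of shape (ny*nx,)
    flat -= np.outer(t, slope) + intercept
    return flat.reshape(out.shape)
```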
### 2.2 Evaluation of HadGEM3 against observational references
Previous model verification experiments by Williams et al. (2015) have shown that there are some small model biases in SST, primarily located just to the south of Greenland. However, HadGEM3 fields are significantly improved relative to its predecessor HadGEM2-ES. HadGEM3 has a more accurate Gulf Stream path, which leads to improved atmospheric blocking statistics for the UK and Europe (Scaife et al. 2011; Williams et al. 2015).
The time-mean SSH and its variability in the HadGEM3 model are compared to observations to ensure the model demonstrates a sufficient level of realism. An AVISO mean dynamic topography estimate for the period 1993–2012 (Rio et al. 2014) is used to evaluate the HadGEM3 time-mean SSH for the full model simulation. A uniform offset is applied to the AVISO data to give a similar domain average to the HadGEM3 simulation. These mean SSH profiles of the observation-based data and the model output are shown in Fig. 1a, b. There is a reasonable agreement between the model and the observations in both the magnitude and spatial pattern of the mean field. However, there are some slight discrepancies between the two: the observations have more prominent recirculation gyres flanking the Gulf Stream near to its detachment point, and the mean SSH is slightly lower in the west of the subpolar gyre in the model.
The observational estimate of sea level variability comes from the European Space Agency (ESA) Climate Change Initiative (CCI) version 2.0 monthly gridded fields of surface height anomaly (Legeais et al. 2018). Monthly data for the period 1993–2012 are aggregated into annual mean values before computing the standard deviation for each grid box. For comparison, we select the last 20 years of the HadGEM3 simulation (representative of the simulation as a whole) to compute the standard deviation of annual mean values. The standard deviations of the interannual SSH anomalies of both the observations and the HadGEM3 output are shown in Fig. 1c, d. The patterns are in good agreement both in magnitude and spatially. In the observations there is a higher amount of interannual variability located at the Gulf Stream’s detachment point (5 cm); this is unsurprising as the $$1/4{^{\circ }}$$ resolution model will be missing some variability due to eddy-mean flow interactions in the most turbulent regions.
### 2.3 Interannual sea surface height variability
Figure 2a shows the time-mean SSH of the control run. The characteristic double-gyre structure is evident, and the strong SSH gradient is indicative of the location of the Gulf Stream and its extension. The power spectra of SSH anomalies at several locations in the domain are shown in Fig. 2b. The power spectral density measured along the Gulf Stream (black and blue lines) is larger at all timescales than that within the gyre regions (red and green lines). The largest spectral power at interannual frequencies is near Gulf Stream’s detachment point (blue line). As a reference, the spectra are compared to the Zang and Wunsch (2001) canonical frequency-wavenumber spectrum (gray dashed line). As a function of frequency ($$\sigma$$), this spectrum is proportional to $$\sigma ^{-1/2}$$ on periods longer than 100 days, whereas, for periods shorter than 100 days it is proportional to $$\sigma ^{-2}$$ (i.e. red noise). Hughes and Williams (2010) also highlight regional deviations from this canonical spectrum. The spectra taken in the vicinity of the Gulf Stream, display approximately red noise profiles up to timescales of a year with whiter noise profiles on longer timescales. This whitening is indicative that the predictability of SSH in the Gulf Stream may be limited on interannual timescales. In contrast, the profiles taken in the subpolar and subtropical gyres (red and green lines), are closer to being red noise like in nature for all time periods. This is indicative that skillful interannual SSH forecasts can potentially be made in these regions.
Figure 3a shows the standard deviation of annual mean SSH anomalies. This again shows that most of the interannual variability is located in the vicinity of the Gulf Stream’s extension and in the subpolar gyre. There are several potential mechanisms for such interannual variability in the Gulf Stream’s extension, including: baroclinic Rossby waves directly modulating the jet extension (Sasaki and Schneider 2011; Qiu et al. 2014); variations in the western boundary currents due to changes in wind forcing (Andres 2016); and modulation by the mesoscale eddy field (Spall 1996; Berloff et al. 2007).
The signal to noise ratio of interannual variability in the North Atlantic is investigated by diagnosing $$\frac{\sigma _{N}}{\sigma _{1}}$$, where $$\sigma _{N}$$ represents the standard deviation of N-year means of SSH. This measure is often referred to as potential predictability (Boer 2004; Hawkins et al. 2011). Figure 3b, c show the potential predictability associated with $$\sigma _3$$ and $$\sigma _5$$. Regions with weak interannual variability (standard deviations $$<0.02$$m) are masked (white regions in Fig. 3b, c). These regions of low variability are located along the eastern edge of the basin. The US east coast south of $$45{^{\circ }} \hbox {N}$$ stands out as the only portion of coastline where any potential interannual predictability is present (Fig. 3b, c). The largest potential predictability is located in the subpolar gyre. The dynamics of this region are likely to be relatively linear, dominated by mixed layer’s response to forcing and mean advection, and not heavily influenced by the effects of turbulent mesoscale eddies (Sérazin et al. 2015). However, such relatively linear dynamics in the model may also be because the $$1/4{^{\circ }}$$ model resolution will not fully resolve the small internal deformation radius in the subtropical gyre and therefore will likely underestimate effects related to the baroclinic instability. Figure 3a shows large values of interannual variability in the Gulf Stream extension, however, Fig. 3b, c demonstrate this pattern of large variability does not translate into a comparable pattern of large potential predictability. Nevertheless, although the potential predictability is lower in the Gulf Stream region it is still non zero, in agreement with the power spectra on interannual timescales in Fig. 2b (black and blue profiles). Therefore even in the eddy-active Gulf Stream region, there appears to be some potentially predictable interannual variability.
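A sketch of the potential-predictability diagnostic $$\frac{\sigma _{N}}{\sigma _{1}}$$ at a single grid point, using non-overlapping N-year block means (one plausible implementation; the paper does not spell out the exact averaging used):

```python
import numpy as np

def potential_predictability(annual_means, N):
    """Std. dev. of non-overlapping N-year means divided by std. dev. of annual means."""
    sigma_1 = annual_means.std()
    nblocks = annual_means.size // N
    sigma_N = annual_means[:nblocks * N].reshape(nblocks, N).mean(axis=1).std()
    return sigma_N / sigma_1

rng = np.random.default_rng(1)
white = rng.standard_normal(150)            # 150 years of white-noise "anomalies"
print(potential_predictability(white, 3))   # close to 1/sqrt(3) ~ 0.58 for white noise
print(potential_predictability(white, 5))   # close to 1/sqrt(5) ~ 0.45
```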
## 3 LIM forecast analysis: influence of eddy field initialisation on interannual forecasts
Although the SSH variability analysis hints at the presence of interannual timescales, an investigation of SSH forecasts is needed to evaluate any interannual SSH predictability present. Traditionally the statistics needed to evaluate predictability in a GCM are generated by creating an ensemble of model simulations. Depending on the model used, and the size of the ensemble, this process can be computationally expensive (Collins 2007). In this paper we use a contrasting approach whereby the methods used to evaluate predictability are based on statistical models created from one long dynamical model run. The statistical models used here have the benefit of being computationally cheap to run and can still provide insights into the dynamics of the system due to their simplicity (e.g., Sonnewald et al. 2018).
Linear Inverse Modeling (LIM; Penland 1989) has previously been used to evaluate predictability in sea surface temperature, in both models and observations (e.g., Penland 1989; Hawkins et al. 2011; Zanna 2012; Huddart et al. 2017; Dias et al. 2018). The method models the evolution of the desired fields as a linear process forced by white noise. In doing so, the linear inverse model gives information about the predictability of the fluctuations in the system. To enable this calculation, the SSH anomalies are decomposed into empirical orthogonal functions (EOFs) and their related principal components (PCs),
\begin{aligned} SSH(x,y,t)=\sum _{i}EOF_i(x,y)PC_i(t). \end{aligned}
(1)
The EOFs are constructed using monthly-mean SSH model output. The EOFs are also weighted by the area of their grid boxes as the NEMO grid is irregular. In the calculation of the EOFs, SSH in the Gulf of Mexico is not used as we wished to focus on the SSH predictability in the main ocean basin.
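A minimal sketch of an area-weighted EOF/PC decomposition via the SVD (illustrative names; in the paper the fields live on the ORCA025 grid with the Gulf of Mexico masked out):

```python
import numpy as np

def eof_decomposition(anom, cell_area, n_modes=25):
    """anom: (ntime, npoints) SSH anomalies; cell_area: (npoints,) grid-cell areas."""
    w = np.sqrt(cell_area / cell_area.sum())         # area weighting
    U, s, Vt = np.linalg.svd(anom * w, full_matrices=False)
    eofs = Vt[:n_modes] / w                          # spatial patterns, EOF_i(x, y)
    pcs = U[:, :n_modes] * s[:n_modes]               # principal components, PC_i(t)
    explained = s[:n_modes] ** 2 / np.sum(s ** 2)    # fraction of variance per mode
    # Reconstruction as in Eq. (1): anom ~= pcs @ eofs (exact if all modes are kept).
    return eofs, pcs, explained
```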
In the following analysis we use 25 EOFs explaining 63% of the variance; the leading three EOFs (responsible for 9.9%, 5.2% and 5.0% of the variance, respectively) are shown in Fig. 4. The leading EOF has a spatial structure reminiscent to that calculated using observations (Häkkinen and Rhines 2004; Häkkinen et al. 2013). Häkkinen et al. (2011) attributed this pattern of SSH variability to variability in the wind stress curl. As the wind stress curl varies, there are associated variations in the strength and sizes of the subpolar and subtropical gyres and a resultant change in SSH (Häkkinen et al. 2013). This ‘gyre mode’ varies between a state with a small subpolar gyre with a large eastward extended subtropical gyre and a state with a large eastward extended subpolar gyre with a small contracted subtropical gyre. A similar variation and associated dependence on the wind stress curl has been identified in this model (shown in the supplementary material). Moreover, there is a lagged response of the first principal component of SSH to a leading principal component of the wind stress, again in agreement with Häkkinen et al. (2011).
The evolution of the PCs of SSH anomalies is approximated by a linear stochastic model (Penland and Sardeshmukh 1995)
\begin{aligned} \frac{d\mathbf P }{dt} = \mathbf AP (t) + \xi , \end{aligned}
(2)
where $$\mathbf {P}$$ is the vector of n-PCs, with dimensions of n by 1, $$\xi$$ is a stochastic forcing term, and A is a linear n by n matrix which controls the temporal evolution of the n-PCs. The linear operator
\begin{aligned} \mathbf A = \frac{1}{\tau _0}\ln [\mathbf C (\tau _0)\mathbf C (0)^{-1}], \end{aligned}
(3)
contains dynamical information about the variability of the system as it is constructed using the, n by n, covariance matrices at lag-$$\tau _0$$ and lag-0, which are calculated from the PCs as
\begin{aligned} \begin{aligned} \mathbf C (\tau _0)&= \langle \mathbf P (t+ \tau _0)\mathbf P ^{T}(t) \rangle , \\ \mathbf C (0)&= \langle \mathbf P (t)\mathbf P ^{T}(t) \rangle . \end{aligned} \end{aligned}
(4)
Here $$\langle \rangle$$ indicates an average over all times.
Forecasts, $$\hat{\mathbf {P}}$$, are then generated using the model such that, with respect to the initial time, t,
\begin{aligned} \hat{\mathbf{P }}(t+\tau )=\mathbf B (\tau )\mathbf P (t), \end{aligned}
(5)
where $$\tau$$ is the forecast lead time and the n by n matrix B is the forecast propagator,
\begin{aligned} \mathbf B (\tau )=\exp (\mathbf A \tau )= \exp \bigg [\frac{\tau }{\tau _0}\ln [\mathbf C (\tau _0)\mathbf C (0)^{-1}]\bigg ], \end{aligned}
(6)
Predictability can then be evaluated by examining the difference between the probability distribution of the predictions and that of the climatology. For LIM to be applicable, the system being examined is required to possess several characteristics (Penland and Sardeshmukh 1995):
• it can be described by Gaussian statistics;
• A is independent of the time lag, $$\tau _0$$, used to calculate it;
• all real parts of the eigenvalues of A must be negative and therefore decay.
Finally, to prevent overfitting of the linear models, the data used in each experiment is separated into a training and a verification data set. In the presented results the training and verification data sets are 140 years and 10 years long respectively. The linear model is constructed using the training data set. This model is then used to make predictions for the verification set, and the skill of these predictions is used to evaluate the system’s predictability.
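A minimal sketch of how Eqs. (2)–(6) translate into code, assuming the principal components are stored as a (ntime, n) array of monthly values; the variable names are illustrative and not taken from the authors' code:

```python
import numpy as np
from scipy.linalg import expm, logm

def fit_lim(pcs, tau0=6):
    """Estimate the LIM operator A (Eq. 3) from lag-0 and lag-tau0 covariances (Eq. 4)."""
    x0 = pcs[:-tau0]                          # P(t)
    x1 = pcs[tau0:]                           # P(t + tau0)
    C0 = x0.T @ x0 / x0.shape[0]              # C(0)
    Ctau = x1.T @ x0 / x0.shape[0]            # C(tau0)
    A = logm(Ctau @ np.linalg.inv(C0)) / tau0
    # Discard any tiny imaginary round-off introduced by the matrix logarithm.
    return np.real_if_close(A)

def lim_forecast(A, p_init, tau):
    """Eqs. (5)-(6): forecast P(t + tau) = B(tau) P(t), with B(tau) = expm(A tau)."""
    return expm(A * tau) @ p_init
```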
Tests to assess how well these conditions are met for SSH anomalies in the North Atlantic are shown in Fig. 5. Panel a shows a comparison of the cumulative density function of the 150 years of SSH anomalies with that of an idealised Gaussian distribution with the mean and variance of the model output, all calculated in the area used to calculate the EOFs (shown in Fig. 4e). The agreement between the two profiles demonstrates that the system is well described by Gaussian statistics. To examine the influence of mesoscale eddies on the skill of the forecasts, two different linear operators are constructed. In each experiment, the same reduced basis, consisting of 25 EOFs and PCs, is used to create each propagator. The first propagator contains all available frequencies. A second temporally smoothed propagator is constructed by applying an 18-month running mean filter to the PCs. Figure 5b shows the Frobenius norm of A as a function of different lag times, $$\tau _0$$, for these two operators. Both operators possess regions where the norm of A only varies by a small amount as a function of $$\tau _0$$. However, for the 18 month filter there is strong variation in the Frobenius norm of A when $$\tau _0$$ is 15 months. This large variation indicates that in this parameter range the model output may not be well represented by a system of the form shown in Eq. 2. When using the operator constructed with monthly means, a $$\tau _0$$ of 6 months is chosen, and when using the smoothed operator, a $$\tau _0$$ of 2 months is used. In both cases, all the real parts of the eigenvalues of A are negative and thus satisfy the final necessary condition.
We can now create forecasts of SSH anomalies using the viable LIMs. Forecasts are initialised every 6 months throughout the 150 years of model output, creating 300 forecasts in total. When creating these forecasts it is crucial to ensure a difference between the training and test datasets. At each forecast initialisation date, the LIM propagator used to generate the forecasts is trained on 140 years of model data. In each case these 140 years consist of the data which are not within a ten-year window centered on the forecast initialisation date. In order to create a benchmark for the forecasts made using the LIM models, lagged correlation forecasts are also made (Lorenz 1963),
\begin{aligned} x(t_{0}+\tau )=\beta (\tau )x(t_{0}), \end{aligned}
(7)
where $$\beta$$ is the auto-correlation of the time series at a point in space, x is time series of the quantity being predicted and $$\tau$$ is the lag time of the forecast. This is a type of ‘damped persistence’ forecast which may provide forecasts better than climatology ($$\beta =0$$) and persistence ($$\beta =1$$) forecasts. The model output used to construct these damped persistence forecasts is reconstructed from the same EOFs and principal components used to construct the LIM models, to allow for a fair comparison.
To evaluate the skill of the statistical models relative to climatological forecasts, we use a root mean square error metric ($$RMSE_{Relative}$$) (Hawkins et al. 2011):
\begin{aligned} RMSE_{Relative}=\frac{RMSE_{pred}}{RMSE_{clim}} \end{aligned}
(8)
where $$RMSE_{clim}$$ is the root mean square error of a climatological forecast over the forecast period and $$RMSE_{pred}$$ is the root mean square error of the predicted field. The climatological forecast assumes the SSH anomalies keep their initial values, i.e. $$\hat{\mathbf{P }}(t+\tau )=\mathbf P (t)$$ and is constructed using the same EOFs and PCs used to construct the LIM models (i.e the time filtered forecasts are verified against the time-filtered truth). The RMSEs are calculated relative to the control run solution, which has been reconstructed using the same EOFs and PCs used to construct the LIM models. A value greater than unity indicates that the model’s forecasts are inferior to those generated using the climatology, whereas, a value less than unity demonstrates forecast skill superior to climatology.
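A sketch of the damped-persistence benchmark of Eq. (7) and the skill metric of Eq. (8); the reference forecast is passed in explicitly, so either a climatological or a persistence benchmark can be supplied (illustrative only):

```python
import numpy as np

def damped_persistence(x0, beta_tau):
    """Eq. (7): x(t0 + tau) = beta(tau) * x(t0), with beta the lagged autocorrelation."""
    return beta_tau * x0

def relative_rmse(pred, truth, reference):
    """Eq. (8): RMSE of the prediction divided by the RMSE of the reference forecast."""
    rmse_pred = np.sqrt(np.mean((pred - truth) ** 2))
    rmse_ref = np.sqrt(np.mean((reference - truth) ** 2))
    return rmse_pred / rmse_ref  # values below 1 indicate skill relative to the reference
```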
The forecast error maps for the LIM model trained on monthly data are shown in Fig. 6d–f. Errors emerge rapidly in the subpolar and subtropical gyres (seen in panels a and b). Only in the Gulf Stream region and southern part of the domain are any areas of skillful forecasts seen (confirmed in panel c). The damped persistence forecasts created with the monthly mean model output (panels j, k, and l) exhibit small errors in the subpolar gyre and parts of the subtropical gyre (panels a and b), coinciding with regions of large potential predictabilities (Fig. 3). The forecasts created with the LIM model trained on monthly mean SSH anomalies are less skillful than those produced with the damped persistence model. The inclusion of the high-frequency components of the SSH in the construction of the LIM model means that predictability is not exhibited on timescales longer than a year.
The error maps for the forecasts subject to 18-month filtering show smaller errors in all regions (Fig. 6a–c, g–i, and m–o). These models again display the smallest relative RMSEs in the subpolar and subtropical gyres, with more substantial errors in the Gulf Stream region. The LIM model outperforms the damped persistence forecasts in the majority of areas and timescales [the exception being in the subtropical gyre (panel b)]. The subpolar gyre emerges as the region with the largest amount of predictability on timescales longer than a year (panels a and i). Small errors are also exhibited in the tropics, extending eastwards towards the Iberian Peninsula. The US east coast stands out as the only section of coastline which borders a region with forecast errors of less than 0.8 on interannual timescales. These results are in agreement with those found by Nonaka et al. (2016), where a lack of any predictability on timescales longer than a few months is also found in the Kuroshio.
## 4 Predictable patterns: optimal initial conditions and average predictability time
The spatio-temporal structure of the predictability can also be analysed by explicitly identifying any patterns which are predictable on interannual timescales. Two methods are now used: (1) an examination of the growth of optimal initial conditions leading to a maximum increase of variance and (2) a decomposition of the system into predictable components, ranked by their relative contributions to the total average predictability time present.
### 4.1 Non-normal mode analysis and optimal initial conditions
The characteristics of the trained linear model can be used to infer information about a system’s sensitivity to initial conditions. In a series of papers, Farrell and co-authors developed a methodology, generalised linear stability theory, to investigate the transient behaviour resulting from initial perturbations to its mean state (Farrell and Ioannou 1996). This methodology has been used to examine a range of geophysical problems including: Couette flow (Farrell 1982), atmospheric forecast error growth (Farrell 1990), quasi-geostrophic turbulence (Farrell and Ioannou 1995), the El Niño Southern Oscillation (Penland and Sardeshmukh 1995), Gulf Stream dynamics (Farrell and Moore 1992) and the Atlantic meridional overturning circulation (Zanna and Tziperman 2005, 2008; Hawkins and Sutton 2009).
This analysis investigates the transient growth in linearly-stable fluid dynamical systems. It may appear counter-intuitive that there can exist disturbances which lead to growth in a stable system. However, when the operator A is non-normal, i.e. $$\mathbf{AA}^{\mathbf{T }}\ne \mathbf{A}^{\mathbf{T }}\mathbf{A }$$, it is possible for the eigenmodes of the system to interact and give a large amplification of variance at a finite-time (Farrell and Ioannou 1996). The solutions to
\begin{aligned} \frac{d\mathbf P }{dt} = \mathbf AP (t), \end{aligned}
(9)
can be written in terms of the eigenvectors, $$\mathbf {e}_i$$ as
\begin{aligned} \mathbf {P}(t)=\sum _{i}\mathbf {e}_ia_i\exp {\lambda _it}, \end{aligned}
(10)
where $$\lambda _i$$ are the eigenvalues of $$\mathbf {A}$$ and $$a_i$$ is a complex constant. The SSH anomaly growth at time $$\tau$$ by non-normal eigenmode interference is given by
\begin{aligned} \begin{aligned} \mu (\tau )= \frac{\mathbf{P}(\tau )^{\mathbf {T}} \mathbf {P}(\tau )}{\mathbf {P}(0)^{\mathbf {T}} \mathbf {P}(0)}. \end{aligned} \end{aligned}
(11)
The longest timescale on which this growth occurs can be thought of as an optimistic upper bound on the predictability of linear events without forcing. The corresponding spatial patterns, which lead to a maximum growth at a time $$\tau$$, are called the optimal initial conditions and are given by calculating the leading singular vector of $$\mathbf B (\tau )$$. In this section, the LIM constructed using 18 month temporally smoothed principal components is used as it exhibits skillful forecasts on interannual timescales.
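A sketch of how the amplification curve and the optimal initial condition can be computed from the estimated propagator is given below, assuming $$\mathbf{B}(\tau) = \exp(\mathbf{A}\tau)$$ and working in the reduced EOF space; `A` is the propagator matrix from the LIM fit.

```python
# Sketch: maximum amplification mu(tau) (Eq. 11) and the optimal initial
# condition, taken as the leading right singular vector of B(tau) = expm(A*tau).
import numpy as np
from scipy.linalg import expm, svd

def max_amplification(A, taus):
    mus, optimals = [], []
    for tau in taus:
        B = expm(A * tau)
        _, s, vt = svd(B)
        mus.append(s[0] ** 2)     # max of mu(tau) over unit-norm initial states
        optimals.append(vt[0])    # optimal initial condition (in PC space)
    return np.array(mus), optimals

# taus = np.arange(1, 101)        # months, cf. Fig. 7a
# mu, opt = max_amplification(A, taus)
```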
The curve depicting the growth of SSH anomalies, $$\mu (\tau )$$, is shown in Fig. 7a. The perturbations can grow through non-normal interactions on time scales of up to 100 months, with the maximum growth occurring at 20 months. The optimal initial condition pattern in SSH, which leads to the largest growth in SSH anomalies after 20 months, is shown in Fig. 7b. This pattern has a very weak gyre scale tripolar pattern, reminiscent of EOF 1 (shown in Fig. 4e). The pattern has two main notable smaller scale features, a tripolar structure off Cape Hatteras (situated at $$32.5{^{\circ }}$$$$42.5{^{\circ }} \hbox {N}$$, $$67{^{\circ }}$$$$74.55{^{\circ }} \hbox {W}$$, shown by the green ellipse in panel e) and a single sign SSH anomaly along the US east coast (black ellipse, panel e). The propagated optimal initial condition is shown in Fig. 7, panels c and d, at 10 and 20 months, respectively. After 10 months, the SSH anomaly along the boundary no longer has a single sign. There is an increase in SSH along the path of the Gulf Stream and in the subtropical recirculation gyre, and a decrease in SSH in the subpolar gyre. After 20 months, an SSH anomaly grows along the Gulf Stream path, and its magnitude is seen to double. The magnitude of SSH in the subpolar and subtropical gyres is also seen to increase significantly. One interpretation of these optimal initial conditions is that it is especially important to constrain the position of the Gulf Stream separation in the initial conditions as initial errors in this region lead to gyre-scale errors within 10–20 months. However, it is also possible that it is the weaker gyre-scale pattern present in the optimal initial conditions which leads to this growth, as SSH anomalies can be integrated by the gyre circulation on interannual timescales.
Figure 8 shows the positive optimal initial condition (of the same magnitude as that shown in Fig. 7b) and its evolution after 20 months when it is added to the mean SSH field. It also shows the negative version of the optimal initial condition added to the mean field, which is an equally valid solution since the evolution is linear. The initial and propagated version of the positive optimal initial condition, Fig. 8, panels a and b, demonstrates an increase in strength of the subpolar gyre, as well as an increase in the SSH gradient across the Gulf Stream. The change in the SSH gradient is linked to variations in the geostrophic transport along the Gulf Stream path, shown in Fig. 9. The resultant geostrophic velocity anomalies act in different directions in the two gyres and are particularly evident in the subtropical gyre. The SSH 0m contour is also seen to be shifted to a higher latitude. However, this is a marginal effect as shown by the contours in Fig. 8’s panel b (less than a degree in latitude, for an initial perturbation with double the magnitude of that shown in Fig. 7a). The evolution of the negative optimal initial conditions, shown in Fig. 8c, d, demonstrates an increase in SSH along the US east coast north of Cape Hatteras, as well as a southward shifted Gulf Stream detachment point. The SSH gradient across the Gulf Stream is also lower, indicating a decrease in Gulf Stream transport. Panel d shows that the SSH 0m contour’s position can move significantly southward (approximately $$5{^{\circ }}$$ in latitude, for an initial perturbation with double the magnitude of that shown in Fig. 7a) and that the subtropical gyre contracts to the west of the basin. The initial conditions associated with timescales ($$\tau$$) ranging from 10 to 30 months are also calculated and compared to the optimal calculated at the maximum amplification time. The spatial correlations between these initial patterns are found to be at least 0.8, and the patterns behave in a qualitatively similar manner when propagated in time. The optimal initial conditions, calculated from similar models with differing numbers of EOFs, exhibited small-scale ($$1/2{^{\circ }}$$) spatial differences in the Gulf Stream’s extension; however, both the signal along the US east coast and the tripolar pattern appear robust. Moreover, the propagated optimals all resemble that shown in Fig. 7d.
### 4.2 Optimal initial conditions occurring in the model output
It is important to determine how often the optimal initial conditions and their evolved patterns are realised in the model output. Figure 10a shows the projections of the initial states on the model output (at each time t the projection is the product of $$\mathbf P (t)$$ and the optimal initial condition, i.e. the leading singular vector of B), as well as the projections of the evolved initial conditions 20 months later. The growth in these projections is seen to be close to that predicted by the maximum amplification curve (Fig. 7a). The occurrences of the tripolar SSH pattern, seen in the optimal initial condition, are detected by using an algorithm which calculates the 2D correlation coefficients between the monthly mean SSH anomalies and the optimal initial condition in the relevant area (the green ellipse in Fig. 7e). Spatial correlations which are greater than 0.8 are retained. Out of the 1800 monthly-mean SSH anomalies comprising the model output, 404 are found to display a tripolar anomaly structure off Cape Hatteras. Of those 404 tripolar anomaly patterns, 310 (77%) lead, after 15–20 months, to SSH anomaly growth along the US east coast (as in Fig. 7f, green ellipse). This growth is detected by evaluating the change in sign of the SSH anomalies (in the green ellipse in Fig. 7f). About 140 (out of 310) also display a change in sign of SSH along the coast (as in Fig. 7f, black ellipse). The occurrence of these optimal evolutions in the model output indicates that changes in the SSH in the Gulf Stream near its detachment point are potentially important in predicting SSH variations along the US east coast.
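The detection step described above reduces to a masked 2-D spatial correlation; a minimal sketch (with `ssh_maps`, `optimal_map` and the boolean mask `region` as hypothetical placeholders for the monthly anomaly maps, the optimal pattern and the green-ellipse area) is:

```python
# Sketch of the detection algorithm: correlate each monthly SSH anomaly map
# with the optimal pattern inside the region of interest and keep r > 0.8.
import numpy as np

def detect_pattern(ssh_maps, optimal_map, region, threshold=0.8):
    """ssh_maps: (n_time, ny, nx); optimal_map: (ny, nx); region: boolean mask."""
    target = optimal_map[region]
    target = (target - target.mean()) / target.std()
    hits = []
    for i, field in enumerate(ssh_maps):
        sample = field[region]
        sample = (sample - sample.mean()) / sample.std()
        if np.mean(sample * target) > threshold:   # 2-D correlation coefficient
            hits.append(i)
    return hits
```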
### 4.3 Average predictability time
We complement the analysis of the optimal patterns, which depends on the target timescale, by examining predictable patterns that persist over all timescales, and are therefore the most predictable over a range of target times (DelSole and Tippett 2007). This is done by calculating the average predictability time (APT) (DelSole and Tippett 2009a). This index of predictability is based on the Mahalanobis signal (DelSole and Tippett 2007),
\begin{aligned} S(\tau ) = \frac{1}{k} tr[({\varvec{\varSigma }}_\infty - {\varvec{\varSigma }}_\tau ) {\varvec{\varSigma }}_\infty ^{-1}], \end{aligned}
(12)
where k is a constant related to the number of principal components used in the analysis, tr is the trace of the matrix, $${\varvec{\varSigma }}_\tau$$ is the covariance matrix of the forecast error at lead time $$\tau$$, and $${\varvec{\varSigma }}_\infty$$ is the covariance matrix of the forecast distribution at long lead times. Here, $$S(\tau )$$ has a value of 1 when the system is completely predictable, and a value of 0 when the forecast covariance matrix is the same as the climatological covariance matrix, meaning the system is unpredictable. This method has been used before to examine the predictability of several geophysical fields, including the upper ocean temperature and the AMOC (Branstator et al. 2012; Branstator and Teng 2014).
The APT can be defined by integrating the Mahalanobis signal over all lead times (DelSole and Tippett 2009a), leading to
\begin{aligned} APT=2 \sum _{\tau =1}^{\infty } S(\tau ). \end{aligned}
(13)
The factor of two makes APT agree with the e-folding time in the univariate case. In one dimension, APT resembles a root mean square error and is given by DelSole and Tippett (2009b)
\begin{aligned} APT=2\sum _{\tau =1}^{\infty } \frac{\sigma _{\infty }^{2}-\sigma _{\tau }^{2}}{\sigma _{\infty }^{2}} =2\sum _{\tau =1}^{\infty } \bigg ( 1-\frac{\sigma _{\tau }^{2}}{\sigma _{\infty }^{2}} \bigg ). \end{aligned}
(14)
Since APT is the integral of predictability over all times, it is independent of the chosen lead time. This measure can also be used to define predictable components by finding the projection vectors q that maximize APT. In which case, the component $${\mathbf{q }^{\mathbf{T }}\mathbf{P }}$$, with $$\mathbf {P}$$ being the principal component state vector, has forecast and climatological variances given by $$\sigma _{\tau }^{2}=\mathbf {q}^{\mathbf {T}} {\varvec{\varSigma }}_{\tau }\mathbf {q}$$ and $$\sigma _{\infty }^{2}=\mathbf {q}^{\mathbf {T}} {\varvec{\varSigma }}_{\infty }\mathbf {q}$$, respectively.
In this study, the APT of the whole system and of the leading predictable components are calculated using the method contained in DelSole and Tippett (2009b). Firstly, to prevent overfitting, the data is separated into training and verification data sets, in the same manner as described previously. Forecasts are then generated by forming linear regression models from the training data (DelSole and Tippett 2009b), i.e. the projections $$\hat{\mathbf {P_L}}(t+\tau )$$, are given by
\begin{aligned} \hat{\mathbf {P_L}}(t+\tau )=\mathbf {C}(\tau )\mathbf {C}(0)^{-1}\mathbf {P}(t). \end{aligned}
(15)
Using such models and in the case of a zero-mean stationary process, meaning $$\mathbf {C}(0)={\varvec{\varSigma _{\infty }}}$$, the forecast error covariance matrix is given by
\begin{aligned} {\varvec{\varSigma }}_\tau =\mathbf {C}(0)- \mathbf {C(\tau )C}(0)^{-1}\mathbf {C(\tau )}^{T}. \end{aligned}
(16)
These values for $${\varvec{\varSigma }}_\infty$$ and $${\varvec{\varSigma }}_\tau$$ can be substituted into Eq. 13 to calculate the APT of the entire system. In order to maximize APT in Eq. 14, the problem reduces to solving the generalized eigenvalue problem (See DelSole and Tippett (2009b) for a full derivation),
\begin{aligned} \mathbf {Gq}=\lambda \mathbf {C}(0)\mathbf {q} \end{aligned}
(17)
where
\begin{aligned} \mathbf {G}=\sum _{\tau =1}^{\infty } \mathbf {C}(\tau )\mathbf {C}(0)^{-1}\mathbf {C}(\tau )^{T}. \end{aligned}
(18)
The projection vectors $$\mathbf {q}$$ are uncorrelated with each other because G and $${\varvec{\varSigma _{\infty }}}$$ are symmetric. The spatial patterns, $$\mathbf {p}$$, associated with the projection vectors $$\mathbf {q}$$ are found by using
\begin{aligned} \mathbf{p}={\langle \mathbf{PP }^{\mathbf{T }}\mathbf{q }\rangle } =\varvec{\varSigma }_\infty \mathbf{q}; \end{aligned}
(19)
these spatial patterns can be projected back onto the EOFs and are referred to as the predictable components. The predictable components of the system are calculated using only the training data set. To prevent overfitting and to calculate the APT of each predictable component, the projection vector $$\mathbf {q}$$, calculated from the training data, is applied to the verification data set. Thus, the squared multiple correlation between the component time series and the verification data is
\begin{aligned} \mathbf {R}_{\tau }^2=\frac{\mathbf {q}^{T}\mathbf {C}(\tau )\mathbf {C}(0)^{-1}\mathbf{C}(\tau )^{T}\mathbf {q}}{\mathbf {q}^{T}\mathbf {C}(0)\mathbf {q}}, \end{aligned}
(20)
where $$\mathbf{q}$$ is calculated from the training data set and the correlations are calculated from the verification set. Therefore, $$\mathbf {R}_{\tau }^2$$ can be interpreted as the variance of the predictable component time series, which is explained by a linear regression prediction at time lag $$\tau$$. The predictability time of each component, $$APT_p$$, is then calculated as,
\begin{aligned} APT_p=2\sum _{\tau =1}^{\infty } \mathbf {R}_{\tau }^2. \end{aligned}
(21)
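In practice, the APT decomposition therefore reduces to building $$\mathbf{G}$$ from lagged covariances and solving the generalized eigenproblem of Eq. (17); a sketch is given below (computed here on a single data set, whereas the text evaluates $$\mathbf {R}_{\tau }^2$$ on independent verification data).

```python
# Sketch of the APT decomposition (Eqs. 12-21): build G from lagged
# covariances, solve G q = lambda C(0) q, and map q to the pattern p = C(0) q.
import numpy as np
from scipy.linalg import eigh

def lagged_cov(pcs, tau):
    x0, xt = pcs[:len(pcs) - tau], pcs[tau:]
    return xt.T @ x0 / len(x0)

def apt_components(pcs, max_lag):
    c0 = lagged_cov(pcs, 0)
    c0_inv = np.linalg.inv(c0)
    G = sum(lagged_cov(pcs, t) @ c0_inv @ lagged_cov(pcs, t).T
            for t in range(1, max_lag + 1))
    lam, q = eigh(G, c0)                 # generalized symmetric eigenproblem, Eq. (17)
    order = np.argsort(lam)[::-1]        # largest APT first
    q = q[:, order]
    patterns = c0 @ q                    # Eq. (19), with C(0) as Sigma_infinity
    apt = 2.0 * lam[order]               # APT of each component on this data set
    return apt, q, patterns
```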
Figure 11a shows that the predictability of the SSH of the whole system, measured by the Mahalanobis signal (solid blue line), diminishes rapidly, reaching a value of approximately 0.25 after 10 months. This is in approximate agreement with the timescale found in the LIM study. However, Fig. 11a (blue shaded region) also shows that several of the individual components of the system have Mahalanobis signals which decay on longer timescales. The APT of the leading 25 predictable components shown in panel b, confirms that several components demonstrate predictability on timescales longer than 2 years. The three leading predictable components (those which have the largest values of APT) have average predictability times of 26–28 months. The corresponding spatial patterns of the leading three components are shown in panels c, d, and e. The pattern which is associated with the largest value of APT (panel c) has large SSH magnitudes in the jet extension region. The second pattern (panel d) is localised mainly to the US east coast and Gulf Stream extension region, whereas, the third pattern (panel e) is similar to the evolved optimal initial conditions (Fig. 7), and EOF1. The time series related to each of these spatial patterns, shown in panels f, g and h, all display interannual variability and have autocorrelation times of 30–60 months.
The similarities between the third leading predictable component, the evolved optimal initial conditions, and EOF1 indicate some robustness of the constructed predictability patterns. The time series associated with the third predictable component correlates strongly with the leading principal component at zero lag. The leading predictable patterns (1 and 2) are not merely EOF1, highlighting that the mode capturing most of the variance is not necessarily the most predictable.
### 4.4 The influence of atmospheric forcings on the predictable components
The steric component dominates interannual variability in SSH in the North Atlantic. Roberts et al. (2016) confirmed that this is also the case in an ocean only component of HadGEM3 (forced NEMO simulation) and that it is the thermosteric and wind-driven components which contribute most to the interannual variability in the subtropical gyre. In the subpolar gyre, the variability is caused by both the thermosteric and halosteric components and is dominated by the response of the ocean to variations in the buoyancy forcings. Furthermore, in the Gulf Stream region, the variations due to intrinsic ocean processes are also important (Penduff et al. 2011; Sérazin et al. 2015).
Attempts are now made to establish the dynamical origin of the predictable patterns by examining their relationships with fields relating to the wind and buoyancy driven circulations, namely the Ekman components of SSH and the net heat fluxes. The interannual variability detected is likely related to the oceanic adjustment to variations in these forcings. The net heat fluxes contribute to the thermosteric buoyancy forced component of SSH, and the wind stresses contribute to the steric advective components.
The SSH, meridional and zonal wind stress fields are used to decompose the ocean currents into the associated geostrophic $$\mathbf {u_g}=(u_g,v_g)$$ and Ekman components $$\mathbf {u_e}=(u_e,v_e)$$,
\begin{aligned} \mathbf {u}=\mathbf {u_{g}}+\mathbf {u_{e}}. \end{aligned}
(22)
The Ekman components are calculated from,
\begin{aligned} v_{e} =-\frac{\tau ^{x}_{s}}{f\rho _{0}d_{Ek}} \quad \text {and}\quad u_{e} =\frac{\tau ^{y}_{s}}{f\rho _{0}d_{Ek}}, \end{aligned}
(23)
where f is the Coriolis parameter, $$\tau ^{x}_{s}$$ and $$\tau ^{y}_{s}$$ are the zonal and meridional components of the wind stress, $${\varvec{\tau _s}}$$, at the ocean’s surface. The density, $$\rho _{0}$$, and the Ekman depth, $$d_{Ek}$$, are taken to be constants of 1025 kg/m3 (a typical value at the surface of the North Atlantic (Wang et al. 2010)) and 100m (a typical value in the Subtropical gyre in the winter (Stommel 1979)), as most of the variability in the Ekman velocity component is due to variations in the wind stress. The Ekman pumping velocity is also calculated as
\begin{aligned} w_{e} = \frac{1}{f\rho _{0}}(\nabla \times {\varvec{\tau _s}}), \end{aligned}
(24)
and the geostrophic currents are calculated as
\begin{aligned} v_{g} =\frac{g}{af cos \phi }\frac{\partial \eta }{\partial \lambda } \quad \text {and}\quad u_{g} =-\frac{g}{af}\frac{\partial \eta }{\partial \phi }, \end{aligned}
(25)
where $$\eta$$ is the monthly mean SSH, a is the radius of Earth, $$\phi$$ is latitude, $$\lambda$$ is longitude and g is gravity.
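These diagnostic relations translate directly into array operations; the sketch below assumes 1-D latitude and longitude vectors in degrees, 2-D fields on that grid, and the constant density and Ekman depth quoted above.

```python
# Sketch of the Ekman (Eq. 23) and geostrophic (Eq. 25) current diagnostics.
import numpy as np

OMEGA, G_GRAV, A_EARTH = 7.292e-5, 9.81, 6.371e6   # s^-1, m s^-2, m
RHO0, D_EK = 1025.0, 100.0                         # kg m^-3, m (values used in the text)

def ekman_currents(taux, tauy, lat):
    """taux, tauy: wind stress arrays (n_lat, n_lon); lat: 1-D latitudes in degrees."""
    f = (2.0 * OMEGA * np.sin(np.deg2rad(lat)))[:, None]
    return tauy / (f * RHO0 * D_EK), -taux / (f * RHO0 * D_EK)   # (u_e, v_e)

def geostrophic_currents(eta, lat, lon):
    """eta: SSH array (n_lat, n_lon); lat, lon: 1-D coordinates in degrees."""
    phi, lam = np.deg2rad(lat), np.deg2rad(lon)
    f = (2.0 * OMEGA * np.sin(phi))[:, None]
    deta_dphi = np.gradient(eta, phi, axis=0)
    deta_dlam = np.gradient(eta, lam, axis=1)
    u_g = -G_GRAV / (A_EARTH * f) * deta_dphi
    v_g = G_GRAV / (A_EARTH * f * np.cos(phi)[:, None]) * deta_dlam
    return u_g, v_g
```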
These fields and the net heat fluxes are then regressed against the normalized time series of the three leading predictable components (Fig. 11). The fields are smoothed with a 6-month running mean before regression, to focus on the interannual variability. The regression coefficients of the geostrophic currents and the first predictable component are shown in Fig. 12. The regression coefficients with SSH are also shown as contours. These show a westward propagation of SSH in the subtropical gyre. The meridional geostrophic velocities have large regression coefficients with the leading predictable component, in the subpolar gyre along the Canadian east coast. At times where the current is leading the predictable component time series, there is also a positive signal at the Gulf Stream’s detachment point. The lack of a clear lead–lag relationship here makes causality hard to distinguish. However, from these strong correlations, it is apparent that there is significant interannual predictability present in the western boundary current in the subpolar gyre. There is a lack of any apparent changes in the large-scale patterns of the zonal geostrophic current regression coefficients. These coefficients are large in the subtropical gyre and in the Gulf Stream region. The regression coefficients relating to the net heat fluxes have a sizeable dipolar pattern in the Gulf Stream region at all times. However, in the east of the Subpolar gyre, a strong signal appears at lead times of 15 months.
The regression coefficients of the leading predictable component with the Ekman currents are shown in Fig. 13. There is an apparent time-lagged relationship present, with variations in the Ekman currents leading the predictable component strongly on timescales of up to 15 months. The Ekman currents are associated with the large-scale Sverdrup transport, within the wind-driven gyres. These regression coefficients imply that wind-driven variations in the gyre circulations lead to a predictable change in SSH on interannual timescales. The associated changes in gyre scale variations of the Ekman currents translate to variations in SSH in the Gulf Stream extension and subpolar gyre regions. This result is also indicative that interannual forecasts of SSH can be improved by better representing the zonal and meridional wind stress fields, on longer than monthly timescales.
It is difficult to discern anything about the variability of the time-lagged geostrophic regression coefficients and predictable component 2 (the figures relating to the regression analysis for the second and third predictable components are contained in the supplementary information). However, there is a signal in the regression coefficients of the net heat fluxes which leads the predictable component by 8–15 months. This signal is located in the Gulf Stream extension region. The regression coefficients relating to the zonal and meridional Ekman components also strongly lead the predictable component and are related to changes in the wind stress in the east of the North Atlantic and the subpolar gyre.
The third predictable component’s regression with the Ekman components demonstrates a clear lead–lag relationship. The Ekman components are seen to lead variations in the predictable component. The patterns of the regression coefficients are large-scale and sizeable in the west of the basin. The meridional components of the Ekman currents can be interpreted as causing a convergence or divergence of SSH in the Gulf Stream region as the gyres react to changes in the wind stress. There are also variations in the net heat fluxes in the subpolar gyre, which lead a change in the predictable component.
Therefore it is concluded that predictable component one is largely a response to variations in the wind at the latitudes of the Gulf Stream. Predictable component two is likely due to variations in both the net heat fluxes and the wind stress in the subpolar gyre and in the Gulf Stream extension region. Finally, predictable component three is likely due to the oceanic adjustment resulting from a combination of both variations in the wind stress in the subpolar gyre and east of the ocean basin, and the response to variations in the net heat fluxes in the subpolar gyre. All three patterns show that changes in the atmospheric forcings lead large-scale predictable patterns of SSH. Even though the variations in the wind stress and net heat fluxes are unpredictable at times longer than a few months, the ocean’s adjustment to them appears to be predictable on timescales of approximately 1–2 years. However, further experiments and analysis are needed to determine how the dynamical processes present generate the diagnosed predictable components.
## 5 Summary and discussion
The predictability of SSH in the North Atlantic in a control run of a fully coupled model (HadGEM3) was evaluated using methods based on linear inverse modeling and average predictability time. The key findings from this study include:
• Predictability of SSH in the subpolar gyre and along the west coast of the Atlantic basin on timescales of up to 20 months (using LIM).
• Short predictability times in the Gulf Stream extension region (5–10 months).
• Optimal initial conditions resulting in regional SSH changes in the subpolar and subtropical gyres and a change in SSH gradient along the Gulf Stream’s extension over a timescale of approximately 20 months. The optimals consist of a weak large-scale SSH tripole, with a stronger signal at the Gulf Stream’s detachment point.
• Large-scale predictable patterns, characterized by SSH variations of order 5–10 cm along the US east coast and extending to the gyre scale, which are predictable on timescales of 26–28 months. These patterns are calculated using a complementary method to LIM, namely APT, which is independent of target time.
• These predictable components correlate significantly with persistent, large-scale, evolving features in SSH, which appear to be induced by wind and heat flux forcing in the preceding 8–15 months.
To our knowledge, this is the first analysis of SSH using such a comprehensive set of linear predictability methods. These linear methods provide a computationally inexpensive alternative to ensemble modelling techniques (Hawkins and Sutton 2009). There is an expected trade-off between a linear approximation of the dynamical system and computational savings. However, the use of temporal filtering or averaging appears to improve interannual predictions, most likely because the direct influence on the variability from the strongly nonlinear ocean mesoscale is removed. For example, optimal initial conditions of SSH identified by the non-normal mode analysis, when pattern matched in the full, nonlinear forward model, evolve at 15–20 months as predicted by the linear method for almost 80 % of events.
The interannual ocean variability in mid-latitude jet extensions is dominated by the intrinsic component (Sérazin et al. 2015). Our study shows that the variability in the Gulf Stream extension is not generally predictable on interannual timescales, as the predictability of SSH in the turbulent jet extension regions is limited to less than roughly 5–10 months (in agreement with Nonaka et al. (2016) and Roberts et al. (2016)). However, in the subpolar gyre and in areas of the subtropical gyre, significant predictive skill was found on timescales over a year, as predictability in SSH and ocean dynamics might be enhanced via atmospheric forcing integrated over large-scale regions (Cabanes et al. 2006). In addition, SSH predictability along the US east coast might be influenced by the position and strength of the Gulf Stream. The timescales and patterns of predictability of SSH in the North Atlantic derived from statistical forecasts trained on model output are comparable to those in Roberts et al. (2016), diagnosed using multiple runs of a fully dynamical model, indicating that the results are robust to the method chosen.
The maximum amplification of the optimal initial conditions occurs 20 months after initialisation, which can be used as a predictability timescale and results in several different effects. Firstly, the amplification can lead to a doubling in the magnitude of the initial SSH anomalies in the Gulf Stream region (e.g., an optimal initial condition perturbation with such a tripolar structure and amplitude 3 cm can propagate to give anomalies of 6 cm along the Gulf Stream path). Secondly, the amplification acts to increase (or decrease, depending on the sign) the SSH gradient across the Gulf Stream, leading to a geostrophic velocity anomaly of the order of 10 cm/s (for an optimal perturbation, P, with pattern and magnitude as in Fig. 7b). Moreover, these changes can lead to a meridional shift of an SSH contour of 0 m by several degrees in latitude (a southward shift of approximately $$5{^{\circ }}$$ for a perturbation $$-2 P$$; see also Fig. 8b, d). The optimal perturbation, P, results in a 5 cm increase in SSH along the US east coast at latitudes of $$30{^{\circ }}$$$$40{^{\circ }} \hbox {N}$$ (5 cm decrease for a perturbation $$-P$$). This change is large compared to the recent (1993–2009) rate of global mean SSH rise from satellite altimetry, which is $$3.2\pm 0.4\, {\text {mm}}\, {\text {year}}^{-1}$$ (Church and White 2011).
In order to investigate the dynamical evolution of the optimal initial condition the climate model could be restarted with the optimal initial conditions, and with parts of the optimal initial conditions masked (however, restarting the climate model with optimal initial conditions would require multivariate 3D restarts). Such experiments would help examine the role of the oceanic dynamical processes which lead to the growth of the initial conditions, and the relative importance of atmospheric noise and model error.
The optimal initial conditions also have implications for observations. The initial conditions found indicate that, to better constrain interannual predictions of SSH in the North Atlantic, it would be beneficial to incorporate a larger number of ocean observations (SSH, temperature and velocity fields) in the region near the Gulf Stream’s detachment point. This area has already been the subject of many observational studies (e.g., Line W, Toole et al. (2011)) and is well observed by altimetry (see Lillibridge and Mariano (2013) and references within). Furthermore, initialisation of GCM ensemble simulations with the optimal initial conditions could provide a better estimate of initial condition uncertainty in SSH prediction (Zanna et al. 2018).
This study did not entirely decouple the effects of applying interannual external forcings from the intrinsic variability due to the mesoscale eddy fields. It also made use of a single model, and therefore the optimal initial conditions presented may be model-specific. It would therefore be beneficial to examine the SSH predictability with a more extensive ensemble of model simulations, including those which isolate the effects of intrinsic processes (e.g., Sérazin et al. (2015) and Zanna et al. (2018)). Moreover, a comparison with altimetry or higher resolution model output may further elucidate the effects of the eddy field on interannual variability. It would be interesting to assess how a change in spatial resolution affects the diagnosed optimal initial conditions. Such a resolution change might impact the rectification and behavior of the jets, and therefore the diagnosed mode of Gulf Stream variability. Finally, a series of idealised simulations, with selective timescales of the wind and buoyancy forcings, may aid in explaining the dynamical origin of the predictable components. Such ensembles already exist for a range of applications (Gregory et al. 2016; Roberts et al. 2016; Meyssignac et al. 2017). Alternatively, a probabilistic approach as described by Bessières et al. (2017) could be used to disentangle the forced and intrinsic variability components, thus better explaining the dynamical origin of the predictable patterns.
## Notes
### Acknowledgements
This work is funded by the Natural Environment Research Council, award reference: NE/L501530/1. RF also received funding from a Met Office Case studentship. MP and CR were supported by the Met Office Hadley Centre Programme funded by BEIS and Defra. We are also grateful for computational support from the UK national high performance computing service, ARCHER. Finally, the authors would also like to thank the editor and anonymous reviewers who assisted in improving the manuscript.
## Supplementary material
382_2019_4814_MOESM1_ESM.pdf: Supplementary material 1 (PDF, 7.3 MB)
## References
1. Andres M (2016) On the recent destabilization of the gulf stream path downstream of cape hatteras. Geophys Res Lett 43(18):9836–9842
2. Berloff P, Hogg AMC, Dewar W (2007) The turbulent oscillator: a mechanism of low-frequency variability of the wind-driven ocean Gyres. J Phys Oceanogr 37(9):2363–2386.
3. Bessières L, Leroux S, Brankart J-M, Molines J-M, Moine M-P, Bouttier P-A, Penduff T, Terray L, Barnier B, Sérazin G (2017) Development of a probabilistic ocean modelling system based on nemo 3.5: application at eddying resolution. Geosci Model Dev 10(3):1091–1106. https://www.geosci-model-dev.net/10/1091/2017/. Accessed 9 Apr 2018
4. Boer GJ (2004) Long time-scale potential predictability in an ensemble of coupled climate models. Clim Dyn 23(1):29–44
5. Branstator G, Teng H (2014) Is AMOC more predictable than North Atlantic heat content? J Clim 27(10):3537–3550.
6. Branstator G, Teng H, Meehl GA, Kimoto M, Knight JR, Latif M, Rosati A (2012) Systematic estimates of initial-value decadal predictability for six AOGCMs. J Clim 25:1827–1846
7. Cabanes C, Huck T, Colin de Verdière A (2006) Contributions of wind forcing and surface heating to interannual sea level variations in the Atlantic Ocean. J Phys Oceanogr 36:1739–1750
8. Chowdhury M, Chu P-S, Schroeder T, Colasacco N (2007) Seasonal sea-level forecasts by canonical correlation analysis–an operational scheme for the us-affiliated pacific islands. Int J climatol 27(10):1389–1402
9. Church JA, White NJ (2011) Sea-level rise from the late 19th to the early 21st century. Surv Geophys 32(4):585–602.
10. Collins M (2007) Ensembles and probabilities: a new era in the prediction of climate change. Philos Trans R Soc A Math Phys Eng Sci 365:1957–1970
11. DelSole T, Tippett MK (2007) Predictability: recent insights from information theory. Rev Geophys 45(4):1188–1204
12. DelSole T, Tippett MK (2009a) Average predictability time. Part I: theory. J Atmos Sci 66:1172–1187
13. DelSole T, Tippett MK (2009b) Average predictability time. Part II: seamless diagnoses of predictability on multiple time scales. J Atmos Sci 66(5):1188–1204
14. Dias DF, Subramanian A, Zanna L, Miller AJ (2018) Remote and local influences in forecasting pacific sst: a linear inverse model and a multimodel ensemble study. Clim Dyn.
15. Farrell B, Ioannou PJ (1996) Generalized stability theory. Part I: autonomous operators. J Atmos Sci 53:2025–2040
16. Farrell BF (1982) The initial growth of disturbances in a baroclinic flow. J Atmos Sci 39(8):1663–1686
17. Farrell BF (1990) Small error dynamics and the predictability of atmospheric flows. J Atmos Sci 47(20):2409–2416
18. Farrell BF, Ioannou PJ (1995) Stochastic dynamics of the midlatitude atmospheric jet. J Atmos Sci 52(10):1642–1656
19. Farrell BF, Moore AM (1992) An adjoint method for obtaining the most rapidly growing perturbation to oceanic flows. J Phys Oceanogr 22(4):338–349
20. Gaspar P, Grégoris Y, Lefevre J-M (1990) A simple eddy kinetic energy model for simulations of the oceanic vertical mixing: tests at station papa and long-term upper ocean study site. J Geophys Res Oceans 95(C9):16179–16193.
21. Gregory JM, Bouttes N, Griffies SM, Haak H, Hurlin WJ, Jungclaus J, Kelley M, Lee WG, Marshall J, Romanou A, Saenko OA, Stammer D, Winton M (2016) The flux-anomaly-forced model intercomparison project (fafmip) contribution to cmip6: investigation of sea-level and ocean climate change in response to co$$_{2}$$ forcing. Geosci Model Dev 9(11):3993–4017. https://www.geosci-model-dev.net/9/3993/2016/. Accessed 14 Mar 2018
22. Häkkinen S, Rhines PB (2004) Decline of subpolar north atlantic circulation during the 1990s. Science 304(5670):555–559. http://science.sciencemag.org/content/304/5670/555. Accessed 21 Mar 2018
23. Häkkinen S, Rhines PB, Worthen DL (2011) Warm and saline events embedded in the meridional circulation of the northern north atlantic. J Geophys Res Oceans 116(C3):0148–0227
24. Häkkinen S, Rhines PB, Worthen DL (2013) Northern North Atlantic sea surface height and ocean heat content variability. J Geophys Res Oceans 118(7):3670–3678.
25. Hawkins E, Robson J, Sutton R, Smith D, Keenlyside N (2011) Evaluating the potential for statistical decadal predictions of sea surface temperatures with a perfect model approach. Clim Dyn 37:2495–2509
26. Hawkins E, Sutton R (2009) Decadal predictability of the Atlantic Ocean in a coupled GCM: forecast skill and optimal perturbations using linear inverse modeling. J Clim 22:3960–3978
27. Huddart B, Subramanian A, Zanna L, Palmer T (2017) Seasonal and decadal forecasts of atlantic sea surface temperatures using a linear inverse model. Clim Dyn 49(5):1833–1845.
28. Hughes CW, Williams SDP (2010) The color of sea level: importance of spatial variations in spectral shape for assessing the significance of trends. J Geophys Res (Oceans) 115:C10048.
29. Jia L, DelSole T (2011) Diagnosis of multiyear predictability on continental scales. J Clim 24(19):5108–5124
30. Legeais J-F, Ablain M, Zawadzki L, Zuo H, Johannessen JA, Scharffenberg MG, Fenoglio-Marc L, Fernandes MJ, Andersen OB, Rudenko S, Cipollini P, Quartly GD, Passaro M, Cazenave A, Benveniste J (2018) An improved and homogeneous altimeter sea level record from the esa climate change initiative. Earth Syst Sci Data 10(1):281–301. https://www.earth-syst-sci-data.net/10/281/2018/. Accessed 3 Aug 2018
31. Lillibridge JL, Mariano AJ (2013) A statistical analysis of gulf stream variability from 18+ years of altimetry data. Deep Sea Research Part II: Topical Studies in Oceanography (Modern Physical Oceanography and Professor H.T. Rossby) 85:127–146
32. Lorenz EN (1963) Deterministic nonperiodic flow. J Atmos Sci 20:130–141
33. Madec G (2008) Nemo ocean engine, technical document. https://eprints.soton.ac.uk/64324/. Accessed 17 Jan 2019
34. Megann A, Storkey D, Aksenov Y, Alderson S, Calvert D, Graham T, Hyder P, Siddorn J, Sinha B (2014) Go5. 0: the joint nerc-met office nemo global ocean model for use in coupled and forced applications. Geosci Model Dev 7(3):1069–1092
35. Meyssignac B, Piecuch CG, Merchant CJ, Racault M-F, Palanisamy H, MacIntosh C, Sathyendranath S, Brewin R (2017) Causes of the regional variability in observed sea level, sea surface temperature and ocean colour over the period 1993–2011. Surv Geophys 38(1):187–215.
36. Miles ER, Spillman CM, Church JA, McIntosh PC (2014) Seasonal prediction of global sea level anomalies using an ocean-atmosphere dynamical model. Clim Dyn 43(7):2131–2145.
37. Nonaka M, Sasai Y, Sasaki H, Taguchi B, Nakamura H (2016) How potentially predictable are midlatitude ocean currents? Sci Rep 6(10):20153
38. North GR, Bell TL, Cahalan RF, Moeng FJ (1982) Sampling errors in the estimation of empirical orthogonal functions. Mon Weather Rev 110(7):699–706
39. Penduff T, Juza M, Barnier B, Zika J, Dewar WK, Treguier A-M, Molines J-M, Audiffren N (2011) Sea level expression of intrinsic and forced ocean variabilities at interannual time scales. J Clim 24(21):5652–5670.
40. Penland C (1989) Random forcing and forecasting using principal oscillation pattern analysis. Mon Weather Rev 117:10
41. Penland C, Sardeshmukh PD (1995) The optimal growth of tropical sea surface temperature anomalies. J Clim 8:1999–2024
42. Polkova I, Köhl A, Stammer D (2015) Predictive skill for regional interannual steric sea level and mechanisms for predictability. J Clim 28(18):7407–7419.
43. Qiu B, Chen S, Schneider N, Taguchi B (2014) A coupled decadal prediction of the dynamic state of the Kuroshio extension system. J Clim 27(4):1751–1764.
44. Rio M-H, Mulet S, Picot N (2014) Beyond goce for the ocean circulation estimate: synergetic use of altimetry, gravimetry, and in situ data provides new insight into geostrophic and ekman currents. Geophys Res Lett 41(24):8918–8925.
45. Roberts CD, Calvert D, Dunstone N, Hermanson L, Palmer MD, Smith D (2016) On the drivers and predictability of seasonal-to-interannual variations in regional sea level. J Clim 29(21):7565–7585.
46. Sasaki YN, Schneider N (2011) Interannual to decadal gulf stream variability in an eddy-resolving ocean model. Ocean Model 39(3–4):209–219. http://www.sciencedirect.com/science/article/pii/S1463500311000771. Accessed 21 Mar 2018
47. Scaife AA, Copsey D, Gordon C, Harris C, Hinton T, Keeley S, O’Neill A, Roberts M, Williams K (2011) Improved Atlantic winter blocking in a climate model. Geophys Res Lett.
48. Schneider T, Griffies SM (1999) A conceptual framework for predictability studies. J Clim 12:3133–3155
49. Sérazin G, Penduff T, Grégorio S, Barnier B, Molines J-M, Terray L (2015) Intrinsic variability of sea level from global ocean simulations: spatiotemporal scales. J Clim 28(10):4279–4292.
50. Sonnewald M, Wunsch C, Heimbach P (2018) Linear predictability: a sea surface height case study. J Clim 31(7):2599–2611.
51. Spall MA (1996) Dynamics of the Gulf Stream/deep western boundary current crossover. Part II: low-frequency internal oscillations. J Phys Oceanogr 26:2169–2182
52. Stommel H (1979) Determination of water mass properties of water pumped down from the ekman layer to the geostrophic flow below. Proc Natl Acad Sci 76(7):3051–3055. http://www.pnas.org/content/76/7/3051. Accessed 14 Aug 2018
53. Toole J, Curry R, Joyce T, McCartney M, Pena-Molino B (2011) Transport of the north atlantic deep western boundary current about 39 n, 70 w: 2004–2008. Deep sea research part II. Topical Studies in Oceanography climate and the Atlantic Meridional Overturning Circulation. 58(17):1768–1780. http://www.sciencedirect.com/science/article/pii/S096706451100021X
54. Wang C, Dong S, Munoz E (2010) Seawater density variations in the north atlantic and the atlantic meridional overturning circulation. Clim Dyn 34(7):953–968.
55. Wang Q, Mu M, Dijkstra AH (2013) Effects of nonlinear physical processes on optimal error growth in predictability experiments of the Kuroshio Large Meander. J Geophys Res Oceans 118(12):6425–6436.
56. Williams KD, Harris CM, Bodas-Salcedo A, Camp J, Comer RE, Copsey D, Fereday D, Graham T, Hill R, Hinton T, Hyder P, Ineson S, Masato G, Milton SF, Roberts MJ, Rowell DP, Sanchez C, Shelly A, Sinha B, Walters DN, West A, Woollings T, Xavier PK (2015) The met office global coupled model 2.0 (gc2) configuration. Geosci Model Dev 8(5):1509–1524. http://www.geosci-model-dev.net/8/1509/2015/. Accessed 16 Jan 2019
57. Zang X, Wunsch C (2001) Spectral description of low-frequency oceanic variability. J Phys Oceanogr 31(10):3073–3095. https://doi.org/10.1175/1520-0485(2001)031<3073:SDOLFO>2.0.CO;2
58. Zanna L (2012) Forecast skill and predictability of observed Atlantic Sea surface temperatures. J Clim 25(14):5047–5056.
59. Zanna L, Brankart JM, Huber M, Leroux S, Penduff T, Williams PD (2018) Uncertainty and scale interactions in ocean ensembles: from seasonal forecasts to multidecadal climate predictions. Q J R Meteorol Soc.
60. Zanna L, Tziperman E (2005) Nonnormal amplification of the thermohaline circulation. J Phys Oceanogr 35(9):1593–1605
61. Zanna L, Tziperman E (2008) Optimal surface excitation of the thermohaline circulation. J Phys Oceanogr 38(8):1820–1830.
## Authors and Affiliations
Robert Fraser¹ · Matthew Palmer² · Christopher Roberts³ · Chris Wilson⁴ · Dan Copsey² · Laure Zanna¹

1. Department of Physics, Clarendon Laboratory, University of Oxford, Oxford, UK
2. Hadley Centre for Climate Change, Met Office, Exeter, UK
3. European Centre for Medium-Range Weather Forecasts, Reading, UK
4. National Oceanography Centre, Liverpool, UK
https://www.physicsforums.com/threads/points-where-gradient-is-zero-plotting-it.165824/ | # Points where gradient is zero (plotting it)
1. Apr 15, 2007
### W3bbo
1. The problem statement, all variables and given/known data
A curve has equation:
x^2+2xy-3y^2+16=0
Find the co-ordinates of the points on the curve where dy/dx=0
I think I was able to differentiate it and get the coordinates fine, but I'm wanting to plot the function in Mathematica (5.2) to see if I'm right or not (BTW, I tried Ma's Dt[] and Differential[] functions, but I can't interpret the results. And plot[f, {x,-2,2}] just gives me error messages because y is undefined).
2. The attempt at a solution
x^2+2xy-3y^2+16=0
2x+2x(dy/dx)+y-3(2y(dy/dx))=0
y+2x+(dy/dx)(2x-6y)=0
(dy/dx)=-(y+2x)/(2x-6y)=0
For the fraction to equal zero, the numerator must also be zero, therefore:
-y-2x=0
y=-2x
Given this, substituting this value for y:
x^2+2x(-2x)-3(-2x)^2+16=0
x^2-4x^2-12x^2+16=0
-15x^2+16=0
x=Sqrt(960)/-30
x=Sqrt(960)/30
but it seems a little hackish to me, this from a past-paper (Edexcel Advanced Level C4, 28th June 2005), usually you get integer answers.
But besides asking if I'm right, how can I plot functions with multiple instances of x and y within? I'm guessing I'd need to convert it to a parametric somehow.
2. Apr 15, 2007
### f(x)
diff. gives $\ 2x + 2x \frac{dy}{dx} + 2y - 3(2y \frac{dy}{dx} ) = 0$
3. Apr 15, 2007
### W3bbo
Where did $+2y$ come from? I didn't have a solitary $y^2$ expression.
EDIT: Ah I see, product rule; I forgot to reapply the coefficient (2) of xy after performing the product differentiation.
Still, how can I plot the function?
Last edited: Apr 15, 2007
4. Apr 15, 2007
### Mathgician
the question is asking for the critical points of the surface right? I have a question do you need to graph this function? Do you need to find the saddle points and min max?
5. Apr 15, 2007
### W3bbo
I'm not being asked to plot the graph, and I've since found what I think are the right co-ordinates (by substituting the resolved value of y into the equation and solving) as ${ (2,0) , (-2,0) }$.
I want to plot the graph out of personal curiosity, to see what the graph actually looks like (but also to make sure I'm right). I haven't covered the plotting of implicit functions on my curriculum's syllabus though. Hence why I'm asking :) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8911451697349548, "perplexity": 944.1469584801677}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476990033880.51/warc/CC-MAIN-20161020190033-00432-ip-10-142-188-19.ec2.internal.warc.gz"} |
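For reference, one way to plot such an implicit curve without solving for y explicitly is to draw the zero contour of F(x, y) = x^2 + 2xy - 3y^2 + 16; a short Python/matplotlib sketch is below (purely as an illustration; Mathematica has analogous implicit/contour plotting built in). Using the corrected derivative from post #2, the horizontal-tangent points must lie on the line y = -x, so that line is overlaid for checking.

```python
# Plot the implicit curve x^2 + 2xy - 3y^2 + 16 = 0 as the zero contour of
# F(x, y), and overlay the line y = -x on which dy/dx = 0 points must lie.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-8, 8, 400)
y = np.linspace(-8, 8, 400)
X, Y = np.meshgrid(x, y)
F = X**2 + 2*X*Y - 3*Y**2 + 16

plt.contour(X, Y, F, levels=[0], colors='b')
plt.plot(x, -x, 'r--', label='y = -x')
plt.xlabel('x'); plt.ylabel('y'); plt.legend()
plt.gca().set_aspect('equal')
plt.show()
```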
https://tutorial.math.lamar.edu/Solutions/CalcI/AreaProblem/Prob3.aspx | Paul's Online Notes
### Section 5.5 : Area Problem
3. Estimate the area of the region between $$\displaystyle h\left( x \right) = - x\cos \left( {\frac{x}{3}} \right)$$ and the $$x$$-axis on $$\left[ {0,3} \right]$$ using $$n = 6$$ and using,
1. the right end points of the subintervals for the height of the rectangles,
2. the left end points of the subintervals for the height of the rectangles and,
3. the midpoints of the subintervals for the height of the rectangles.
a) The right end points of the subintervals for the height of the rectangles.
The widths of each of the subintervals for this problem are,
$\Delta x = \frac{{3 - 0}}{6} = \frac{1}{2}$
We don’t need to actually graph the function to do this problem. It would probably help to have a number line showing subintervals however. Here is that number line.
In this case we’re going to be using right end points of each of these subintervals to determine the height of each of the rectangles.
The area between the function and the $$x$$-axis is then approximately,
\begin{align*}{\mbox{Area}} & \approx \frac{1}{2}f\left( {\frac{1}{2}} \right) + \frac{1}{2}f\left( 1 \right) + \frac{1}{2}f\left( {\frac{3}{2}} \right) + \frac{1}{2}f\left( 2 \right) + \frac{1}{2}f\left( {\frac{5}{2}} \right) + \frac{1}{2}f\left( 3 \right)\\ & = \frac{1}{2}\left( { - \frac{1}{2}\cos \left( {\frac{1}{6}} \right)} \right) + \frac{1}{2}\left( { - \cos \left( {\frac{1}{3}} \right)} \right) + \frac{1}{2}\left( { - \frac{3}{2}\cos \left( {\frac{1}{2}} \right)} \right) + \frac{1}{2}\left( { - 2\cos \left( {\frac{2}{3}} \right)} \right)\\ & \hspace{1.5in} + \frac{1}{2}\left( { - \frac{5}{2}\cos \left( {\frac{5}{6}} \right)} \right) + \frac{1}{2}\left( { - 3\cos \left( 1 \right)} \right)\\ & = \require{bbox} \bbox[2pt,border:1px solid black]{{ - 3.814057}}\end{align*}
Do not get excited about the negative area here. As we discussed in this section this just means that the graph, in this case, is below the $$x$$-axis as you could verify if you’d like to.
b) The left end points of the subintervals for the height of the rectangles.
As we found in the previous part the widths of each of the subintervals are $$\Delta x = \frac{1}{2}$$.
Here is a copy of the number line showing the subintervals to help with the problem.
In this case we’re going to be using left end points of each of these subintervals to determine the height of each of the rectangles.
The area between the function and the $$x$$-axis is then approximately,
\begin{align*}{\mbox{Area}} & \approx \frac{1}{2}f\left( 0 \right) + \frac{1}{2}f\left( {\frac{1}{2}} \right) + \frac{1}{2}f\left( 1 \right) + \frac{1}{2}f\left( {\frac{3}{2}} \right) + \frac{1}{2}f\left( 2 \right) + \frac{1}{2}f\left( {\frac{5}{2}} \right)\\ & = + \frac{1}{2}\left( 0 \right) + \frac{1}{2}\left( { - \frac{1}{2}\cos \left( {\frac{1}{6}} \right)} \right) + \frac{1}{2}\left( { - \cos \left( {\frac{1}{3}} \right)} \right) + \frac{1}{2}\left( { - \frac{3}{2}\cos \left( {\frac{1}{2}} \right)} \right) + \frac{1}{2}\left( { - 2\cos \left( {\frac{2}{3}} \right)} \right)\\ & \hspace{1.5in} + \frac{1}{2}\left( { - \frac{5}{2}\cos \left( {\frac{5}{6}} \right)} \right)\\ & = \require{bbox} \bbox[2pt,border:1px solid black]{{ - 3.003604}}\end{align*}
Do not get excited about the negative area here. As we discussed in this section this just means that the graph, in this case, is below the $$x$$-axis as you could verify if you’d like to.
c) The midpoints of the subintervals for the height of the rectangles.
As we found in the first part the widths of each of the subintervals are $$\Delta x = \frac{1}{2}$$.
Here is a copy of the number line showing the subintervals to help with the problem.
In this case we’re going to be using midpoints of each of these subintervals to determine the height of each of the rectangles.
The area between the function and the $$x$$-axis is then approximately,
\begin{align*}{\mbox{Area}} & \approx \frac{1}{2}f\left( {\frac{1}{4}} \right) + \frac{1}{2}f\left( {\frac{3}{4}} \right) + \frac{1}{2}f\left( {\frac{5}{4}} \right) + \frac{1}{2}f\left( {\frac{7}{4}} \right) + \frac{1}{2}f\left( {\frac{9}{4}} \right) + \frac{1}{2}f\left( {\frac{{11}}{4}} \right)\\ & = \frac{1}{2}\left( { - \frac{1}{4}\cos \left( {\frac{1}{{12}}} \right)} \right) + \frac{1}{2}\left( { - \frac{3}{4}\cos \left( {\frac{1}{4}} \right)} \right) + \frac{1}{2}\left( { - \frac{5}{4}\cos \left( {\frac{5}{{12}}} \right)} \right) + \frac{1}{2}\left( { - \frac{7}{4}\cos \left( {\frac{7}{{12}}} \right)} \right)\\ & \hspace{1.5in} + \frac{1}{2}\left( { - \frac{9}{4}\cos \left( {\frac{3}{4}} \right)} \right) + \frac{1}{2}\left( { - \frac{{11}}{4}\cos \left( {\frac{{11}}{{12}}} \right)} \right)\\ & = \require{bbox} \bbox[2pt,border:1px solid black]{{ - 3.449532}}\end{align*}
Do not get excited about the negative area here. As we discussed in this section this just means that the graph, in this case, is below the $$x$$-axis as you could verify if you’d like to. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.987605094909668, "perplexity": 789.3395571537475}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710503.24/warc/CC-MAIN-20221128102824-20221128132824-00555.warc.gz"} |
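As a quick numerical cross-check, the three sums above can be evaluated directly; the short script below should return approximately −3.814057, −3.003604 and −3.449532.

```python
# Numerical check of the three Riemann-sum estimates for h(x) = -x cos(x/3)
# on [0, 3] with n = 6 subintervals.
import numpy as np

h = lambda x: -x * np.cos(x / 3.0)
a, b, n = 0.0, 3.0, 6
dx = (b - a) / n
edges = np.linspace(a, b, n + 1)

right = dx * h(edges[1:]).sum()                      # right end points
left = dx * h(edges[:-1]).sum()                      # left end points
mid = dx * h((edges[:-1] + edges[1:]) / 2.0).sum()   # midpoints
print(right, left, mid)   # about -3.814057, -3.003604, -3.449532
```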
https://mathoverflow.net/questions/255084/finite-dimensional-approximations-of-the-shift-operator | # Finite-dimensional approximations of the shift operator
On the standard space $l^2$ let us consider the left shift operator $$L(c_1,c_2,c_3,\ldots)=(c_2,c_3,c_4,\ldots).$$ It is well known that the spectrum of $L$ is the whole unit disk in the complex plane. I would like to approximate $L$ by some sequence of finite-dimensional operators $L_n$. A naive way to do this is to set $L_n$ as follows $$L_n=\left(\begin{array}{ccccc} 0 & 1 & 0 & \ldots & 0 \\ 0 & 0 & 1 & \ldots & 0 \\ 0 & 0 & 0 & \ddots & 0 \\ 0 & 0 & 0 & \ldots & 1 \\ 0 & 0 & 0 & \ldots & 0 \end{array}\right)$$
However the spectrum of $L_n$ consists only of $0$. Could one suggest more reasonable finite-dimensional approximation sequence $L_n$ such that spectrum of operator $L_n$ gradually fills the unit disk? References are welcome.
• What kind of approximation do you have in mind? Certainly it is not possible in the norm topology. – Tomasz Kania Nov 19 '16 at 14:14
• Google Berg's method. It's used for approximating the generators of irrational rotation algebras, one of which can be the shift. – David Handelman Nov 19 '16 at 14:14
• I am not sure what topology is suitable for this task, but would like to understand how to find finite-dimensional approximations which do not ignore continuous spectrum. – Anton Nov 19 '16 at 20:11
• Every $|z|<1$ is an eigenvalue with eigenvector $z^n$, so you could just take finitely many of these, truncate them, and make them eigenvectors of a finite-dimensional approximation. It's not so obvious though (to me) if these "approximations" still converge in the strong operator topology; perhaps this will depend on a suitable choice of the eigenvalues. – Christian Remling Nov 20 '16 at 20:21
• Could you explain why you have accepted an answer which does not provide a "good" approximation of the shift? More generally, perhaps you should look at work of Steffen Roch and his coauthors which has a systematic look at "finite section methods" for approximating operators – Yemon Choi Nov 27 '16 at 14:08
If you replace it with the cyclic shift operator, you get a circulant matrix (the same as your $L_n$ except that the bottom-left entry is $1$). The eigenvalues of that matrix are the $n$th roots of unity. So as $n$ grows, the spectrum fills the unit circle (it does not fill the unit disk, though).
Your $L_n$ is a highly non-normal matrix; the circulant version is normal. If you want to understand this better, read Chapter 7 of Trefethen & Embree's Spectra and Pseudospectra, which deals specifically with your example.
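A quick numerical illustration of the contrast (a small NumPy sketch): the truncated shift has only the eigenvalue $0$, while the circulant version has the $n$th roots of unity as eigenvalues.

```python
import numpy as np

n = 8
L = np.diag(np.ones(n - 1), k=1)   # truncated shift: ones on the superdiagonal
C = L.copy()
C[-1, 0] = 1                        # circulant version: wrap-around entry

print(np.linalg.eigvals(L))         # essentially zero: the nilpotent truncation
print(np.abs(np.linalg.eigvals(C))) # all 1: the eigenvalues are the 8th roots of unity
```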
I think that the numerical range is an appropriate tool for your question. Your naive approximations $L_n$ of the shift operator are nilpotent. For such matrices $M$ (nilpotent of size $n$), the numerical range ${\cal H}(M)$ is a disk $D(0;r_n)$ with radius $$r_n=\|M\|\cos\frac\pi{n+1}\,$$ where $\|M\|$ is the standard operator norm. In your situation, $\|L_n\|=1$, so that $$r_n=\cos\frac\pi{n+1}\rightarrow1^-.$$ I suspect that for reasonable operators $L$, the finite dimensional approximations $L_n=P_n^*LP_n$ ($P_n$ the orthogonal projection on an increasing sequence of subspaces) have the property that the union of the ${\cal H}(L_n)$, which form a non-decreasing sequence for inclusion, contains the spectrum of $L$, exactly as ${\cal H}(M)$ contains the spectrum of $M$. This would be true if $L\mapsto {\cal H}(L)$ is lower semicontinuous for a rather weak topology on operators.
https://stats.stackexchange.com/questions/399717/conditional-expectation-how-to-find-exy-when-exy-is-known | # Conditional expectation; how to find E[xy] when E[x|y] is known?
In my studies for an exam which I have on Friday I have come across this assignment from last year in which the following question is asked:
"Let $$E[x] = \mu$$ and $$var[x] = \sigma^2$$. If $$E[x \lvert y] = a + bx$$, find $$E[xy]$$ as a function of $$\mu$$ and $$\sigma^2$$."
Now in this assignment of last year, an answer by a group of students from that year was provided:
"The law of total expectations states $$E[xy] = E[E[xy \lvert x]]$$. By linearity of conditional expectations, $$E[E[xy \lvert x]] = E[xE[y \lvert x]]$$. We get
$$E[xy] = E[E[xy \lvert x]] = E[x[E[y \lvert x]] = E[x(a + bx)] =$$ ...."
Where at the end they simply write out the expression and then input the given $$\mu$$ and $$\sigma^2$$. I'm not sure whether this is correct. First of all, I'm not sure whether I understand correctly how $$E[xy] = E[E[xy \lvert x]]$$ follows from the law of total expectation. I thought that it might be as follows:
$$E[E[XY \lvert X]] = E[XE[Y \lvert X]] = \sum_x xE[Y \lvert X = x]P[X = x] =$$ $$\sum_x \sum_y x y P[Y = y \lvert X = x]P[X = x] = \sum_x \sum_y x y P[X = x, Y = y] = E[XY].$$
However I would like to know if that line of reasoning is correct.
Secondly I don't see why $$E[y \lvert x] = E[x \lvert y]$$ would have to be true. Could anyone tell me whether this is correct and possibly explain why it is correct or what is correct if it isn't? Mathematically, intuitively, or both?
Thank you in advance
• If you write $Z$ for $XY$ (note $Z$ has nothing to do with normal random variables), then would you agree that $E[Z\mid X=x] = E[xY\mid X=x] = xE[Y\mid X=x]$? If so, then note that when $X$ equals $x$, the random variable $E[Z\mid X] = E[XY\mid X]$ takes on value $xE[Y\mid X=x]$ and so the random variable $E[XY \mid X]$ is $XE[Y\mid X]$ (and is a function of $X$ as it shoud be),. – Dilip Sarwate Mar 27 '19 at 15:41
This is correct: $$E[XY] = E[E[XY\lvert X]] = E[XE[Y \lvert X]] = E[X(a + bX)].$$
By linearity of the expectation operator, we get $$E[XY]=aE[X]+bE[X^2].$$ Using the fact that $$E[X]=\mu$$ and $$E[X^2]=\sigma^2+\mu^2$$, we get
$$E[XY]=a\mu+b(\mu^2+\sigma^2).$$
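A quick simulation makes the result easy to believe (a rough Python sketch; the particular distributions for $$X$$ and the noise are just convenient test choices, not part of the problem):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 2.0, 1.5          # E[X] and sd of X (arbitrary test values)
a, b = 1.0, 0.5               # so that E[Y | X] = a + b X

n = 10**6
x = rng.normal(mu, sigma, n)
y = a + b * x + rng.normal(0.0, 1.0, n)    # noise with zero conditional mean

print(np.mean(x * y))                      # simulated E[XY]
print(a * mu + b * (mu**2 + sigma**2))     # formula value: 5.125 for these choices
```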
• So $E[Y \lvert X] = E[X \lvert Y]$? How? – Anon Mar 27 '19 at 16:08
• Of course not, where did you get this? $E(Y|X)$ is a function of $X$, while $E(X|Y)$ is a function of $Y$. I copied the first line from your post. – dlnB Mar 27 '19 at 16:09
• You write that $E[Y \lvert X] = a + bX$, however, it is given that $E[X \lvert Y] = a + bX$. – Anon Mar 27 '19 at 16:11
• Oh right I didn't notice that. It must be a typo. $E(X|Y)$ would have to be a function of $Y$, not $X$. If your assigned problem has written that $E(X|Y)=a+bX$, it must be a typo that should read $E(Y|X)=a+bX$. – dlnB Mar 27 '19 at 16:12
https://www.physicsforums.com/threads/pde-separation-of-variables-problem.100406/ | # PDE: separation of variables problem
1. Nov 18, 2005
### eckiller
I am to reduce the following PDE to 2 ODEs and find only the particular solutions:
u_tt - u_xx - u = 0; u_t(x,0) = 0; u(0,t) = u(1,t) = 0
I guess u = X(x)T(t), and plug u_tt, u_xx into PDE and divide by u to get:
T''/T = X''/X + 1 = K
I solve X'' + (1-K)X = 0 first.
From characteristic equation, r = +- sqrt(1-K)
Because of boundary conditions, we must have 1-K < 0
So r = +- i sqrt(K-1)
(I think I make an error somewhere around here...)
==> u = c1 cos(sqrt(K-1)x) + c2 sin(sqrt(K-1)x)
From boundary conditions, c1 =0, and sin(sqrt(K-1)) = 0 => sqrt(K-1) = n*pi
=> K-1 = n^2*pi^2
=> K = n^2*pi^2 + 1
Correct so far?
Now for T:
T'' - KT = 0
T'' - (n^2*pi^2 + 1)T = 0
r = +- sqrt(n^2*pi^2+1)
I think my K = n^2*pi^2 + 1 is wrong because it is strictly positive and I don't think an e^t solution will satisfy the initial condition.
The book's answer is cos( sqrt(n^2*pi^2-1)t) * sin(n*pi*x)
2. Nov 18, 2005
### saltydog
When you separate variables and equate to a separation constant, usually need to check the boundary conditions against a positive, negative, and zero separation constant. For now, these types of equations just have a negative separation constant. That is:
$$\frac{T^{''}}{T}=\frac{X^{''}}{X}+1=K=-\lambda;\quad \lambda>0$$
So, need to solve:
$$\frac{X^{''}}{X}+1=-\lambda$$
or:
$$X^{''}+X(1+\lambda)=0$$
Solving for the roots:
$$m=\pm \sqrt{-(1+\lambda)}$$
Thus have:
$$X(x)=A_1Cos(\sqrt{1+\lambda}x)+A_2Sin(\sqrt{1+\lambda}x)$$
Substituting the boundary conditions yield:
$$\lambda=(n\pi)^2-1;\quad n>0$$
and:
$$X(x)=A_2Sin(n\pi x)$$
Can you do T now?
Edit: Also, I think you need another constraint to obtain a particular solution, you know, wave equations usually specify the initial velocity as well.
Last edited: Nov 18, 2005
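A sketch of the remaining step, using the given initial condition $$u_t(x,0)=0$$ (so $$T'(0)=0$$): with $$K=-\lambda=-\left[(n\pi)^2-1\right]$$ the time equation becomes

$$T^{''}+\left[(n\pi)^2-1\right]T=0,\qquad T^{\prime}(0)=0\ \Rightarrow\ T(t)=\cos\left(\sqrt{n^2\pi^2-1}\,t\right)$$

so the product solutions are $$u_n(x,t)=A_n\cos\left(\sqrt{n^2\pi^2-1}\,t\right)\sin(n\pi x)$$, which matches the book's answer quoted above.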
http://photonics101.com/radiation-and-antennas/directivity-parabolic-reflector-dipole | ## The Directivity of a Parabolic Reflector
Why are parabolic reflectors as part of antenna systems so common? The reason is that they provide a very high directivity. In this problem you will learn about this very basic concept of antenna theory to quantify emitters and receivers at the examples of a dipole, a "half-dipole" and a parabolic reflector.
## Problem Statement
Calculate the directivity of a dipole oscillating in $$z$$ direction. Then, assume the radiation propagating in negative $$x$$ -direction gets absorbed in the farfield of the dipole. How does the directivity change?
Now we want to put this “half-dipole” in the focal point of a parabolic reflector with opening radius $$R$$ and focal distance $$f$$, see figure on the right. How can we calculate the directivity of this device approximately for a small opening angle $$\delta\theta \approx R/f$$? Assume that the backreflected light is not influenced by the “half-dipole” itself and calculate $$D$$ for a wavelength $$\lambda=500\,$$nm and an opening radius $$R=20\,$$cm.
## Hints
A plane wave with wavelength $$\lambda=2\pi/k$$ incident on a hole with radius $$R$$ will cause an intensity profile$\begin{eqnarray*} I\left(\theta^{\prime}\right)&\propto&\left(\frac{2J_{1}\left(kR\sin\theta^{\prime}\right)}{kR\sin\theta^{\prime}}\right)^{2} \end{eqnarray*}$far away from the aperture (Fraunhofer approximation). Here, $$\theta^{\prime}$$ is the angle with respect to the direction of propagation and $$J_{1}$$ is a cylindrical Bessel function. The first zero of this intensity is at the first nonzero root of this Bessel function, at $$kR\sin\theta^{\prime}\approx3.83$$.
If an emitter has a single main lobe with an elliptical form, $$\Omega_{p}$$ is approximately given by the product of the angles where the intensities in the main axis system are half of the maximum,$\begin{eqnarray*} \Omega_{p}&\approx&\alpha_{1}\alpha_{2}\ . \end{eqnarray*}$
## Show Solution
Let us first consider the calculation of directivity for the dipole and "half-dipole" to familiarize with the concept. Then we will come to the parabolic reflector where we will have to work a little more to find a meaningful expression for the directivity.
## Dipole and “Half-Dipole”
For a dipole oscillating along the $$z$$-axis we have $$F_{d}\left(\theta,\varphi\right)=\sin^{2}\theta$$, so$\begin{eqnarray*} D_{d}&=&\frac{4\pi}{\int_{0}^{2\pi}\int_{0}^{\pi}\sin^{3}\theta d\theta d\varphi}\\&=&\frac{4\pi}{2\pi\int_{0}^{\pi}\sin^{3}\theta d\theta d\varphi}\\&=&2/\left[-\frac{3}{4}\cos\theta+\frac{1}{12}\cos3\theta\right]_{0}^{\pi}\\&=&3/2\ . \end{eqnarray*}$For the given assumptions, the “half-dipole” is given by$\begin{eqnarray*} F_{hd}\left(\theta,\varphi\right)&=&\begin{cases}\sin^{2}\theta & \varphi\in\left[-\frac{\pi}{2},\frac{\pi}{2}\right]\\0 & \mathrm{else}\end{cases} \end{eqnarray*}$if we accept negative $$\varphi$$. Then, since the calculation is exactly the same as for the dipole except that the $$\varphi$$ integration results in $$\pi$$ rather than $$2\pi$$, we find$\begin{eqnarray*} D_{hd} = 2D_{d}=3\ . \end{eqnarray*}$
## The Parabolic Reflector
Now to the half-dipole in the focus of a parabolic reflector. If we consider this problem in the ray optics picture, we find for the radial part of the Poynting vector$\begin{eqnarray*} S_{r,\mathrm{refl}}\left(\theta,\varphi\right)&\propto&\delta\left(\theta-\pi/2\right)\delta\left(\varphi+\pi\right)\ . \end{eqnarray*}$This formula basically means that all of the energy is focussed in one point, represented by the delta-distributions which also implies $$D\rightarrow\infty$$. This is very bad from two perspectives. From a mathematical viewpoint, this energy density only makes sense inside an integral, i.e. for $$P_{\mathrm{rad}}$$. But how to normalize the Poynting vector to obtain $$F$$? Also from a physical point of view it is not really convincing since diffraction will cause the energy density to be distributed.
We may include this effect to find a somewhat more meaningful directivity than just infinity: we may simply introduce the radius of the reflector as an effective circular aperture. We may further assume a plane wave illumination with constant amplitude, see figure on the left. This additional approximation is very good if the opening angle $$\delta\theta \approx R/f \ll 1$$ since the radiation pattern of the dipole, $$\propto \sin^2 \theta$$, is approximately constant there. Otherwise, the intensity in the Fraunhofer diffraction would be more complicated to calculate. Then, the intensity around the axis of propagation and angle $$\theta^{\prime}\in\left[0,\pi\right]$$ with respect to this axis is given by$\begin{eqnarray*} I\left(\theta^{\prime}\right)&\propto&\left(\frac{2J_{1}\left(kR\sin\theta^{\prime}\right)}{kR\sin\theta^{\prime}}\right)^{2}\ . \end{eqnarray*}$In our case, the propagation is along the negative $$x$$-axis. Now we may use that $$\Omega_{p}$$ is approximately given by the product of the angles where the intensities in the main axis system are half of the maximum,$\begin{eqnarray*} \Omega_{p}&\approx&\alpha_{1}\alpha_{2}=\alpha^{2} \end{eqnarray*}$since $$\alpha_{1}=\alpha_{2}=\alpha$$ in our axisymmetric intensity. Then, $$\alpha$$ is approximately given by the first root of the Bessel function,$\begin{eqnarray*} kR\sin\alpha&\approx&3.83\ ,\ \mathrm{so}\\D_{hp}&\approx&\frac{4\pi}{\arcsin^{2}\left[3.83/kR\right]} \end{eqnarray*}$with $$k=2\pi/\lambda$$. The diffraction pattern of a circular aperture is one of the basic concepts in diffraction theory - the central lobe bounded by the angle $$\alpha$$ even has a name and is called the Airy disk, see figure on the right. Coming back to the small opening angle approximation, we can see now that for higher $$\delta\theta$$, the fall-off of the intensity of the dipole on the reflector would act as a "smooth aperture" itself and we would find an even stronger decay in the farfield intensity. In turn, the directivity would be higher and our result is hence a lower limit for the directivity.
If we take now for example $$R=20\,$$cm and $$\lambda=500\,$$nm, we find $$kR=2.5\cdot10^{6}$$ and $$\alpha\approx1.5\cdot10^{-6}$$. Then, the directivity of the parabolic reflector is given by$\begin{eqnarray*} D_{hp}&\approx&5.4\cdot10^{12}\ . \end{eqnarray*}$Well, this is still pretty much infinity. Nevertheless, this kind of result can be expected for such a small wavelength and this extremely big aperture since the intensity is basically a delta-peak. In reality, an antenna will always have a lower directivity since it may be impossible to produce an ideal parabolic reflector. Surface roughness, the position of the dipole, and absorption by the source as a geometrical obstacle will be the main fabrication limitations. There are also a few other things one has to bear in mind like the angular dependence of the reflection of the dipole's radiation, the losses in the reflector etc.
In the end, the directivity may be lowered by several orders of magnitude - but it will still be an absolutely impressive number. This lets us understand why parabolic antennas are indeed used in a lot of applications where directivity is extremely important, like telescopes in astrophysics or receivers for broadcast signals from satellites.
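The quoted numbers can be reproduced directly from the lower-limit formula $$D_{hp}\approx4\pi/\alpha^{2}$$ derived above (a small Python sketch of the estimate):

```python
import numpy as np

lam = 500e-9          # wavelength in m
R = 0.20              # opening radius in m

k = 2 * np.pi / lam
alpha = np.arcsin(3.83 / (k * R))   # angular radius of the Airy disk
D = 4 * np.pi / alpha**2            # directivity estimate (lower limit)

print(k * R)    # ~2.5e6
print(alpha)    # ~1.5e-6 rad
print(D)        # ~5.4e12
```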
## Background: Directivity in Antenna Theory
The radiated power $$dP_{\mathrm{rad}}$$ into a certain angle $$d\Omega=\sin\theta d\theta d\varphi$$ is related to the radial component of the time-averaged Poynting vector $$S_{r}\left(r,\theta,\varphi\right)$$,$\begin{eqnarray*} dP_{\mathrm{rad}}&=&r^{2}S_{r}\left(r,\theta,\varphi\right)d\Omega\equiv r^{2}S_{r,\mathrm{max}}\left(r\right)F\left(\theta,\varphi\right)d\Omega\ . \end{eqnarray*}$As defined above, one may also normalize $$S_{r}$$ and use the normalized radiation intensity $$F$$ instead. This quantity is used to define the directivity $$D$$ of a certain antenna or source,$\begin{eqnarray*} D&=&\frac{4\pi}{\Omega_{p}}\ ,\qquad \Omega_{p}\equiv\int_{0}^{2\pi}\int_{0}^{\pi}F\left(\theta,\varphi\right)\sin\theta d\theta d\varphi\ . \end{eqnarray*}$
https://math.stackexchange.com/questions/447357/number-of-optimal-paths-through-a-grid-with-an-ordered-path-constraint | # Number of optimal paths through a grid with an ordered path constraint
I found, but the awesome explanation of Arturo Magidin: Counting number of moves on a grid
the number of paths for an MxN matrix. If I am thinking about this correctly (please say something if I am wrong), but the number of optimal/shortest paths from the lower left corner @ (0,0) to the top right corner @ (m,n) is any path that can be done in m + n moves as we would have to at least move m spaces to the right and n spaces up at some point. If there is no "backtracking" i.e. that there are no allowed moves which are down or left, then all the paths will be of size m + n. Thus the total number of paths from (0,0) to (m,n) is $\binom{m + n - 1}{m}$ or $\binom{m + n -1}{n}$ as we don't care about order and so could transverse in respect to m or n - both lead us to the end point.
The question I am trying to figure out is that when we say we have to move up (at least once) for every move to the right. Thus one move to the right has to be followed by one or moves up. Now I see (or think I see) that we would have the total number of possible paths as above (without cycles or down or left movements) minus those paths which have two right movements RR or above in a row. Thus we could have RUURURUUUR, but nothing such as RUURR...
I don't see how to do this. I am curious of both a straight forward way (combinatorically) as well as the recursive solution if anyone wouldn't mind giving me a hand.
Thanks,
Brian
• Is the constraint that every R must be followed by a U, or that the total number of R's must be less than or equal to the total number of U's. If the former, the previous analysis can be modified to fit. If the latter, look up Dyck Paths, the wikipedia article on Catalan numbers is a good place to start. – deinst Jul 19 '13 at 13:21
• I edited the question to make it clearer. In answer though, ever move to the right HAS to be followed by one or more ups. The number of R's don't have to be less than or equal to the U's, this all depends on the rectangular grid i.e. if the grid is longer vertically than horizontally then U's will be more than the number of R's, but if the grid is wider than tall we will have more R's. – Relative0 Jul 19 '13 at 13:32
• Shouldn't the number of paths from (0,0) to (m,n) be C(m+n,m) or C(m+n,n)? (Drat, I can't get LaTex to work for me.) – awkward Jul 19 '13 at 13:35
Wrap RU with duct tape and consider it to be one character; there must be m of these. There must also be n-m U's not preceded by an R. (Hence $n \ge m$.) So there are $$\binom{n}{m} = \binom{n}{n-m}$$ acceptable arrangements.
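This count is easy to confirm by brute force for small cases (a rough Python sketch, assuming the reading of the constraint used in this answer: every R must be immediately followed by a U):

```python
from itertools import permutations
from math import comb

def count_paths(m, n):
    """Paths with m R's and n U's where every R is immediately followed by a U."""
    words = set(permutations("R" * m + "U" * n))
    ok = 0
    for w in words:
        s = "".join(w)
        if all(i + 1 < len(s) and s[i + 1] == "U"
               for i, c in enumerate(s) if c == "R"):
            ok += 1
    return ok

for m, n in [(2, 3), (3, 4), (3, 3)]:
    print(m, n, count_paths(m, n), comb(n, m))   # the last two columns agree
```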
Let $m$ be the horizontal direction and $n$ be the vertical direction.
If $m>n+1$, there are no such paths since there will always be at least one set of $RR$s in the path.
Since each R must have a U between, we can always start with at least $RURU...URUR$ as a pattern. Thus, we need at least $m-1$ $U$ moves.
The "surplus" can be arranged any way we see fit. Then there are $\Big(\binom{n-m+1}{n+1}\Big)$ ways of arranging your "extra" $U$ moves between the $R$ moves - or before the first $R$ move or after the last $R$ move, where $\Big(\Big(.\Big)\Big)$ is the notation for the multiset coefficient. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8223450183868408, "perplexity": 189.4798692323362}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314732.59/warc/CC-MAIN-20190819114330-20190819140330-00461.warc.gz"} |
http://clay6.com/qa/125930/a-charged-oil-drop-is-suspended-in-a-uniform-field-of-3-times-10-4-v-m-so-t | # A charged oil drop is suspended in a uniform field of $3 \times 10^4 V/m$ so that it neither falls nor rises. The charge on the drop will be (take the mass of the charge = $9.9 \times 10^{-15} kg$ and $g = 10\;m/s^2$)
( A ) $1.6 \times 10^{-18} C$
( B ) $4.8 \times 10^{-18} C$
( C ) $3.3 \times 10^{-18} C$
( D ) $3.2 \times 10^{-18} C$
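Quick check: for the drop to neither fall nor rise, the electric force must balance the weight, so

$$qE = mg \quad\Rightarrow\quad q = \frac{mg}{E} = \frac{9.9 \times 10^{-15} \times 10}{3 \times 10^4} = 3.3 \times 10^{-18}\;C,$$

which corresponds to option (C).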
http://mathhelpforum.com/algebra/1658-expanding-factoring-q.html | # Math Help - Expanding/Factoring Q.
1. ## Expanding/Factoring Q.
Alright. I do not know the answer to this question, and need to know.
If you were to expand (x-7)(x+7) there would be no x-term. Explain this!
2. Originally Posted by Constance
Alright. I do not know the answer to this question, and need to know.
If you were to expand (x-7)(x+7) there would be no x-term. Explain this!
There is an x term! FOIL. First, Outer, Inner Last.
This gives us $x*x+7*x-7*x-49=x^2-49$. This is a special expansion called the difference of squares.
I suppose you mean that when you expand this, there is no x term of first order.
$(x-7)(x+7) \Leftrightarrow (x+7)(x-7) \Leftrightarrow (a+b)(a-b)$
For this last one there is a theorem/rule whose name in English I don't know (the difference of squares mentioned above).
$(a+b)(a-b) \Leftrightarrow a^2 - b^2$
Which gives.
$(x-7)(x+7) \Leftrightarrow x^2 - 7^2 \Leftrightarrow x^2 - 49$
There you have it, no x of first degree!
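The cancellation is easy to confirm symbolically as well (a tiny SymPy sketch, just as a check):

```python
from sympy import symbols, expand

x = symbols('x')
print(expand((x - 7) * (x + 7)))   # x**2 - 49: the x terms cancel
```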
https://www.physicsforums.com/threads/voltage-as-the-cause-of-motion-of-charges.734252/ | # Voltage as the cause of motion of charges?
1. ### gralla55
57
Voltage as the "cause of motion" of charges?
I know voltage is defined as the potential energy difference per unit charge between two points A and B. In textbooks they often describe this potential difference as the "cause" of current through a wire. Further, if this potential difference gets high enough, the charges does not even require a wire to get from point A to point B (like lightning).
This is what I fail to understand. Shouldn't the coulomb force still be the "cause" of motion for charges? If you increase the distance between two opposite charges, you also increase the voltage between them, but then the coulomb force is reduced, making their pull towards each other less.
My question is basically, is a high potential difference enough to cause lightning, and if not, is it really correct to label voltage as the cause of current through a wire?
### Staff: Mentor
The Coulomb force is the gradient of the voltage. The two are completely equivalent.
3. ### cabraham
In general voltage is not the cause of current. The 2 are related through Ohm's law or non-linear relation such as diode equation, light bulb, etc. But energy conversion such as hydroelectric generator, coal burning/steam engine, chemical redox in battery, etc., is the "cause" of current, as well as of voltage. To get voltage you have to move charge, but that movement is indeed current. It takes current to produce voltage, and once the charges are separated, they can influence other charges to move.
It's chickens and eggs.
Claude
4. ### gralla55
57
DaleSpam:
You are right of course! Voltage actually "decreases" with distance, so that makes a lot more sense. I think what confused me was that I thought in terms of gravitational potential energy near the surface of the earth (= mgh), which increases with distance.
5. ### gralla55
57
Perhaps I'm just really stupid... but I thought some more about this, and now it does not make any sense again. The electric field is the "negative" of the gradient of the voltage, so the voltage does increase with distance (though not in magnitude).
6. ### gralla55
57
Now I'm spamming my own tread, but say you have a charged plate with infinite area. The force is the same everywhere, while the potential is E times the perpendicular distance away from the plane. So even if you increase the voltage by moving away, it doesn't increase the strength of the electric field, and shouldn't cause any more movement of charges...
7. ### rcgldr
7,645
That would be true for a negatively charged plate. The sign convention for a field and potential assumes a positive charged source. For a positively charged plate, the voltage is at a maximum at the surface of the plate and decreases with distance. Assuming a positively charged particle in the field, accelerating away from the positively charged plate, the potential energy of the charged particle decreases and the kinetic energy of the charged particle increases. I'm not sure if or how the field generated from the accelerating charged particle affects this.
I'm not sure of a convention for reference voltage in such a situation. The voltage could be defined as zero at the surface of a positively charge plate, and approach -∞ as the distance approaches ∞. For a capacitor, voltage could be defined as zero at the surface of the negatively charged plate, linearly increasing with distance from the negative plate to a maximum at the positively charged plate.
Last edited: Jan 23, 2014
8. ### jartsa
604
In this case a water hose analogy works perfectly: We have a vertical elastic water tube filled with water, at the low end of the tube there will be some hydrostatic pressure. Then we stretch the tube and observe an increase of the hydrostatic pressure. Every water molecule weighs the same as before though.
Let's consider some charges inside an elastic wire, placed in a homogeneous electric field. There are some charges side by side (parallel), and some charges one after the other (in series).
When we stretch this wire, charges organize themselves so that they are more in series, and less parallel.
Million point charges one after the other, each feeling a Coulomb force of one piconewton --> Large voltage
Million point charges side by side, each feeling a Coulomb force of one piconewton --> Small voltage
Last edited: Jan 23, 2014
9. ### gralla55
57
You're right, I meant for a negatively charged plate. It doesn't really matter where I define the voltage to be zero, the difference in voltage between two points will come out the same.
Anyway, the point of my question was to understand why higher voltage equals higher current. I get that you need a voltage difference for there to even be a current, but it seems to me that a higher voltage does not necessarily equal a higher coulomb force between the two points.
10. ### Crazymechanic
849
Higher voltage is more charges per given area.
More alike charges on a given area means a stronger electric field there, which means more potential to do work if a current path is formed. More charges flow.
About the lightning and breakdown: a very high PD across two points, say 1 metre apart, gives a very strong electric field between those points. Since we have air around us and many dielectrics like wood and plastic, the field polarizes the stuff we normally don't call conductors, and if the field is strong enough these materials can start to conduct and become conductors, like the breakdown of air. When that has happened the current forms a path called an arc; resistance drops in this arc and it becomes a conductor, like a wire in the air.
Vacuum is the best insulator, mainly because it is almost empty of particles and matter, so there is not much for the field to polarize, hence a very low probability of a breakdown and a very high PD required to achieve one.
11. ### jartsa
604
There was some problem regarding two charged plates moved away from each other, and some charges between the plates. What was the problem? Are the charges in a wire? Does the wire move when the plates are moving?
Oh yes, one more thing that is unclear: Who were you replying to?
Last edited: Jan 23, 2014
12. ### gralla55
57
jartsa:
I didn't notice your reply, so I wasn't replying to you! What you wrote made sense though. The initial problem was that I saw voltage described as the driving force for current in several places. I then pictured some cases where higher voltage meant lower electric field, or the same electric field, which should mean lower current in spite of higher voltage. This in turn would mean that voltage can't really be seen as the driving force for current.
For the uniform eletric field case:
V = Ed means E = V/d
so the voltage is proportional to the electric field, but given a very high voltage, the electric field could still be small if d is even higher.
But this just looks like some stripped down version of ohms law. I imagine the length of the wire is part of what makes up resistance? If so, higher voltage would drive a larger current, with everything else being equal, but the electric field itself is what drives the current, and that depends on more than voltage.
### Staff: Mentor
Sorry I missed this earlier. You are correct, the E field is the negative gradient of the voltage. Sorry for being sloppy. It depends on the sign of the charge whether voltage increases or decreases with distance.
However, it seems like you still are confused. For a uniform E field the negative gradient of the voltage is uniform. The voltage increases with distance, but its negative gradient is constant. Recall from your own OP that it is voltage differences which drive current. The gradient operator gives the local "differences" of a scalar field.
14. ### gralla55
57
I'll admit to still being a little confused about this. The whole point of the thread was that I couldn't understand "why" voltage is the supposed driving force of current, as it is the electric field which asserts a force on a charge (and thereby causes it to move).
As you pointed out, the voltage can increase without an increase in electric field.
### Staff: Mentor
Voltage doesn't drive current, voltage differences do.
Think of the water analogy. Pressure doesn't drive water through pipes, pressure differences do.
As you said in your OP:
There is mathematically no difference between saying that the E field drives it and saying voltage differences drive it. They are mathematically the same thing: the E field is the negative gradient of the voltage in electrostatics.
Last edited: Jan 24, 2014
16. ### jartsa
604
Yeah. Point charge does not know about different voltage differences. That's why charges stay in a million volt power line, the charges flow over voltage difference of one volt and distance of ten meters along the line, instead of flowing over distance of ten meters and voltage difference of million volts to the ground.
The long cylinder shaped charged object inside the wire may be thought to be siphoned from high potential to low potential, maybe. (The electron gas is the long object)
Last edited: Jan 24, 2014
17. ### gralla55
57
Voltage = potential difference.
As for the water analogy, you could say that pressure difference drives the water, but it is the gravitational field thas is responsible for the pressure in the first place.
I don't think the analogy is perfect either, as the significant gravitational field is generated by the earth, not from the water molecules themselves.
"There is mathematically no difference between saying that the E field drives it and saying voltage differences drive it. They are mathematically the same thing: the E field is the negative gradient of the voltage in electrostatics. "
They are related mathematically, but the slope of a function between two points is not "the same" as the differences in function value between those two points.
For linear functions of one variable, there are two ways to increase the difference f(b) - f(a):
1) Increase the difference between b and a.
2) Increase the slope
I understand how 2), which corresponds to increasing the electric field, would increase a current, at the same time as it increases the voltage. 1) on the other hand, should as far as I can tell, have no effect on the current.
### Staff: Mentor
I don't think that this distinction is a real distinction for this particular application.
If you are talking about the "driving" of the current at a point then you have either the E-field or the negative gradient of the voltage. If you are talking about the "driving" of the current over some path between two points then you either have to integrate the E-field along the path or take the difference of the voltage at the end points.
Either way is mathematically equivalent for either case.
Basically, if you say "the E-field drives the current at point a" then it must also be true that "the gradient of the voltage drives the current at point a" since ##E=-\nabla V##. And if you say "the voltage difference drives the current from a to b" then it must also be true that "the E-field along the path drives the current from a to b" since $$-\int_a^b E\cdot ds = \Delta V$$
Last edited: Jan 24, 2014
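The equivalence in the uniform-field case can also be seen numerically (a rough Python sketch; the field value and grid are arbitrary choices):

```python
import numpy as np

E0 = 2.0                          # uniform field strength (arbitrary units)
x = np.linspace(0.0, 1.0, 1001)
V = -E0 * x                       # potential of a uniform field

E = -np.gradient(V, x)            # E as the negative gradient of V
dV = -np.sum(0.5 * (E[1:] + E[:-1]) * np.diff(x))   # minus the line integral of E

print(E[:3])                      # constant, equal to E0
print(dV, V[-1] - V[0])           # both give Delta V = V(b) - V(a) = -2.0
```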
19. ### gralla55
57
I think the distinction still applies. I imagine a conductor of some length l, placed in a uniform electric field.
If you take three points a, b and c on the conductor, where V(a) < V(b) < V(c), then V(c) - V(a) > V(b) - V(a).
If the current is larger at point c, then it really is the same thing as you're saying. However, isn't current in a conductor always constant? In that case, larger voltage does not equal larger current, unless the larger voltage is due to a larger eletric field.
20. ### Crazymechanic
849
I see what you're thinking.
First of all, how can there be any current in a wire that just hangs in mid air and isn't attached to any circuit or anything? There is not and cannot be any current in such a wire.
You can place the wire in a uniform or non-uniform electric field; all that will happen is that the wire will polarize, which means that, as you implied, different parts of the wire will have varying strengths of charge. If you would apply a varying electric field to the wire and then attach the wire to a circuit, then you would get a current induced, because now the charge would flow as a PD would be formed.
And yes, once you have a current path or a circuit the current is constant; before that it isn't, but that's because there is no current at all before that. It's like a river that has been blocked by a dam. If the river isn't flowing we don't call it a river, maybe a lake. So the water has accumulated in this one area and has a potential to do something, but only when you let it flow towards the lower place, the lower potential, only then do we say that water flows. So with current. You can have a wire in a high electric field, etc., but if that wire isn't attached to anything no current can flow, and when current does start to flow it is the same everywhere.
https://www.physicsforums.com/threads/try-other-units.58347/ | # Try other units
1. Jan 1, 2005
### marcus
It can be useful to construct and try out alternative systems of units---mind-broadening and you can get a different perspective on physics.
And in several specialties they do this anyway because it is almost a precondition to being able to think. Whether QFT or Gen Rel they all have their own bastardized units and alternatives to straight metric that help them get a grip on things.
for instance in Gen Rel the main equation is the Einstein equation
$$G_{ab} = T_{ab}$$
basically curvature equals energy density, the concentration of matter curves space
When you see that equation you know that the person writing it is using a set of units in which the UNIT OF FORCE IS 500 TRILLION TRILLION TRILLION TONNES. (I am expressing it in the unofficial metric tonnes force because saying it in the official newtons unit would be even worse.)
If you put everything back into the Einstein equation it says this
$$G_{ab} =\frac{8\pi G_{Newt}}{c^4} T_{ab}$$
And the central coefficient in the Einstein equation is the reciprocal of this huge force
$$F_{Einst} =\frac{c^4}{8\pi G_{Newt}}$$
and the relativist writing the equation adjusts his units to make that huge force unity, as well as making the speed of light unity, so he can streamline the equations and think conveniently.
The thing to remember is that just for dimensional reasons if you divide any energy density by a force you get a reciprocal area which is to say a CURVATURE. so to express the curving effect of a concentration of energy you have to divide by some force. the one Nature uses is this universal constant force FEinst and that force is just built into how gravity works and how the universe is shaped.
So let's try taking this force seriously and constructing humanscale practical units that are in harmony with it, or easy power of ten relation to it-----and the same for other basic constants like hbar and c.
We can set up a system like that and try working some simple exercises to see how it goes. The exercises will be conventional vintage phyics and
the majority of them will be at the ATOMS AND MOLECULES scale, or larger, I guess. I will have some exercises about temperature, the speed of sound, electrostatic force between point charges, blackbody radiation,
so I guess the Atoms Molecules Solids forum is the right place for it.
2. Jan 1, 2005
### marcus
so here's how the construction goes
we decide on power-of-ten sizes for three basic quantities
|FEinst| = 10^43
|hbar| = 10^-32
|c| = 10^9
Deciding that the constants will have those sizes (in our units) completely determines what our units of force, length, and time must be---and all the other units which can be derived from them.
This gives a handwidth-size unit of length, and a unit of duration that is about 222 to the minute. I will call it a count because that is as fast as I can count (to twenty repeatedly, not all the way to 222!) And the unit force, which I call a mark of force just to have a name for it, and which must be 10^-43 of the great constant force, comes out to two ounces, or half a metric newton.
FEinst = 10^43 marks
hbar = 10^-32 mark hand count
c = 10^9 hand per count
We will not ordinarily need to have metric equivalents because we will be able to work exercises in terms of these units, so only a rough idea of their size is really needed. But here at the beginning I will give multidigit equivalents, which afterwards we shall mostly ignore. BTW the derived energy unit, mark hand---pushing with mark force for unit distance---is about 1/100 of a calorie or 1/25 of metric joule. I will call it a jot.
The unit of mass is pound-size and will be called pound. The unit of volume, cubic hand, is pint-size.
hand 8.10263 centimeters (3 and 1/4 inch, obviously a handwidth)
count 0.270275 second
mark 0.4816 newton
jot 0.039018 joule
pound 434.14 grams
sq. hand 65.65 sq. centimeters
cubic hand 532 cubic cm.
mark per sq.hand 73 pascal
hand per sq. count 1.11 meter per sq. second
for a bit more definition we set sizes for the elementary charge e and the Boltzmann constant k:
|k| = 10^-22
|e| = 10^-18
that makes the temperature step be half a Fahrenheit step
1000 degrees absolute equals 49 F on the conventional fahrenheit scale
the voltage unit is called a quartervolt (abbr. Q) and the microscopic energy unit eQ (eekyoo, electronquartervolt) is exactly 10^-18 of a jot, the macroscopic one.
the relation between photon energy and vacuum (angular) wavelength is given by
$$\hbar \times c = 10^{-5} \text{ eQ hand} = 10 \text{ eQ microhand}$$
As it happens a typical angular wavelength for green light is one microhand and the corresponding photon energy is 10 eQ
the relation between temperature and energy is given by
$$k = 10^{-4} \text{ eQ per degree}$$
It happens that a common estimate of the surface temp of the sun translates to just over 20,000 degrees. So kT for the solar surface is 2 eQ.
as an illustration, the average photon energy in thermal radiation from a surface at temp T is (in any system of units) 2.701 kT
So the average photon in sunlight has an energy of 5.4 eQ.
Now I hope very much that I have not frightened everybody off by this preliminary flood of information. Let's see what simple exercises we can construct, using these units.
Last edited: Jan 1, 2005
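Those metric equivalents follow from the three defining constants by straightforward arithmetic; here is a rough Python sketch of the derivation (the metric values of c, G, hbar and k are the standard approximate ones):

```python
import math

# standard approximate metric values (assumed, good to ~4 digits)
c    = 2.998e8        # m/s
G    = 6.674e-11      # m^3 kg^-1 s^-2
hbar = 1.0546e-34     # J s
k_B  = 1.381e-23      # J/K

F_einst = c**4 / (8 * math.pi * G)      # the constant force, ~4.8e42 N
mark  = F_einst / 1e43                  # unit of force
hand  = math.sqrt((hbar / (1e-32 * mark)) * (c / 1e9))   # from hbar and c
count = hand / (c / 1e9)                # unit of time
jot   = mark * hand                     # unit of energy
pound = jot / (hand / count)**2         # unit of mass
degree = 1e-22 * jot / k_B              # temperature step in kelvin

print(hand * 100, count, mark, jot, pound * 1000, degree)
# ~8.10 cm, ~0.270 s, ~0.48 N, ~0.039 J, ~434 g, ~0.283 K
```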
3. Jan 2, 2005
### marcus
Just had an example of calculating with different units
It looks to me like Daniel (dextercioby) used the metric system and had a pretty complicated calculation and his answer is (IMHO) a little off.
We were calculating the number of photons per cubic meter at room temp
for me, on absolute scale, room temp is 1040
so I go:
$$\frac{1}{2.701}\times \frac{\pi^2}{15} \times 10400^3$$
that gives me 2.740E11 which is the number in a pintsize cubic hand volume.
Now I have to get back to metric, so I multiply by 1880 because there are that many cubic hands in a cubic meter.
that gives me 5.15 E14, which I think is right.
But dextercioby, as you see in that thread, has a rather complicated calculation and comes up with 6E14, which I think is mistaken. It is easy to make a mistake because he has messy metric constants like hbar, and k.
In our units this ratio of units k/(hbar c) = 10 exactly.
So I have multiplied the room temperature 1040 by 10 before cubing.
I have no more trouble with constants.
but he is using messy metric versions so he must look up the values and calculate some messy number for k/(hbar c). It will not be a simple 10!
Or he has to do some equivalent bother at some other point in the calculation.
We have room for a slight disagreement about room temp. The conventional metric figure is 293 kelvin. But I find 293 slightly too cold for comfort and prefer 294----this is why I used 1040.
To make everything comparable to dextercioby I should use 1037 degrees.
$$\frac{1}{2.701}\times \frac{\pi^2}{15} \times 10370^3$$
then I get 5.11E14 instead of 5.15E14. So it does not make much difference. But just to be careful.
------footnotes-------
the number 2.701 is interesting, it comes from the zeta function
$$2.701 = \frac{3\zeta(4)}{\zeta(3)}$$
in thermal glow of temp T the average photon energy is 2.701 kT.
the main formula i used, if you put back in the k, hbar, and c, would be:
$$\frac{1}{2.701}\times \frac{\pi^2}{15} \times( \frac{kT}{\hbar c})^3$$
but since k/(hbar c) has the value 10, one can simply multiply the temp by 10 and cube, which is what i did.
You might want to look at dextercioby calculation in the other thread.
It is very professional and comparatively elaborate. the difference is instructive about the system of units and the associated way of thinking.
Last edited: Jan 2, 2005
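As a cross-check in ordinary metric units (a rough Python sketch; room temperature here is taken as 294 K, matching the 1040-degree figure):

```python
import math

k_B  = 1.381e-23      # J/K
hbar = 1.0546e-34     # J s
c    = 2.998e8        # m/s
T    = 294            # K, about 1040 degrees on the scale used above

zeta3, zeta4 = 1.2021, math.pi**4 / 90
mean_factor = 3 * zeta4 / zeta3                      # = 2.701

n = (1 / mean_factor) * (math.pi**2 / 15) * (k_B * T / (hbar * c))**3
print(n)          # ~5.15e14 photons per cubic meter
```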
4. Jan 2, 2005
### marcus
Just as a review, here is how the construction goes.
there is this force in nature FEins which is basic to how gravity works and how space is shaped. And there's a basic energy-time quantity hbar, and a basic speed c, and a basic charge e. We set these things equal to some powers of ten. And also the Boltzmann k which relates the temperature scale to the energy scale.
FEins = 10^43 marks
hbar = 10^-32 mark hand count
c = 10^9 hand per count
e = 10^-18 dram
k = 10^-22 mark hand per degree
the first equation defines the mark force unit. the next two define the hand and count (units of length and duration). the last two define the units of electric charge and temperature. The other types of units can be derived from these and are thus also determined.
The derived energy unit, mark hand---pushing with mark force for unit distance---is about 1/100 of a calorie or 1/25 of metric joule. I will call it a jot. The derived unit of mass is pound-size and will be called pound. The unit of volume, cubic hand, is pint-size. The unit of electric charge is about one sixth of a metric coulomb, and will be called a dram. The voltage unit deriving from this is about one quarter of a conventional volt and will be called a quartervolt, abbr. Q.
The temperature scale one gets from these assignments is an absolute scale with 1000 degrees being the same as 282.6 kelvin. It happens that 282.6 kelvin is close to 49 Fahrenheit. We take that temperature as a point of reference. the degree is about half a Fahrenheit step.
some temperature benchmarks:
1000 approx average earth surface temp (serendipitous)
1040 room temperature
1100 approx. normal body temp
1320 boiling (at normal atmospheric pressure)
1600 moderate oven (350 for Fahrenheit cooks)
20000 approx. surface temp of sun
it is admittedly awkward not to have a relative scale like Celsius with a special zero at freezing. But this is an absolute scale and as such will have to do.
Last edited: Jan 2, 2005
5. Jan 2, 2005
### marcus
a little about speed and the speed of sound in cold air
the unit speed, 1 hand per count, is a billionth of the speed of light, so it is 2/3 mph (for people who think miles per hour) and 0.3 meters per second (for those who dont)
10 hand per count-----6.7 mph
100 hand per count---67 mph
1000 hand per count--670 mph, which is the speed of sound at typical aircraft cruising altitudes
10000 hand per count--earth orbit speed (30 km per second)
I want to make a point about the speed of sound and the weight of a proton.
In our units we have a mass unit called pound which is 434 grams (determined by the values given the constants earlier) and that means that protons are 2.6E26 to the pound. this is an important and useful number and it's lucky that it is easy to remember (because of a chance numerical "rhyme").
That means that because air molecules, on average, have atomic weight 29, the air molecule mass is
$$\frac{29}{2.6\times 10^{26}} \text{ pound}$$
The speed of sound in air is:
$$\sqrt{\frac{7}{5}\times \frac{kT}{ \text{mass of molecule} }}$$
It is neat how it depends on very little besides the temperature of the air. And let's do a sample calculation for very cold air that has T = 800 degrees. That is actually a typical temperature 6 miles up, above the clouds and convection layer. One way to think of it is to say it is 200 degrees less than the reference temperature (1000 degrees = 49 Fahrenheit) and that those 200 degrees are each about half the size of F steps.
Or you can just multiply 800 by 0.2826 and get kelvin.
anyway, remember that in our units the value of k is E-22, so that for T = 800 we have kT = 8E-20, and let's calculate the speed
$$\sqrt{\frac{7}{5}\times \frac{8\times 10^{-20} \times 2.6\times 10^{26}}{ 29} }= \sqrt{\frac{7}{5}\times \frac{8 \times 2.6\times 10^6}{ 29} }$$
and that turns out to be 1000 hands per count----a millionth of the speed of light
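The same figure can be cross-checked in metric units (a rough Python sketch; 800 degrees on this scale is about 226 kelvin):

```python
import math

k_B = 1.381e-23      # J/K
amu = 1.6605e-27     # kg
T   = 800 * 0.2826   # 800 degrees on this scale, ~226 K

m_air = 29 * amu                          # mean molecular mass of air
v = math.sqrt(7 / 5 * k_B * T / m_air)    # adiabatic speed of sound

print(v)              # ~301 m/s
print(v / 0.2998)     # ~1000 in hands per count (1 hand/count = c/1e9 = 0.2998 m/s)
```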
6. Jan 2, 2005
### marcus
talking to dextercioby on the other thread, a kind of nice problem came to mind
I guess I thought of it but it certainly helps to have someone to converse with who takes casual dares and calculates stuff!
the problem is this
how massive should a black hole be for it to glow with green light---that is, for the Bekenstein-Hawking temperature T_BH
to be such that 2.701 k T_BH is the energy of a green photon.
that would make green photons average in the hawking glow of the hole.
well, in our system we have
hbar x c = 10 eQ microhand
and the ang. wavelength of green light is 1 microhand and the
photon energy of green light is 10 eQ (ten electronquartervolts)
and the eQ is exactly 10^-18 of the macroscopic energy unit (provisionally called "jot" because small, like 1/25 of a joule)
So I have to get the hole's mass M right so that the average photon energy is 10^-17 jot
Here's what i calculated on the other thread:
The main conclusion is that the mass of a green black hole is
2.701 x 10^19 pounds (which you could call 27 trillion pounds)
the reason for saying trillion is that in continental europe they say trillion for 10^18
I am still amazed that the green black hole is so massive. I keep expecting someone to drop into this or the other thread and show me how I made some careless mistake. I am used to thinking of holes that are hot enough to glow visible as being very small.
I wonder how big this green black hole is?
Last edited: Jan 2, 2005
7. Jan 2, 2005
### marcus
the size of a green-glowing black hole
OK that can be another easy exercise. the aim of the thread is to test-drive the system by doing easy exercises.
We have a mass 2.701 E19 pounds
and I am going to calculate the Schwarzschild GIRTH of that mass.
the circumference, the 2 pi R of the thing.
It happens that in our system it is always (1/2) E-25 times the mass.
so we multiply 2.701 E19 X (1/2) E-25 = 1.35 E-6
wow!
that is 1.35 microhand
a microhand is the angular wavelength of green light!
the little mother is so small that it is down near the size of the wavelength.
How can it radiate green light at all?
Well, maybe someone here can explain. I have just naively applied the famous BH temperature formula and got the mass that you need for the thing to be hot enough that the thermal radiation from it would be average green. It turns out that the mass was bigger than I imagined it would be and that the size is smaller than I expected and Hawking radiation turns out to be more of a puzzle than I had realized.
It is supposed to be regular old thermal radiation for the given temperature, but how can that be if the radiator is so small? Anybody want to help reconcile all this?
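In case I did slip somewhere, here is the whole check redone in ordinary SI units (a Python sketch with rounded constants and my own names; it just applies the Hawking temperature formula and the Schwarzschild radius):

```python
from math import pi

hbar = 1.0546e-34   # J s
c    = 2.998e8      # m/s
G    = 6.674e-11    # m^3 kg^-1 s^-2

lam_bar = 84e-9                       # angular wavelength of green light, ~530 nm / (2 pi)
E_green = hbar * c / lam_bar          # energy of a green photon

# require 2.701 k T_BH = E_green, with k T_BH = hbar c^3 / (8 pi G M)
M = 2.701 * hbar * c**3 / (8 * pi * G * E_green)
print(M, "kg")              # about 1.2e19 kg
print(M / 0.434, "pounds")  # about 2.8e19 of the 434 gram pounds, close to the 2.701e19 above

girth = 4 * pi * G * M / c**2         # Schwarzschild circumference 2 pi r_s
print(girth * 1e9, "nm")    # about 110 nm, i.e. roughly 1.4 microhands of 81 nm each
```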
8. Jan 2, 2005
### marcus
Part of a draft essay, maybe could use here:
You could say the premise of these units is that the coefficient in the 1915 Einstein equation is a real force. Or the reciprocal of a force.
the einstein equation is our main model of how gravity works and how matter curves spacetime. It says that if you divide the energy density by the force, you get the curvature. What makes space curved is the density of energy in a region and the way to find the resulting curvature is to take that energy density and divide by a certain universal constant force.
curvature is measured as a reciprocal area and it is a fact of life that if you divide any energy density (energy per unit volume) by a force you get one over some area.
$$\text{curvature} = \frac{1}{F_{Eins}}\text{energy density}$$
the curvature is actually a tensor made up of a lot of curvatures in several directions and written G_ab and the energy density is also a tensor made up of lots of terms which are all equivalent to energy densities, and it is called
T_ab
So naturally the equation looks like this:
$$G_{ab} = \frac{1}{F_{Eins}}T_{ab}$$
the thing to keep your eye on is the central coefficient because that is one over the Einstein force and it is at the heart of how gravity works and how matter and energy curves space and how the shape of the universe evolves.
this force F_Eins is about 50 trillion trillion trillion tonnes of force.
what that means is that what we think of as a lot of concentrated mass-energy only curves space a little bit (by our standards) because to get the curvature you are dividing by a force which is large (by our standards).
We can write F_Eins in terms of the newtonian gravity constant G_Newt and the speed of light.
$$F_{Eins} = \frac{c^4}{8\pi G_{Newt}}$$
If you like using a calculator and know the speed of light and G_Newt in metric terms, you can work it out and you will get some big number of metric newton force units, basically amounting to 50E36 tonnes force.
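If the calculator is a computer, the whole thing is a couple of lines (Python sketch, rounded constants):

```python
from math import pi

c = 2.998e8       # m/s
G = 6.674e-11     # m^3 kg^-1 s^-2

F_eins = c**4 / (8 * pi * G)
print(F_eins, "newtons")               # about 4.8e42 N
print(F_eins / 9.81 / 1000, "tonnes")  # about 5e38 tonnes force, i.e. 50e36 tonnes
```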
So now let's put that into the Einstein equation to see what it looks like in the textbooks:
$$G_{ab} = \frac{8\pi G_{Newt}}{c^4}T_{ab}$$
One last shocker, the Relativists, the professionals who do General Relativity for a living, use their own private convenience system of
non-metric units in which the value of this force is ONE, so the coefficient in the Einstein equation completely disappears, and in their books and journals what you often see is simply:
$$G_{ab} = T_{ab}$$
Shall we do like them, and make that be our unit of force?
Last edited: Jan 2, 2005
9. Jan 2, 2005
### Kea
green cheesy smile
Marcus and Dex.
As I'm sure you know, using
$$kT = \frac{\hbar c^{3}}{8 \pi G M}$$
$$r_{S} = \frac{2 G M}{c^{2}}$$
and then comparing to photon energy, one obtains
$$\frac{\lambda}{2 \pi r_{S}} = \frac{2}{2.701}$$
maybe modulo some factor of pi which is an easy mistake....so I'm
not surprised that the size of the hole matches the wavelength, no
matter what fancy units suitable for Martians that you may have
decided were nice....
Kea
10. Jan 2, 2005
### marcus
Green cheesey smile
however these two smilies look like yellow butter balls, not green cheeses.
You could use
you have cleverly pointed out that ultimately units do not make any difference.
(but in various different specialties, physicists are always tinkering with units, to get cleaner equations or for whatever reason: maybe they tinker for no practical purpose, because they enjoy the activity, as parrots do)
Here is a problem for both of you, Kea and Dex:
John Baez in the spacecraft.
John Baez has been exploring the galaxy and on this occasion he discovers that he is in low circular orbit skimming over the surface of a small round planet which appears made of one uniform material.
After he has traveled one radian (1/2pi of the circumference) he looks at the ship's clock and sees that it has taken 31.5 minutes.
Question: What might the planet be made of?
Could it, for instance, be made of green cheese?
11. Jan 3, 2005
### marcus
John Baez in the spacecraft
time units in our system are 222 to the minute, 31.5 minutes is 7000 counts.
In any system of units, metric included if you wish, if T is the minimal radian time of a low orbit then the reciprocal density of the planet is given by
$$\text{reciprocal density } = \frac{4\pi G}{3}T^2$$
In our case this is particularly simple to evaluate because 8 pi G is E-7, so we have the reciprocal density (in cubic hand per pound) given by
$$\text{reciprocal density } = \frac{10^{-7}}{6}T^2 = \frac{49}{60}$$
Baez sagely observes that the density of the planet is 60/49 or about 1.225
pounds per cubic hand, which is the density of water.
The planet is a great shimmering drop of water. As a Californian of course Baez has his swimming trunks handy, so after a moment's preparation he leaps from the spacecraft to enjoy a swim.
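Here is the same computation spelled out in a few lines of Python, first in the thread's units and then converted to metric as a check (a sketch; the 0.434 kg pound and the roughly 8.1 cm hand are the conversions given earlier):

```python
# Low circular orbit: 1/density = (4 pi G / 3) * T^2, with T the radian time.
# In the thread's units 8 pi G = 1e-7, so 4 pi G / 3 = 1e-7 / 6.
T = 31.5 * 222                      # 31.5 minutes at 222 counts per minute, about 7000 counts
recip_density = 1e-7 / 6 * T**2
density = 1 / recip_density
print(density, "pounds per cubic hand")        # about 60/49, i.e. 1.22

print(density * 0.434 / 0.081**3, "kg/m^3")    # about 1000 kg/m^3, which is water
```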
12. Jan 3, 2005
### Kea
...which would be foolish, because I calculate that the density is around
1000 kg/m^3 (check please) which indicates water...except that correcting for the error in orbital radius would mean that the actual density was a little less.....oops, no, more.....maybe something like acetate....yuk (let us assume that John has checked the atmospheric conditions).
I assumed, as befits John Baez, that exploration of the galaxy requires a quantum gravitational spacecraft, which is of course not ever quite sure where it is going or where 'where' is .... and inadvertently John Baez has directed it to a planet which appears to made of (dense) green cheese with probability close to 1 on the large number of observations that John Baez's spacecraft has already made of the surface.
Use
$$\frac{G M}{r^{2}} = \frac{v^{2}}{r}$$ and the fact that the volume of a sphere is $$\frac{4}{3} \pi r^{3}$$
to derive Marcus' formula.
Last edited: Jan 3, 2005
13. Jan 3, 2005
### marcus
serious problem---count the beans of electricity
In some smalltown general stores they would put out a jar of beans and the customers got to guess-----then they would count and the person who guessed closest would get the prize.
Maybe they still do this in New Zealand but in California we have the state lottery so we dont need that old bean stuff.
now here we have two insulated balls with the charge distributed equally between them and we fix them 8 centimeters apart and you gauge the repulsion between them and decide that it is
THREE AND 1/2 NEWTONS
Anybody who knows metric units should kind of know what that feels like, it is about 4/5 of the weight of an oldfashioned conventional pound---which weighed about 4.4 newtons. And a kilogram weighs 9.8 newtons so this force is roughly a third of the weight of a kilogram. So everybody should have a real clear idea of the force between these two balls, shoving them apart.
Now i challenge you. If you think you understand the metric system and have some physics intuition, tell me HOW MANY extra ELECTRONS are on each ball? It is the same number extra on each one. It's two equal negative charges.
for people who want to try using the alternate units, the force is 7.3 mark (that is the equivalent to the 3.5 newtons we said earlier)
BTW Kea thanks for the company! and the confirmation of the formula
I think Baez called one of his friends on the cell phone just before he jumped in for a swim. the friend has a sailboat and will pick him up and they can spend a few days fishing. fishing and sailing is about all they can do on this planet.
Last edited: Jan 3, 2005
14. Jan 3, 2005
### marcus
People who know metric units presumably do not need any help because it is a straightforward problem.
For anyone who might wish to try out the alternate units, remember that the unit charge (dram) is the charge of 10^18 electrons.
And I will tell you that the COULOMB CONSTANT that converts charge and separation into force is
$$\frac{1}{137} \times 10^{13}\text{ mark sq.hand per sq.dram}$$
where I am referring to a special number alpha = 1/137.036....
which we approximate 1/137.
Ordinarily, to find the force you multiply the two charges (in drams) and divide by the square of the distance (in hands) and multiply by this
$$\frac{1}{137} \times 10^{13}$$
and that will turn out to be the force (in marks)
But here it is slightly different because you know the force and must work back to find the two equal charges.
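To make the forward direction concrete, here is the recipe as code (a Python sketch with my own names; it computes force from charges, and the bean-counting problem is then a matter of inverting it):

```python
# Force in marks between charges q1, q2 in drams separated by r in hands,
# using the Coulomb coefficient (1/137) x 1e13 quoted above.
COULOMB = 1e13 / 137

def force_marks(q1_drams, q2_drams, r_hands):
    return COULOMB * q1_drams * q2_drams / r_hands**2

# for instance, a millionth of a dram on each ball, 8 cm (roughly one hand) apart:
print(force_marks(1e-6, 1e-6, 1.0))   # about 0.073 marks -- much less than the 7.3 given
```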
Last edited: Jan 3, 2005
15. Jan 5, 2005
### Gamish
I think we should make a standard time unit that fits into the metric system. But other than that, I hate the American standard, I like the metric system. Sorry if this may be a bit irrelevant :yuck:
http://mathhelpforum.com/pre-calculus/13362-1-graph-ellipse-2-subtracting-polynomials.html | # Thread: #1 Graph ellipse. #2 Subtracting polynomials.
1. ## #1 Graph ellipse. #2 Subtracting polynomials.
Stuck on two questions that were in a review section.
1) How can I graph the ellipse: x^2/4 + y^2/36 = 1?
2) How do I solve for x: 3x/2 - 3/4 = x/8?
Here's what I've done so far:
3x(4)/2 - 3(2)/4 = x/8
12x/2 - 6/4 = x/8
48x/8 - 12/8 = 4x/32.........this is where I got stuck when trying to get all the denominators the same.
2. Originally Posted by Mulya66
2) How do I solve for x: 3x/2 - 3/4 = 8/x?
Here's what I've done so far:
3x(4)/2 - 3(2)/4 = x/8
12x/2 - 6/4 = x/8
48x/8 - 12/8 = 4x/32.........this is where I got stuck when trying to get all the denominators the same.
Hint: Multiply everything by 8x. (See what happens).
3. Originally Posted by Mulya66
Stuck on two questions ...
3x/2 - 3/4 = 8/x
12x/2 - 6/4 = x/8
Hello,
you unfortunately turned over the fraction...
Use the TPH's hint...
EB
4. Hello, Mulya66!
Graph the ellipse: x²/4 + y²/36 = 1
First of all, the ellipse is centered at the origin.
Recall that the denominators give you the "dimensions" of the ellipse.
Since a² = 4, a = ±2.
The ellipse extends 2 units to the left and right of center.
Since b² = 36, b = ±6.
The ellipse extends 6 units up and down from the center.
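If you would rather let a computer draw it, a short matplotlib sketch does the job (my own addition; it assumes numpy and matplotlib are available):

```python
import numpy as np
import matplotlib.pyplot as plt

# x^2/4 + y^2/36 = 1, parametrised as x = 2 cos t, y = 6 sin t
t = np.linspace(0, 2 * np.pi, 400)
plt.plot(2 * np.cos(t), 6 * np.sin(t))
plt.gca().set_aspect("equal")
plt.grid(True)
plt.show()
```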
2) Solve: 3x/2 - 3/4 = 8/x
Multiply through by the LCM, 4x:
4x·(3x/2) - 4x·(3/4) = 4x·(8/x), which gives 6x² - 3x = 32
5. Originally Posted by Soroban
Hello, Mulya66!
First of all, the ellipse is centered at the origin.
Recall that the denominators give you the "dimensions" of the ellipse.
Since a² = 4, a = ±2.
The ellipse extends 2 units to the left and right of center.
Since b² = 36, b = ±6.
The ellipse extends 6 units up and down from the center.
Oh okay, so a^2 is for left and right and b^2 is up and down.
#2 was actually x/8, sorry. But last night, I got x = 6/11 ~ 0.5455.
Thanks a lot, everyone. =) | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9280067682266235, "perplexity": 2291.6166222681704}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719416.57/warc/CC-MAIN-20161020183839-00014-ip-10-171-6-4.ec2.internal.warc.gz"} |
http://mathhelpforum.com/algebra/134048-counting-problem.html | # Math Help - counting problem
1. ## counting problem
How do I show that 8 people played in the tournament to the following problem:
At a chess tournament, each person played exactly 1 game with every other person. 28 games were played. How many people played in the tournament?
2. Hello sarahh
Originally Posted by sarahh
How do I show that 8 people played in the tournament to the following problem:
At a chess tournament, each person played exactly 1 game with every other person. 28 games were played. How many people played in the tournament?
If there are $n$ players in the tournament, and each player plays 1 game against everyone else, then the number of different games is the number of ways of choosing $2$ from $n$; which is:
$\binom{n}{2} = \tfrac12n(n-1)$
Equate this to $28$; solve the quadratic equation in $n$, and ignore the negative root.
3. Hi Grandad--that certainly works! I'm not sure how you deduced the formula though.. I thought of doing the ordered pairings but if you didn't know it was 8 it would be hard to deduce--is there a more elementary way of solving it too?
4. Hello sarahh
Originally Posted by sarahh
...I'm not sure how you deduced the formula though..
If everyone plays everyone else, then all possible pairs of players must be included in the set of games. So we simply need the number of possible pairs that can be formed from the available players.
If there are $n$ players, the number of ways of choosing the first player to make a pair to play one another is $n$; and, since there are then $(n-1)$ players left to choose from, there are $(n-1)$ ways of choosing his/her opponent. So the number of possible ways of choosing two players in order is $n(n-1)$.
But the order doesn't matter: (A, B) is the same match as (B, A). Since there are $2$ ways of arranging the two players, each pair in the $n(n-1)$ choices will occur twice. So we must divide by $2$ to get the number of different matches.
This is simply what $\binom{n}{2}$, or ${^nC_2}$, means: the number of ways of choosing $2$ from $n$. And, of course, the formula for this is:
$\binom{n}{2}=\frac{n!}{2!(n-2)!}$
$=\tfrac12n(n-1)$
I thought of doing the ordered pairings but if you didn't know it was 8 it would be hard to deduce--
I don't think I've assumed it's $8$, have I? I let it be $n$, and then set up an equation, the solution of which was $n = 8$.
is there a more elementary way of solving it too?
Trial and error, perhaps? Start with 2 players: that's 1 match. Then 3 players: 3 matches. And so on... until you get 28 matches. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 23, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8544664978981018, "perplexity": 344.4631713323772}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207930866.66/warc/CC-MAIN-20150521113210-00124-ip-10-180-206-219.ec2.internal.warc.gz"} |
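The trial-and-error route is also a one-liner on a computer (Python sketch):

```python
# count upwards until n(n-1)/2 reaches 28 games
n = 2
while n * (n - 1) // 2 != 28:
    n += 1
print(n)   # 8
```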
http://mathhelpforum.com/algebra/172579-question-laws-exponents-print.html | # Question on Laws of Exponents
• February 25th 2011, 07:06 AM
Substince
Question on Laws of Exponents
Ok I'm at the end of my lesson and I have to prove if an expression is true or false.
eg: $(x+y)^{1/2} = x^{1/2}+y^{1/2}$
I know if it were $(xy)^{1/2}$ this would be equal to $x^{1/2}y^{1/2}$ but the addition sign is throwing me off.
I've looked at my rules and can't find anything like $(x+y)^2$ laws just the power of the product.
I hope I'm making sense :P
Thanks
• February 25th 2011, 07:14 AM
Prove It
It's not true. You can check using the Binomial Series...
• February 25th 2011, 07:16 AM
earboth
Quote:
Originally Posted by Substince
Ok I'm at the end of my lesson and I have to prove if an expression is true or false.
eg: $(x+y)^{1/2} = x^{1/2}+y^{1/2}$
I know if it were $(xy)^{1/2}$ this would be equal to $x^{1/2}y^{1/2}$ but the addition sign is throwing me off.
I've looked at my rules and can't find anything like $(x+y)^2$ laws just the power of the product.
I hope I'm making sense :P
Thanks
1. Re-write the expression as:
$\sqrt{x+y}=\sqrt{x} + \sqrt{y}$
2. Square both sides and you'll see that this equation is only true if x = 0 or y = 0
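A single numerical counterexample also settles it (Python):

```python
x, y = 1.0, 1.0
print((x + y) ** 0.5)        # 1.414..., i.e. sqrt(2)
print(x ** 0.5 + y ** 0.5)   # 2.0, so the two sides differ
```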
• February 25th 2011, 07:19 AM
Substince
wonderful thank you guys! | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8191377520561218, "perplexity": 557.7643366217656}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398455135.96/warc/CC-MAIN-20151124205415-00266-ip-10-71-132-137.ec2.internal.warc.gz"} |
http://mathhelpforum.com/pre-calculus/152154-partial-fraction-decomposition.html | Thread: Partial Fraction Decomposition.
1. Partial Fraction Decomposition.
I'm having trouble solving the following:
3x^2 / x^3+14x^2+65x+100
I don't know what to do with the constant 100. So far i have obtained for the denominator, x(x^2+14x+65)+100 but unsure what to do for the next step.
2. That's not how you factorise the denominator.
Find the positive and negative factors of $100$ and substitute each of them into the polynomial in the denominator.
If any of these factors, call it $a$, makes the polynomial $= 0$, then $x - a$ is a factor.
Once you have that factor of the polynomial, long divide and factorise whatever is left.
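That search is easy to hand to a computer as well; a rough Python sketch of the same procedure (names are mine):

```python
def divisors(n):
    n = abs(n)
    return [d for d in range(1, n + 1) if n % d == 0]

def p(x):
    return x**3 + 14*x**2 + 65*x + 100

# try every positive and negative factor of the constant term 100
roots = [a for d in divisors(100) for a in (d, -d) if p(a) == 0]
print(roots)   # [-4, -5], so (x + 4) and (x + 5) are factors
```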
3. Originally Posted by nando29
I'm having trouble solving the following:
3x^2 / x^3+14x^2+65x+100
I don't know what to do with the constant 100. So far i have obtained for the denominator, x(x^2+14x+65)+100 but unsure what to do for the next step.
I hope u wrote this there :
$\displaystyle \frac {3x^2}{x^3+14x^2+65x+100}$
$x^3+14x^2+65x+100=0$
so u have
$x_1=-4$
$x_2=-5$
$x_3=-5$
meaning that $x^3+14x^2+65x+100=(x+4)(x+5)^2$
$\displaystyle \frac {3x^2}{x^3+14x^2+65x+100} =\frac {3x^2}{(x+4)(x+5)^2}$
P.S. u can use Horner's rule (or method or schema) to do this
4. thank you people for helping me out. i google/youtube Horner's rule and confused myself more. they provide the definition but it makes no sense to me without a well explained example. could someone provide an example of Horner's rule-formula in action. btw, my precalculus class didn't mention horners rule so this is new to me.
5. Originally Posted by nando29
thank you people for helping me out. i google/youtube Horner's rule and confused myself more. they provide the definition but it makes no sense to me without a well explained example. could someone provide an example of Horner's rule-formula in action. btw, my precalculus class didn't mention horners rule so this is new to me.
check this one
YouTube - Horner Schema
or u would like to show u on this example how to use it ?
$x^3+14x^2+65x+100=0$ ?
6. For a Partial Fractions decomposition of
$\frac{3x^2}{(x + 4)(x + 5)^2}$
try
$\frac{A}{x + 4} + \frac{B}{x + 5} + \frac{C}{(x + 5)^2} = \frac{3x^2}{(x + 4)(x + 5)^2}$.
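If you want to check the final answer, sympy will do the whole decomposition (a sketch, assuming sympy is installed):

```python
from sympy import symbols, apart

x = symbols('x')
print(apart(3*x**2 / ((x + 4)*(x + 5)**2), x))
# 48/(x + 4) - 45/(x + 5) - 75/(x + 5)**2, up to the order of the terms
```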
7. Originally Posted by yeKciM
check this one
YouTube - Horner Schema
or u would like to show u on this example how to use it ?
$x^3+14x^2+65x+100=0$ ?
yeahhh all the horner schema videos are in german and i have no clue what's happening lol. i do know how to solve a partial fraction decomposition problem once the denominator/numerator is factored if needed.
what isn't so clear is how the denominator was factored for my question. i can see the ans is right but i would like to know how was the formula of horner schema (which isn't clear to me due to videos and the complicated definition of it) used to solve my question.
8. Originally Posted by nando29
what isn't so clear is how the denominator was factored for my question. i can see the ans is right but i would like to know how was the formula of horner schema (which isn't clear to me due to videos and the complicated definition of it) used to solve my question.
Factorisation of polynomials is a technique all of itself. If you're out here in the areas of mathematics where partial fraction expansion is needed, then you ought to be confident at factorisation. I can't immediately lay my hands on a manual which will give you lots of practice at it, I recommend you ask your teacher / mentor / tutor to point you towards some sources.
The first thing we did when I started my A-level mathematics (that's equivalent of American upper high-school, I believe) was spend a week or two doing nothing but factorise ever more complicated polynomials. Wow. I was in heaven, I can tell you - for the first time in my life I was able to do mathematics for hours on end, all day in fact, on occasion.
9. Originally Posted by nando29
yeahhh all the horner schema videos are in german and i have no clue what's happening lol. i do know how to solve a partial fraction decomposition problem once the denominator/numerator is factored if needed.
what isn't so clear is how the denominator was factored for my question. i can see the ans is right but i would like to know how was the formula of horner schema (which isn't clear to me due to videos and the complicated definition of it) used to solve my question.
See my post immediately below your original post...
10. Originally Posted by nando29
yeahhh all the horner schema videos are in german and i have no clue what's happening lol. i do know how to solve a partial fraction decomposition problem once the denominator/numerator is factored if needed.
what isn't so clear is how the denominator was factored for my question. i can see the ans is right but i would like to know how was the formula of horner schema (which isn't clear to me due to videos and the complicated definition of it) used to solve my question.
let's say it like this (sorry for late response, i was to busy)
if u have $P_n(x)$ and u need it to divide it with $(x-x_1)$ u'll get something like this ( not something, this u'll get )
$\displaystyle P_n(x)=(x-x_1)(b_0x^{n-1}+b_1x^{n-2}+ . . . +b_{n-1}) + R$
to calculate $b_0 ; \; b_1 ; \; . . . ; \; b_{n-1}$ u can use Horner's scheme
$\displaystyle \begin{array}[b]{c||c|c|c|c|c|}
x_1&a_0 &a_1 &a_2 & . . . &a_n\\ \hline
& b_0=a_0 & b_1=x_1b_0+a_1 & b_2=x_1b_1+a_2 & . . . & R=x_1b_{n-1}+ a_n \\
\end{array}$
okay let's say now u need to divide
$P_5(x)=2x^5-6x^4-17x^3+x-4$
with
$(x-5)$
now u'll have that your $x_1=5$ because of this $(x-5)$ and
$a_0=2$, $a_1=-6$, $a_2=-17$, $a_3=0$, $a_4=1$, $a_5=-4$
and now as u see these $a_0, \; a_1 . . .$ and so on are coefficients...
now u can make a Horner's scheme...
$\displaystyle \begin{array}[b]{c||c|c|c|c|c|c|}
x_1=5&a_0=2 &a_1 =-6&a_2 =-17 & a_3=0 & a_4=1 &a_5=-4\\ \hline
& b_0=2 & b_1=4 & b_2=3 & b_3=15& b_4=76 &R=376 \\
\end{array}$
so u get that :
$P_5(x)=(x-5)(2x^4+4x^3+3x^2+15x+76)+376$
now if u want to let's say find zeros... U have something like
$P_3(x)=2x^3+3x^2-1$
and u need to find zeros. U'll do it like this
if equation like this one have a rational root $\frac {p}{q}$ where $(p \in \mathbb{Z} , \; q \in \mathbb {N}) \;\;\; (p,q)=1$ then $p$ is a divisor of the free (constant) term, and $q$ is a divisor of the leading coefficient ...
so for our $P_3(x)=2x^3+3x^2-1$ we have :
$p$ divides $-1$ and $q$ divides $2$, so it's
$p=\{\pm1\}$ and $q=\{\pm1, \pm2\}$
now u need to check first zero meaning that u put those solutions and see for which will be R=0
$P_3(-1) =0$
so u have first zero and now u can use scheme so u don't have R any more
$\displaystyle \begin{array}[b]{c||c|c|c|c|}
x_1=-1&a_0=2 &a_1 =3&a_2 =0 & a_3=-1 \\ \hline
& b_0=2 & b_1=1 & b_2=-1 & b_3=0 \\
\end{array}$
so u have now that :
$P_3(x)=2x^3+3x^2-1=(x+1)(2x^2+x-1)$
u can continue using Horner's schema or quadratic formula since u got these what we got
now if u need let's say to express $P_5(x)=x^5-2x^4+3x^3-4x^2+2x+5$ by the $(x-1)$ again u can use Horner's schema ....
it's same procedure like before so i'll just put table hope u'll get it
$\begin{array}[b]{c||c|c|c|c|c|c|}
x_1=1&a_0=1 &a_1 =-2&a_2 =3 & a_3=-4 &a_4=2 &a_5=5\\ \hline
& 1 & -1 & 2 & -2 &0 &5 \\ \hline
& 1 & 0 & 2 & 0 & 0 \\ \hline
& 1 & 1 & 3 &3\\ \hline
& 1 & 2 & 5\\ \hline
& 1 & 3 \\ \hline
&1 \\
\end{array}$
and finally u got :
$P_5(x)=x^5-2x^4+3x^3-4x^2+2x+5= (x-1)^5+3(x-1)^4+5(x-1)^3+3(x-1)^2+5$
hope i helped if not, say where's the problem
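For anyone who would rather read code than tables, the same scheme fits in a few lines of Python (my own sketch and function name):

```python
def horner_divide(coeffs, x1):
    """Divide a polynomial (coefficients from highest power down) by (x - x1).
    Returns the quotient coefficients and the remainder."""
    b = [coeffs[0]]
    for a in coeffs[1:]:
        b.append(x1 * b[-1] + a)
    return b[:-1], b[-1]

# the first example above: 2x^5 - 6x^4 - 17x^3 + x - 4 divided by (x - 5)
print(horner_divide([2, -6, -17, 0, 1, -4], 5))   # ([2, 4, 3, 15, 76], 376)

# and the zero check: 2x^3 + 3x^2 - 1 at x = -1 leaves remainder 0
print(horner_divide([2, 3, 0, -1], -1))           # ([2, 1, -1], 0)
```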
11. @yeKciM
for the following: "and now as u see these and so on are coefficients...
now u can make a Horner's scheme... "
i understand how you got a_0 through a_5.
im not sure on this part. you wrote " b_0 - a_1 ". b_0 =2 , a_1 = -6 therefore 2 - (-6) = 8. so then what makes sense to me to get your ans (which is b_1=4) is the following:
x_1 =5 , b_0=2 so 5*2 = 10. 10-(-6) = 16 =/= 4.
im not sure how to find b_2=9. the formula says x_1(b_1) - a_2= so we have 5*4 = 20 - (-17) = 37.
a_2 = -17
i understand this part:
so u have first zero and now u can use scheme so u don't have R any more
i understand how you obtained row (1,-1,2,-2,0,5) but not the ones thereafter-- im completely lost for those remaining.
I corrected those few errors up there sorry... It was pretty late here when i done that
(it's $b_1=x_1b_0+a_1$ not minus....
as for that last one u do it the same way as u done that one (u say u understand) it's just when u get that row (1,-1,2,-2,0,5) that numbers will be your $a_0, \; a_1, \; . . .$ and then u do work for next row and so on ... when u do each row your $a_0, \; a_1, \; . . .$ are the row above that one u are working on and after u finish just take numbers from the diagonal and ur done
sorry again for bad typo
13. OHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH HHH. LOL
everything makes sense now. thank you for helping me out | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 53, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8767151236534119, "perplexity": 777.4398107955802}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738663010.9/warc/CC-MAIN-20160924173743-00198-ip-10-143-35-109.ec2.internal.warc.gz"} |
http://mathhelpforum.com/number-theory/25248-bernoulli-numbers.html | ## Bernoulli numbers
Say we try to define the Bernoulli numbers using Faulhaber's formula...
$\sum_{k=0}^{m-1}k^n=\frac{1}{1+n}\sum_{k=0}^n\binom{n+1}{k}B_k m^{n+1-k}$
how do we show that the values $\{B_n\}$ are uniquely defined? That is, in the coefficients of $m^{n+1}, m^n, m^{n-1}\cdots$, $\frac{B_0}{1+n}, B_1, B_2 \frac{n}{2}, \cdots$ the values $B_0, B_1, B_2, \cdots$ will always be the same regardless of the power $n$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9913234114646912, "perplexity": 175.121248575118}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500550977093.96/warc/CC-MAIN-20170728183650-20170728203650-00221.warc.gz"} |
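Not an answer to the uniqueness question, but here is a quick numerical sanity check of the formula, with the $B_k$ generated from the standard recurrence (a Python sketch; the convention $B_1=-\tfrac12$ is the one that matches the formula as written):

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(N):
    # B_0..B_N from sum_{j=0}^{n} C(n+1, j) B_j = 0 for n >= 1, with B_0 = 1
    B = [Fraction(1)]
    for n in range(1, N + 1):
        B.append(-sum(comb(n + 1, j) * B[j] for j in range(n)) / (n + 1))
    return B

B = bernoulli_numbers(6)
print(*B)   # 1 -1/2 1/6 0 -1/30 0 1/42

# Faulhaber check for n = 3, m = 10: 0^3 + 1^3 + ... + 9^3 = 2025
n, m = 3, 10
rhs = sum(comb(n + 1, k) * B[k] * m**(n + 1 - k) for k in range(n + 1)) / (n + 1)
print(rhs, sum(k**n for k in range(m)))   # 2025 2025
```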
https://www.physicsoverflow.org/14314/space-as-flat-plane | # Space as "flat" plane
+ 1 like - 0 dislike
1322 views
I was watching the documentary Carl Sagan did about gravity (I believe it's quite old though) and wondered about space being "flat", with mass creating dents in this plane as shown at about 3 minutes into the clip.
Is this simply a metaphor or is it something more than that? Does gravity follow this just instead of being flat the dent is "rotated" in all directions?
This post imported from StackExchange Physics at 2014-04-09 16:16 (UCT), posted by SE-user Jonathan.
asked Feb 12, 2011
+ 1 like - 0 dislike
The picture by Sagan is somewhat of a simplification of the true geometry - so in a sense that picture is a metaphor. The interpretation given there of Einstein's equations is indeed that of curvature of 4 dimensional Space-Time (not just 3 dimensional space).
However in these higher dimensional geometries there is more than one notion of curvature: Scalar Curvature R; Ricci curvature; Riemann Curvature and Weyl Curvature. The Scalar curvature exists in all dimensions and is simply a function giving the curvature at each point. For an embedded (2-dim) sphere it is R$=2/r^2$ (twice the Gaussian curvature). So for a sphere it is non-zero everywhere on its surface. Higher dimensional spaces use these other Curvature objects (built like generalised matrices and called Tensors) to represent their curvature more precisely than a single number. In higher dimensions the Scalar represents a kind of "average curvature" (at each point).
Corresponding to these different notions of curvature, there are different notions of "flatness" as we will see below.
Now the Einstein equations directly equate the Ricci curvature to something; and in the example shown in the Sagan excerpt it was equated to zero. Ricci$= 0$ is the Einstein Vacuum equation alluded to in other answers, and is appropriate because outside a star there is a vacuum. This has an immediate mathematical consequence that R$=0$ ie the scalar is zero as well! So in this sense the vacuum is flat (called Ricci-flat).
However there is still curvature around in that space, so it is not Minkowski (ie Euclidean) flat. The experimental demonstration of this was the bending of light rays near the Sun (assuming as one does in Einstein's theory that light rays measure the "straight lines"). So where does the curvature come from if it is Scalar and Ricci flat? The answer is that it is not Riemann flat: the tensor Riemann$\neq 0$. However this does not quite explain the origin of curvature here. Expressed very loosely we have the following equation:
Weyl = Riemann - Ricci - R
So the real source of curvature in the Sagan excerpt is the Weyl component of the Riemann tensor: everything else is zero. Now we come to the representation problem that Sagan had: the Weyl tensor is always zero in two and three dimensions. In other words the kind of curvature it represents does not exist in two and three dimensions: only four and above dimensions have this kind of curvature. So it cannot be directly represented on a 2 or 3 dimensional picture.
Instead what Sagan appears to have represented here is the gravitational potential (like in Newton's theory) but expressed as space curvature. It is not completely wrong perhaps, but it is not quite correct and so is just a metaphor.
This post imported from StackExchange Physics at 2014-04-09 16:16 (UCT), posted by SE-user Roy Simpson
answered Feb 12, 2011 by (165 points)
Nicely put..+1
This post imported from StackExchange Physics at 2014-04-09 16:16 (UCT), posted by SE-user Gordon
+ 0 like - 0 dislike
Let's look at Einstein's field equations: $G_{\mu\nu}=8\pi T_{\mu\nu}$ where $G_{\mu\nu}=R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R$. The left side is the curvature of space (the metric). The right side is the stress-energy-momentum tensor, which is the totality of what is producing the curvature. $g_{\mu\nu}$ is the metric tensor, $R$ the scalar curvature, and $R_{\mu\nu}$ the Ricci curvature tensor, but the terminology doesn't matter here. Simply, the equation is saying that the curvature of space (the metric) is produced by what is in the space (pressure, energy etc)
John Wheeler, who always used colorful language said, "Matter tells space how to curve. Space tells matter how to move."
This post imported from StackExchange Physics at 2014-04-09 16:16 (UCT), posted by SE-user Gordon
answered Feb 12, 2011 by (30 points)
+ 0 like - 0 dislike
In the context of general relativity theory the following things happen:
In the absence of matter, spacetime remains flat. The relevant space is a 4 dimensional Minkowski space. It has some similarities with the Euclidean flat geometry you learnt in school. There are also important differences. In a Euclidean space the symmetry group is the Euclidean group, whereas in Minkowski space it is called the Poincare group. The former space has only space like dimensions, the latter has 3 space and 1 time like dimensions. However the 4 dimensional Minkowski space is also like a table top.
But if there is some mass energy the geometry of the surrounding spacetime no longer remains Minkowskian; it changes to a more general geometry (although locally within an infinitesimal region it remains Minkowskian). The geometry of Minkowski space corresponds to the geometry of a plane surface, and the latter more general geometry corresponds to the geometry of a curved surface. In this curved geometry the sum of the three angles of a triangle may be more than or less than $\pi$ depending on the curvature. This curvature does not itself change in the static case. A particle simply follows the straightest possible path; since the spacetime is itself curved, the particle appears to be moving in a curved path as if by a force.
This post imported from StackExchange Physics at 2014-04-09 16:16 (UCT), posted by SE-user user1355
answered Feb 12, 2011 by (85 points)
You conflate space-time and space in this answer. Some of those "Euclidean" should in fact be "Minkowski". Also, what space you get depends on the observer because different space-like slices of curved space-time can generally differ (contrary to the Minkowski space-time where you'll always get Euclidean space as a slice).
This post imported from StackExchange Physics at 2014-04-09 16:16 (UCT), posted by SE-user Marek
@Marek: You are right. In fact I made a deliberate attempt to avoid terms like "Minkowski space" and present the affairs in a simple manner assuming (perhaps unjustifiably) the questioner does not already know about special relativity. Secondly I thought putting things in this way may appeal to those members who might be interested in learning relativity but never studied even SR. Maybe, I have made oversimplifications.
This post imported from StackExchange Physics at 2014-04-09 16:16 (UCT), posted by SE-user user1355
Technically, we don't really know if the massless space-time is flat or curved, because of $\Lambda$, which can be thought to be non zero even in empty space.
This post imported from StackExchange Physics at 2014-04-09 16:16 (UCT), posted by SE-user Sklivvz
@Sklivvz: There is something called Einstein's vacuum equation, where we simply put $R_{\mu\nu} = 0$, i.e. the Ricci curvature vanishes.
This post imported from StackExchange Physics at 2014-04-09 16:16 (UCT), posted by SE-user user1355
https://cstheory.stackexchange.com/questions/38594/white-box-sparse-interpolation | # White-box sparse interpolation
Let $C$ be an arithmetic circuit that represents a polynomial $f\in\mathbb K[x_1,\dotsc,x_n]$, with the promise that $f$ has at most $k$ nonzero terms. What is (known about) the complexity of computing $f$ in its sparse representation, given $C$?
I am interested in deterministic and randomized complexity, and in the link with PIT. In particular, does the promise that $f$ is sparse imply good algorithms? A priori, I am more interested in the case of $\mathbb K$ being some finite field, though results over other fields may be relevant.
There are deterministic and randomized algorithms running in time $\mathrm{poly}(n,d,k)$, where $n$ is the number of variables, $d$ is the degree and $k$ is the sparsity. AFAIK, the results are stated for characteristic zero fields but work over any field large enough (again, polynomially large in the parameters).
There are deterministic algorithms that can do it even in time polynomial in $n$, $k$, $\log d$, and $L$ ($n$ numbers of variables, $k$ sparsity, and $d$ the degree, $L$ the bit length of the coefficients), see e.g. Garg and Schost, Interpolation of polynomials given by straight-line programs. Theor. Comput. Sci. 410(27-29): 2659-2662 (2009) or Bläser and Jindal, A new deterministic algorithm for sparse multivariate polynomial interpolation. ISSAC 2014: 51-58. The first algorithms should even work over arbitrary rings. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.888239324092865, "perplexity": 180.02911039013028}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585305.53/warc/CC-MAIN-20211020090145-20211020120145-00589.warc.gz"} |