url (stringlengths 14-1.76k) | text (stringlengths 100-1.02M) | metadata (stringlengths 1.06k-1.1k)
---|---|---|
https://www.shaalaa.com/question-bank-solutions/areas-sector-segment-circle-find-area-minor-segment-circle-radius-14-cm-when-its-central-angle-is-60-also-find-area-corresponding-major-segment_1697 |
# Find the area of the minor segment of a circle of radius 14 cm, when its central angle is 60°. Also find the area of the corresponding major segment. - CBSE Class 10 - Mathematics
Concept: Areas of Sector and Segment of a Circle
#### Question
Find the area of the minor segment of a circle of radius 14 cm, when its central angle is 60°. Also find the area of the corresponding major segment. [Use π = 22/7]
#### Solution
Radius of the circle = 14 cm
Central angle, θ = 60°.
Area of the minor segment
$$= \frac{\theta}{360^\circ}\times\pi r^2 - \frac{1}{2}r^2\sin\theta$$
$$= \frac{60^\circ}{360^\circ}\times\pi\times 14^2 - \frac{1}{2}\times 14^2\times\sin 60^\circ$$
$$= \frac{1}{6}\times\frac{22}{7}\times 14\times 14 - \frac{1}{2}\times 14\times 14\times\frac{\sqrt{3}}{2}$$
$$= \frac{22\times 14}{3} - 49\sqrt{3} = \frac{22\times 14}{3} - \frac{147\sqrt{3}}{3}$$
$$= \frac{308 - 147\sqrt{3}}{3}\ \mathrm{cm}^2 \approx 17.8\ \mathrm{cm}^2$$
Area of the minor segment = $\frac{308-147\sqrt{3}}{3}$ cm².
Area of the corresponding major segment = area of the circle − area of the minor segment
$$= \frac{22}{7}\times 14^2 - \frac{308-147\sqrt{3}}{3} = 616 - \frac{308-147\sqrt{3}}{3} = \frac{1540+147\sqrt{3}}{3}\ \mathrm{cm}^2 \approx 598.2\ \mathrm{cm}^2$$
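A quick numeric check of both areas (a minimal Python sketch, not part of the original solution; π = 22/7 is used because the question prescribes it):

```python
import math

r = 14.0     # radius in cm
pi = 22 / 7  # approximation prescribed by the question

# minor segment = (theta/360) * pi * r^2 - (1/2) * r^2 * sin(theta)
minor = (60 / 360) * pi * r**2 - 0.5 * r**2 * math.sin(math.radians(60))
major = pi * r**2 - minor  # major segment = circle - minor segment
print(round(minor, 2), round(major, 2))  # -> 17.8 598.2
```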
S | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4688836336135864, "perplexity": 984.0453253028998}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540542644.69/warc/CC-MAIN-20191212074623-20191212102623-00360.warc.gz"} |
https://link.springer.com/chapter/10.1007/978-3-319-39687-3_6 | # A Comparison of Performance of Sleep Spindle Classification Methods Using Wavelets
• Elena Hernandez-Pereira
• Isaac Fernandez-Varela
• Vicente Moret-Bonillo
Conference paper
Part of the Smart Innovation, Systems and Technologies book series (SIST, volume 60)
## Abstract
Sleep spindles are transient waveforms and one of the key features that contribute to sleep stage assessment. Due to the large number of sleep spindles appearing over an overnight sleep, automating the detection of these waveforms is desirable. This paper presents a comparative study of the sleep spindle classification task involving the discrete wavelet decomposition of the EEG signal and seven different classification algorithms. The main goal was to find the classifier that achieves the best performance. The results show that Random Forest stands out over the rest of the models, achieving accuracy values of 94.08 ± 2.8% and 94.08 ± 2.4% with the symlet and biorthogonal wavelet families, respectively.
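The pipeline the paper describes (discrete wavelet decomposition of EEG segments followed by a bank of classifiers) can be sketched as follows. This is a minimal illustration, not the authors' code: the wavelet order (sym7), the decomposition level, the energy features, and the synthetic stand-in data are all assumptions.

```python
import numpy as np
import pywt
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def dwt_features(segment, wavelet="sym7", level=4):
    """Energy of each DWT sub-band as a simple feature vector."""
    coeffs = pywt.wavedec(segment, wavelet, level=level)
    return np.array([np.sum(c**2) for c in coeffs])

# Stand-in data: 200 one-second EEG segments (100 Hz), labelled spindle / non-spindle.
rng = np.random.default_rng(0)
X_raw = rng.standard_normal((200, 100))
y = rng.integers(0, 2, size=200)

X = np.vstack([dwt_features(s) for s in X_raw])
clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # chance level on random data
```

With real annotated EEG in place of the random arrays, the same loop over wavelet families (symlet, biorthogonal, ...) and classifiers reproduces the kind of comparison the paper reports.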
## Keywords
Sleep spindles · Wavelets · Machine learning
## Notes
### Acknowledgments
This research was partially funded by the Xunta de Galicia (Grant code GRC2014/035) and by the Spanish Ministerio de Economía y Competitividad, MINECO, under research project TIN2013-40686P, both partially supported by the European Union ERDF.
© Springer International Publishing Switzerland 2016
## Authors and Affiliations
• Elena Hernandez-Pereira (email author)
• Isaac Fernandez-Varela
• Vicente Moret-Bonillo
All authors: Faculty of Informatics, Department of Computer Science, University of A Coruña, A Coruña, Spain | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45974454283714294, "perplexity": 29928.746546855014}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986666959.47/warc/CC-MAIN-20191016090425-20191016113925-00179.warc.gz"}
https://www.physicsforums.com/threads/probability-urn-problem-with-replacement-kinda.616814/ | # Homework Help: Probability Urn Problem with Replacement Kinda
1. Jun 26, 2012
### RaeganW
This probability summer course I am in is so bizarre; I have no idea where the professor is going during the lectures. Luckily I have awesome notes from another Prob & Stats course, but this problem has me stumped.
1. The problem statement, all variables and given/known data
Consider two urns. The first one has N red balls and the second one has N blue balls. Balls are removed randomly from urn 1 in the following manner: after each removal from urn 1, a ball is taken from urn 2 (if urn 2 still has balls) and placed in urn 1. The process continues until all balls have been removed. What is the probability that the last ball removed from urn 1 is red?
Hint: consider first one particular red ball and compute the probability that it is the last one to be removed.
2. Relevant equations
$$P(B_i \mid A) = \frac{P(A \mid B_i)\,P(B_i)}{\sum_{i=1}^{n} P(A \mid B_i)\,P(B_i)}$$
Maybe???
3. The attempt at a solution
I figured out there will be 2n trials. During the first n trials there are n balls in urn 1, in the n+1th trial there are n-1 balls in urn 1, n+2th trial there are n-2 balls in urn 1, etc so the 2nth trial will draw the last ball out of urn 1.
In my other course, we didn't really get into Bayesian probability. We were shown how to make a tree, but we've never gone over that in this class so I don't know if the professor will accept that kind of answer. But, for a specific red ball to be drawn last:
Hope that makes sense... branches to the left are the probability the specific red ball is chosen, branches to the right is the probability the specific red ball is not chosen.
For the first n trials, the probability that the specific red ball is not chosen is $\left(\frac{n-1}{n}\right)^n$.
For the second n trials, the probability that the specific red ball is not chosen is $\prod_{i=1}^{n-1}\left(1-\frac{1}{n-i}\right)$, which I know isn't in the image but I just figured out that's a better way of writing it.
The 2nth trial the probability of choosing the specific red ball is 1, it's the last ball.
But that's only for a particular red ball, not any red ball. No idea how to scale this up to that, and I'm pretty sure there's a more succinct way of doing this... one that doesn't involve drawing so many pictures...
2. Jun 27, 2012
### awkward
I think you are close to the answer, and I don't think you need Bayes' Rule.
You have already figured out that the probability the designated red ball is not chosen in the first n draws is $\left( \frac{n-1}{n} \right) ^n$.
So assume the ball has not been drawn yet, after n draws. There are still n balls in urn 1.
What is the probability it is not drawn on the n+1 th draw?
Now there are n-1 balls left in urn 1.
What is the probability the special red ball is not drawn on the n+2 th draw?
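Following the hints, the survival probabilities on draws $n+1$ through $2n-1$ telescope: $\frac{n-1}{n}\cdot\frac{n-2}{n-1}\cdots\frac{1}{2} = \frac{1}{n}$, so a given red ball is last with probability $\left(\frac{n-1}{n}\right)^n \cdot \frac{1}{n}$, and summing over the $n$ red balls suggests $P(\text{last is red}) = \left(\frac{n-1}{n}\right)^n \to e^{-1}$. A quick Monte Carlo sketch (not from the thread) to sanity-check that value:

```python
import random

def last_is_red(n):
    # Urn 1 holds n red balls; urn 2 holds n blue refills.
    urn1 = ["red"] * n
    blues_left = n
    last = None
    while urn1:
        last = urn1.pop(random.randrange(len(urn1)))
        if blues_left:            # refill from urn 2 while it lasts
            urn1.append("blue")
            blues_left -= 1
    return last == "red"

n, trials = 10, 100_000
est = sum(last_is_red(n) for _ in range(trials)) / trials
print(est, ((n - 1) / n) ** n)    # both ~ 0.349 for n = 10
```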
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7819521427154541, "perplexity": 464.8660409705524}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676595531.70/warc/CC-MAIN-20180723071245-20180723091245-00606.warc.gz"}
https://arxiv.org/abs/hep-ph/0012024 | # Title: Report of the Working Group on Goldstone Bosons
Abstract: An overview is presented of the talks in the working group on Goldstone Bosons. Topics touched on are CP-violation in the Kaon system, rare Kaon decays, $\pi\pi$-scattering, $\phi$-meson decays, scalar mesons, form-factors and polarizabilities, $\eta$-decays, chiral symmetry breaking, connections with QCD at short-distances and effective theories for electroweak physics.
Comments: 15 pp, working group summary talk, Chiral Dynamics 2000: Theory and Experiment, July 17-22, 2000, Newport News, to be published in the proceedings
Subjects: High Energy Physics - Phenomenology (hep-ph)
DOI: 10.1142/9789812810977_0022
Report number: LU TP 00-50
Cite as: arXiv:hep-ph/0012024 (or arXiv:hep-ph/0012024v1 for this version)
## Submission history
From: Johan Bijnens [view email]
[v1] Mon, 4 Dec 2000 08:57:14 UTC (21 KB) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23174330592155457, "perplexity": 20748.545092988716}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250608062.57/warc/CC-MAIN-20200123011418-20200123040418-00502.warc.gz"} |
http://sdss.kias.re.kr/astro/Horizon-Runs/Horizon-Run4.php | ## Scientific Purpose
The Horizon Run 4 is a cosmological N-body simulation designed for the study of coupled evolution between galaxies and large-scale structures of the Universe, and for the test of galaxy formation models. Using $6300^3$ gravitating particles in a cubic box of $L_\mathrm{box} = 3150 h^{-1}\mathrm{Mpc}$, we build a dense forest of halo merger trees to trace the halo merger history with a halo mass resolution scale down to $M_\text{s} = 2.7 \times 10^{11} h^{-1} M_\odot$. We build a set of particle and halo data, which can serve as testbeds for comparison of cosmological models and gravitational theories with observations. We find that the FoF halo mass function shows a substantial deviation from the universal form with tangible redshift evolution of amplitude and shape. At higher redshifts, the amplitude of the mass function is lower, and the functional form is shifted toward larger values of $\ln (1/\sigma)$. We also find that the baryonic acoustic oscillation feature in the two-point correlation function of mock galaxies becomes broader with a peak position moving to smaller scales and the peak amplitude decreasing for increasing directional cosine μ compared to the linear predictions. From the halo merger trees built from halo data at 75 redshifts, we measure the half-mass epoch of halos and find that less massive halos tend to reach half of their current mass at higher redshifts. Simulation outputs including snapshot data, past lightcone space data, and halo merger data are available.
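As a rough consistency check on the quoted numbers (a sketch, not from the HR4 pipeline; the ~30-particle minimum FoF halo size is an assumption on my part):

```python
# Mean particle mass: m_p = Omega_m * rho_crit * (L_box / N_side)^3
rho_crit = 2.775e11            # critical density in h^2 M_sun / Mpc^3
omega_m, L_box, n_side = 0.26, 3150.0, 6300

m_p = omega_m * rho_crit * (L_box / n_side) ** 3
print(f"m_p  = {m_p:.2e} h^-1 M_sun")       # ~9.0e9
print(f"30mp = {30 * m_p:.2e} h^-1 M_sun")  # ~2.7e11, the quoted M_s
```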
## Authors
• Juhan Kim at CAC of Korea Institute for Advanced Study (KIAS; email contact: kjhan _at_ kias.re.kr)
• Changbom Park at KIAS
• Benjamin L'Huillier at KIAS (corresponding author: lhuillier _at_ kias.re.kr)
• Sungwook E. Hong at KIAS
## Cosmological model of the Horizon Runs (HR's)
All the HR's share the same cosmology.
### Cosmological parameters of the HR's
Cosmology used for the HR's
| Cosmological model | $\Omega_{m,0}$ | $\Omega_{b,0}$ | $\Omega_{\Lambda,0}$ | $n_\mathrm{s}$ | $H_0$ (km/s/Mpc) | $\sigma_8$ |
| --- | --- | --- | --- | --- | --- | --- |
| $\Lambda$CDM WMAP5 | 0.26 | 0.044 | 0.74 | 0.96 | 72 | 1/1.26 |
### Simulations specifics
| Name | Box size ($h^{-1}\mathrm{Mpc}$) | Number of CDM particles | Starting redshift | Transfer function | Initial conditions |
| --- | --- | --- | --- | --- | --- |
| HR1 | 6592 | $4120^3$ | 23 | Eisenstein & Hu (1998) | Zel'dovich |
| HR2 | 7200 | $6000^3$ | 32 | CAMB Source | Zel'dovich |
| HR3 | 10815 | $7210^3$ | 27 | CAMB Source | Zel'dovich |
| HR4 | 3150 | $6300^3$ | 100 | CAMB Source | 2LPT |
## Outputs from the simulation
• Snapshot data at $z= 0, 0.05, 0.1, 0.15, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 1,$ and $4$
• All-sky past lightcone data out to $z=1.5$
• Merger trees of FoF halos from $z = 16$ to $0$, with their gravitationally most bound member particles | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.758209764957428, "perplexity": 3777.529442535306}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187822625.57/warc/CC-MAIN-20171017234801-20171018014801-00423.warc.gz"} |
http://math.stackexchange.com/questions/245165/the-graph-of-a-smooth-real-function-is-a-submanifold?answertab=oldest | # The graph of a smooth real function is a submanifold
Given a function $f: \mathbb{R}^n \rightarrow \mathbb{R}^m$ which is smooth, show that $$\operatorname{graph}(f) = \{(x,f(x)) \in \mathbb{R}^{n+m} : x \in \mathbb{R}^n\}$$ is a smooth submanifold of $\mathbb{R}^{n+m}$.
I'm honestly completely unsure of where or how to begin this problem. I am interested in definitions and perhaps hints that can lead me in the right direction.
This is an application of the Implicit Function Theorem. You have a function $f : \mathbb{R}^n \to \mathbb{R}^m$ and you construct the graph given by: $\{ (x,y) \in \mathbb{R}^n \times \mathbb{R}^m : y = f(x) \}.$ Let me define a new map, say, $G : \mathbb{R}^{n+m} \to \mathbb{R}^m$ given by $G : (x,y) \mapsto y-f(x).$ I have defined this the way I have so that the graph of $f$ is the zero-level set of $G$, i.e. the graph of $f$ is the set of $(x,y) \in \mathbb{R}^n \times \mathbb{R}^m$ such that $G(x,y) = 0.$
In brutal detail this map is really:
$$G : (x_1,\ldots,x_n,y_1,\ldots,y_m) \mapsto (y_1-f_1(x_1,\ldots,x_n),\ldots,y_m-f_m(x_1,\ldots,x_n)) \, .$$
We need to calculate the Jacobian Matrix of $G$. A quick calculation will show you that:
$$J_G = \left[\begin{array}{c|c} -J_f & I_m \end{array}\right] ,$$
where $J_f$ is the $m \times n$ Jacobian matrix of $f$ and $I_m$ is the $m \times m$ identity matrix. The matrix $J_G$ is an $m \times (m+n)$ matrix.
To be able to apply the IFT, we need to show that $0$ is a regular value of $G$. (After all, the graph of $f$ is $G^{-1}(0).$) We can do this by showing that none of the critical points get sent to 0 by $G$. Notice that $G$ has no critical points because $J_G$ always has maximal rank, i.e. $m$. This is clearly true since the identity matrix $I_m$ has rank $m$.
It follows that the graph of $f$ is a smooth, parametrisable $(n+m)-m=n$ dimensional manifold in a neighbourhood of each of its points.
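A concrete sanity check of the rank argument (a small sympy sketch with $n = m = 2$ and an arbitrary smooth $f$ chosen for illustration):

```python
import sympy as sp

x1, x2, y1, y2 = sp.symbols("x1 x2 y1 y2")
f = sp.Matrix([x1 * x2, sp.sin(x1) + x2**2])  # an arbitrary smooth f: R^2 -> R^2

G = sp.Matrix([y1, y2]) - f                   # graph(f) is the zero set of G
J = G.jacobian([x1, x2, y1, y2])              # J_G = [ -J_f | I_2 ]

print(J)         # Matrix([[-x2, -x1, 1, 0], [-cos(x1), -2*x2, 0, 1]])
print(J.rank())  # 2 == m at every point, so 0 is a regular value
```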
+1 This is surely a better explanation than mine! – yo' Nov 26 '12 at 19:53
I'm having trouble understanding your definition of G. There are multiple functions $f_1,\ldots , f_m$ because of the values $y_1,\ldots,y_m$ to be equal to those, and if that is the case, wouldn't each of the coordinates of the range be equal to 0? – Ezea Nov 26 '12 at 19:55
The function $f$ goes from $\mathbb{R}^n$ to $\mathbb{R}^m$, i.e. you give it $n$ numbers and it gives you $m$ numbers back. The numbers you give it are $x_1,\ldots,x_n$ while the numbers it gives you are $f_1,\ldots,f_m.$ In longhand: $$f(x) = f(x_1,\ldots,x_n) = (f_1(x_1,\ldots,x_n), \ldots, f_m(x_1,\ldots,x_n)).$$ Does this help? – Fly by Night Nov 26 '12 at 20:12
The the graph of $f$ is given by the equation $G(x,y)=0$. As you said: the coordinates of the range of the graph will all be zero. This is what the IFT does: It takes a system of equations and tells you if you get a smooth, parametrisable manifold as the solution space. The trick is to find a system of equations whose solution gives you what you're interested in. – Fly by Night Nov 26 '12 at 20:23
Ok, that means my real problems lie in my understanding of IFT. That helps a considerable amount now. Thank you! – Ezea Nov 26 '12 at 20:45
The map $\mathbb R^n \to \mathbb R^{n+m}$ given by $t\mapsto (t, f(t))$ has the Jacobian matrix $\begin{pmatrix}I_n\\f'(t)\end{pmatrix}$, which has full rank $n$ for all $t$ (because of the identity submatrix). This means that its image is a manifold. Is there anything unclear about it?
How is this a proof that it is a manifold?
A manifold of rank $n$ is a set $X$ such that for each $x\in X$ there exists a neighborhood $H_x\subset X$ such that $H_x$ is isomorphic to an open subset of $\mathbb R^n$. In this case, the whole $X=\operatorname{graph}(f)$ is isomorphic to $\mathbb R^n$. The definition of a manifold differs; often it is required for the isomorphism to be a diffeomorphism, which is true here as well.
Think of it this way: a manifold $X$ of rank $2$ is something in which, wherever someone makes a dot with a pen, I can cut out a piece of $X$ and say to this person: "See, my piece is almost like a piece of paper, it's just a bit curvy."
The definition of manifold might seem strange here because you can take the neighborhood to be the whole $X$. This is not always the case: a sphere is a manifold as well, but a whole sphere is not isomorphic to $\mathbb R^2$; you have to take only some cut-out of it.
Let me try to reiterate. This matrix comes from the derivative of my graph function where $I_n$ is the derivative of $t$ and $f'(t)$ is the derivative of $f(t)$. That I believe I understand. I'm not sure about how rank $n$ means that the range is a manifold or even exactly what it is to be a submanifold. Could you please elaborate a little on that? – Ezea Nov 26 '12 at 19:38
It's been long since I had Calculus, so I'm not sure I'll give the exact explanation, but I'll try. – yo' Nov 26 '12 at 19:40
Your further explanation as well as the other answer were very helpful. +1 Thank you. – Ezea Nov 26 '12 at 20:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.925631046295166, "perplexity": 150.3824986258491}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246651727.46/warc/CC-MAIN-20150417045731-00195-ip-10-235-10-82.ec2.internal.warc.gz"} |
http://www.ams.org/joursearch/servlet/DoSearch?f1=msc&v1=65.35&jrnl=one&onejrnl=mcom | # American Mathematical Society
AMS eContent Search Results
Matches for: msc=(65.35) AND publication=(mcom) Sort order: Date Format: Standard display
[1] Robert K. Brayton, Fred G. Gustavson and Ralph A. Willoughby. Some results on sparse matrices. Math. Comp. 24 (1970) 937-954. MR 0275643.
[2] David M. Young. Convergence properties of the symmetric and unsymmetric successive overrelaxation methods and related methods. Math. Comp. 24 (1970) 793-807. MR 0281331.
[3] Ake Björck and Victor Pereyra. Solution of Vandermonde systems of equations. Math. Comp. 24 (1970) 893-903. MR 0290541.
[4] D. Kershaw. Inequalities on the elements of the inverse of a certain tridiagonal matrix. Math. Comp. 24 (1970) 155-158. MR 0258260.
[5] P. Schlegel. The explicit inverse of a tridiagonal matrix. Math. Comp. 24 (1970) 665. MR 0273798.
[6] Robert J. Herbold. A generalization of a class of test matrices. Math. Comp. 23 (1969) 823-826. MR 0258259.
[7] Richard J. Hanson and Charles L. Lawson. Extensions and applications of the Householder algorithm for solving linear least squares problems. Math. Comp. 23 (1969) 787-812. MR 0258258.
[8] Jerry A. Walters. Nonnegative matrix equations having positive solutions. Math. Comp. 23 (1969) 827. MR 0258264.
[9] P. A. Businger. Reducing a matrix to Hessenberg form. Math. Comp. 23 (1969) 819-821. MR 0258255.
[10] Victor Lovass-Nagy and David L. Powers. Reduction of functions of some partitioned matrices. Math. Comp. 23 (1969) 127-133. MR 0238480.
[11] Peter A. Businger. Extremal properties of balanced tri-diagonal matrices. Math. Comp. 23 (1969) 193-195. MR 0238476.
[12] C. H. Yang. On designs of maximal $(+1,\,-1)$-matrices of order $n\equiv 2({\rm mod}\ 4)$. II. Math. Comp. 23 (1969) 201-205. MR 0239748.
[13] C. W. Gear. A simple set of test matrices for eigenvalue programs. Math. Comp. 23 (1969) 119-125. MR 0238477.
[14] D. Kershaw. The explicit inverses of two commonly occurring matrices. Math. Comp. 23 (1969) 189-191. MR 0238478.
[15] Beresford Parlett. Global convergence of the basic ${\rm QR}$ algorithm on Hessenberg matrices. Math. Comp. 22 (1968) 803-817. MR 0247759.
[16] Harold Willis Milnes. A note concerning the properties of a certain class of test matrices. Math. Comp. 22 (1968) 827-832. MR 0239743.
[17] T. L. Markham. An iterative procedure for computing the maximal root of a positive matrix. Math. Comp. 22 (1968) 869-871. MR 0239741.
[18] Choong Yun Cho. On the triangular decomposition of Cauchy matrices. Math. Comp. 22 (1968) 819-825. MR 0239740.
[19] G. Dahlquist, B. Sjöberg and P. Svensson. Comparison of the method of averages with the method of least squares. Math. Comp. 22 (1968) 833-845. MR 0239742.
[20] L. A. Hageman and R. B. Kellogg. Estimating optimum overrelaxation parameters. Math. Comp. 22 (1968) 60-68. MR 0229371.
[21] C. H. Yang. On designs of maximal $(+1,\,-1)$-matrices of order $n\equiv 2({\rm mod}\ 4)$. Math. Comp. 22 (1968) 174-180. MR 0225476.
[22] T. L. Jordan. Experiments on error growth associated with some linear least-squares procedures. Math. Comp. 22 (1968) 579-588. MR 0229373.
[23] Erwin H. Bareiss. Sylvester's identity and multistep integer-preserving Gaussian elimination. Math. Comp. 22 (1968) 565-578. MR 0226829.
[24] Gilbert C. Best. Powers of a matrix of special type. Math. Comp. 22 (1968) 667-668. MR 0226830.
[25] S. Charmonman and R. S. Julius. Explicit inverses and condition numbers of certain circulants. Math. Comp. 22 (1968) 428-430. MR 0226831.
[26] Henry E. Fettis and James C. Caslin. Eigenvalues and eigenvectors of Hilbert matrices of order $3$ through $10$. Math. Comp. 21 (1967) 431-441. MR 0223075.
[27] F. D. Burgoyne. Practical $L^p$ polynomial approximation. Math. Comp. 21 (1967) 113-115. MR 0224254.
[28] J. Schönheim. Conversion of modular numbers to their mixed radix representation by a matrix formula. Math. Comp. 21 (1967) 253-257. MR 0224252.
[29] Leopold B. Willner. An elimination method for computing the generalized inverse. Math. Comp. 21 (1967) 227-229. MR 0223082.
[30] I. Borosh and A. S. Fraenkel. Exact solutions of linear equations with rational coefficients by congruence techniques. Math. Comp. 20 (1966) 107-112. MR 0187379.
(Each entry links to the abstract, references, and a free PDF of the article.)
Results: 1 to 30 of 52 found Go to page: 1 2 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9750871658325195, "perplexity": 1841.5510262790046}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320736.82/warc/CC-MAIN-20170626115614-20170626135614-00297.warc.gz"} |
https://hal.archives-ouvertes.fr/hal-01792508 | The Infinitesimal Moduli Space of Heterotic G$_{2}$ Systems
Abstract : Heterotic string compactifications on integrable G$_{2}$ structure manifolds Y with instanton bundles ${(V,A), (TY,\tilde{\theta})}$ yield supersymmetric three-dimensional vacua that are of interest in physics. In this paper, we define a covariant exterior derivative ${\mathcal{D}}$ and show that it is equivalent to a heterotic G$_{2}$ system encoding the geometry of the heterotic string compactifications. This operator ${\mathcal{D}}$ acts on a bundle ${\mathcal{Q}=T^*Y \oplus {\rm End}(V) \oplus {\rm End}(TY)}$ and satisfies a nilpotency condition ${\check{{\mathcal{D}}}^2=0}$ , for an appropriate projection of ${\mathcal D}$ . Furthermore, we determine the infinitesimal moduli space of these systems and show that it corresponds to the finite-dimensional cohomology group ${\check H^1_{\check{{\mathcal{D}}}}(\mathcal{Q})}$ . We comment on the similarities and differences of our result with Atiyah’s well-known analysis of deformations of holomorphic vector bundles over complex manifolds. Our analysis leads to results that are of relevance to all orders in the ${\alpha'}$ expansion.
Document type: Journal article
Commun.Math.Phys., 2018, 360 (2), pp.727-775. 〈10.1007/s00220-017-3013-8〉
https://hal.archives-ouvertes.fr/hal-01792508
Contributor: Inspire Hep
Submitted on: Tuesday, May 15, 2018 - 15:13:33
Last modified on: Wednesday, January 16, 2019 - 10:21:52
Citation
Xenia De La Ossa, Magdalena Larfors, Eirik E. Svanes. The Infinitesimal Moduli Space of Heterotic G$_{2}$ Systems. Commun.Math.Phys., 2018, 360 (2), pp.727-775. 〈10.1007/s00220-017-3013-8〉. 〈hal-01792508〉
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9315876364707947, "perplexity": 1492.6728832519236}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583660877.4/warc/CC-MAIN-20190118233719-20190119015719-00019.warc.gz"}
https://devel.isa-afp.org/entries/Kleene_Algebra.html | # Kleene Algebra
Title: Kleene Algebra
Authors: Alasdair Armstrong, Georg Struth and Tjark Weber (tjark /dot/ weber /at/ it /dot/ uu /dot/ se)
Submission date: 2013-01-15
Abstract: These files contain a formalisation of variants of Kleene algebras and their most important models as axiomatic type classes in Isabelle/HOL. Kleene algebras are foundational structures in computing with applications ranging from automata and language theory to computational modeling, program construction and verification. We start with formalising dioids, which are additively idempotent semirings, and expand them by axiomatisations of the Kleene star for finite iteration and an omega operation for infinite iteration. We show that powersets over a given monoid, (regular) languages, sets of paths in a graph, sets of computation traces, binary relations and formal power series form Kleene algebras, and consider further models based on lattices, max-plus semirings and min-plus semirings. We also demonstrate that dioids are closed under the formation of matrices (proofs for Kleene algebras remain to be completed). On the one hand we have aimed at a reference formalisation of variants of Kleene algebras that covers a wide range of variants and the core theorems in a structured and modular way and provides readable proofs at text book level. On the other hand, we intend to use this algebraic hierarchy and its models as a generic algebraic middle-layer from which programming applications can quickly be explored, implemented and verified.
BibTeX: @article{Kleene_Algebra-AFP, author = {Alasdair Armstrong and Georg Struth and Tjark Weber}, title = {Kleene Algebra}, journal = {Archive of Formal Proofs}, month = jan, year = 2013, note = {\url{https://isa-afp.org/entries/Kleene_Algebra.html}, Formal proof development}, ISSN = {2150-914x}, }
License: BSD License
Used by: KAD, KAT_and_DRA, Multirelations, Quantales, Regular_Algebras, Relation_Algebra
Status: [ok] This is a development version of this entry. It might change over time and is not stable. Please refer to release versions for citations. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6135578155517578, "perplexity": 2647.2448838535165}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153931.11/warc/CC-MAIN-20210730025356-20210730055356-00499.warc.gz"}
https://brilliant.org/problems/dont-try-to-complicate-it/ | # Don't try to complicate it!
Algebra Level 3
$$\frac{a^3 + 1}{a^5 - a^4 - a^3 + a^2}$$
Let $$a$$ be one of the roots of the equation $$x^2-x-4=0$$. The value of the expression above can be written in the form $$\dfrac{m}{n}$$. Submit your answer as $$m+n$$.
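A quick symbolic check (an editorial sketch, not part of the problem page; assuming $$m/n$$ is in lowest terms, the printed value $$5/16$$ gives $$m+n=21$$):

```python
import sympy as sp

a = sp.symbols("a")
expr = (a**3 + 1) / (a**5 - a**4 - a**3 + a**2)

# Substitute either root of x^2 - x - 4 = 0 and simplify.
for root in sp.solve(a**2 - a - 4, a):
    print(sp.nsimplify(sp.simplify(expr.subs(a, root))))  # 5/16 for both roots
```

The shortcut behind the title: the expression reduces to (a² − a + 1)/((a² − a)²), and a² − a = 4 for both roots, hence 5/16.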
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7216759920120239, "perplexity": 284.40270048019437}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645604.18/warc/CC-MAIN-20180318091225-20180318111225-00127.warc.gz"}
https://mathhelpboards.com/threads/locus-in-the-complex-plane.811/ | # Locus in the complex plane.
#### jacks
##### Well-known member
Apr 5, 2012
Find the area of the region bounded by the locus of $z$ satisfying $$\displaystyle \arg \left(\frac{z+5i}{z-5i}\right) = \pm \frac{\pi}{4}.$$
#### Mr Fantastic
##### Member
Jan 26, 2012
Find the area of the region bounded by the locus of $z$ satisfying $$\displaystyle \arg \left(\frac{z+5i}{z-5i}\right) = \pm \frac{\pi}{4}.$$
What have you tried?
#### Mr Fantastic
##### Member
Jan 26, 2012
Find the area of the region bounded by the locus of $z$ satisfying $$\displaystyle \arg \left(\frac{z+5i}{z-5i}\right) = \pm \frac{\pi}{4}.$$
You can take a geometric approach.
Your relation can be written $$\arg(z + 5i) - \arg(z - 5i) = \pm \frac{\pi}{4}$$, that is, $$\alpha - \beta =\pm \frac{\pi}{4}$$.
Consider the line segment joining $z = 5i$ and $z = -5i$ as a chord of a circle, and consider the rays $$\arg(z + 5i) = \alpha$$ and $$\arg(z - 5i) = \beta$$ subject to the restriction $$\alpha - \beta =\pm \frac{\pi}{4}$$. Consider the intersection of these rays and the angle between them at their intersection point. The angle is constant .... Now think of a circle theorem involving angles subtended by the same arc at the circumference .....
It's not hard to see that you have a circle with 'holes' at $z = 5i$ and $z = -5i$ (why?).
Now your job is to determine the radius of this circle and use it to get the area.
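Completing the construction numerically (an editorial sketch, not from the thread): the inscribed angle $\pi/4$ corresponds to a central angle $\pi/2$, so $10 = 2R\sin(\pi/4)$ gives $R = 5\sqrt{2}$; each major-arc region has area $\pi R^2 - \frac{R^2}{2}(\theta - \sin\theta)$ with $\theta = \pi/2$, i.e. $37.5\pi + 25$, and the two halves glued along the chord give $75\pi + 50 \approx 285.6$. A Monte Carlo check:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2_000_000
z = rng.uniform(-13, 13, N) + 1j * rng.uniform(-8, 8, N)

# The bounded region is where the chord subtends an angle of at least pi/4.
inside = np.abs(np.angle((z + 5j) / (z - 5j))) >= np.pi / 4

print(inside.mean() * 26 * 16, 75 * np.pi + 50)  # both ~ 285.6
```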
Last edited: | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9834454655647278, "perplexity": 379.9511440435738}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107871231.19/warc/CC-MAIN-20201020080044-20201020110044-00447.warc.gz"} |
https://www.physicsforums.com/threads/potential-energy-for-magnetic-fields.269770/ | # Potential energy for magnetic fields
1. Nov 6, 2008
### jaejoon89
1. The problem statement, all variables and given/known data
A circular 10 turn coil that has a radius of 0.05 m and current of 5A lies in the xy plane with a uniform magnetic field B = 0.05 T i + 0.12 T k (i and k are the unit vectors). What's the potential energy for the system???
2. Relevant equations
U = -m·B, where m is the dipole moment; for an N-turn coil of area A its magnitude is m = NIA
3. The attempt at a solution
B = sqrt((0.05 T)^2 + (0.12 T)^2) = 0.13 T
So for this I would get U = -0.00511 J, but the answer key says -0.000472 J... where's the mistake???
2. Nov 6, 2008
### jaejoon89
I'm assuming from the answer that the magnetic moment must not be aligned with the field, but how do you know this given the problem? And how do you calculate this?
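For reference, a minimal numeric sketch of the dot-product point made in the next post (the coil lies in the xy plane, so m points along k and only the k-component of B contributes):

```python
import math

N, I, r = 10, 5.0, 0.05      # turns, amps, metres
Bx, Bz = 0.05, 0.12          # tesla (i and k components)

m = N * I * math.pi * r**2   # dipole moment magnitude, directed along k
U = -(m * Bz)                # U = -m . B; the i component is orthogonal to m
print(U)                     # about -0.0471 J with these numbers
# Note: the key's -0.000472 J matches the same formula with r = 0.005 m,
# which hints at a centimetre/metre slip somewhere in the quoted numbers.
```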
3. Oct 5, 2010
### Olfus
Well, we have to keep in mind that -m*B is actually a "dot product". ;) | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9443336129188538, "perplexity": 943.8873539323033}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660242.52/warc/CC-MAIN-20160924173740-00282-ip-10-143-35-109.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/actions-and-cross-sections.819049/ | Actions and Cross Sections
1. Jun 14, 2015
Cluelessluke
Can someone point towards how to derive that the cross section is proportional to the imaginary part of the Action? Also, I thought the Action was a real number?
Thanks!
2. Jun 14, 2015
fzero
You are probably referring to the Optical Theorem. In that case, $S$ is not the action but the scattering matrix (S-matrix), which is basically $S = e^{iHt}$. An explanation of the scattering matrix and Optical Theorem can be found in http://www.itp.phys.ethz.ch/research/qftstrings/archive/12HSQFT1/Chapter10.pdf [Broken].
Last edited by a moderator: May 7, 2017
3. Jun 14, 2015
Cluelessluke
Thanks for the reply! To be more specific, I'm referring to equation (14) in http://arxiv.org/pdf/1206.5311v2.pdf.
They have an e^{-2 Im(S)} contribution in their cross section (where I believe S is the action, not the S-matrix) and I'm having a hard time seeing where it comes from.
Last edited: Jun 14, 2015
4. Jun 14, 2015
fzero
Their equation (3) expresses the cross-section in terms of the S-matrix and they credit reference [7] with a calculation in the path-integral formalism that introduces the action. It is natural in the path-integral formulation that the action would appear. Afterwards, they suggest that the expression is dominated by a saddle-point in a certain limit that takes $g\rightarrow 0$. This saddle-point approximation is closely related to the WKB approximation that should be familiar from ordinary QM. What is happening is that, in this limit, the classical paths (critical points of the action) dominate the path integral, so the path integral expression can be approximated by their result $\exp W$. As to why the action can be complex, I would suggest looking at their references for the details that they're clearly leaving out. There is some discussion of working in the Euclidean formalism, but I can't follow them well enough to give a concrete explanation.
You should try to understand the details of their arguments (perhaps some of their references might give further details), but you should know that the fact that they can express the cross section in terms of the imaginary part of the action is not a general rule. The Optical Theorem is general, but the expression from this paper relies on this physical problem having the correct properties to allow the saddle point approximation to work. There are many examples of physics problems where the saddle point approximation is useful, so it's worth learning why it works here. However the statement you present in your OP is most definitely not true in general.
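To illustrate the saddle-point (Laplace/WKB-type) approximation described above with a toy example unrelated to the paper's physics: for $\int e^{-N f(x)}\,dx$ with a well-separated global minimum at $x_0$, the integral is approximated by $e^{-N f(x_0)}\sqrt{2\pi/(N f''(x_0))}$, and the approximation improves in the analogue of the $g \rightarrow 0$ limit, i.e. large $N$:

```python
import numpy as np
from scipy.integrate import quad

f   = lambda x: (x**2 - 1) ** 2 + 0.5 * x   # toy "action"
fp  = lambda x: 4 * x * (x**2 - 1) + 0.5
fpp = lambda x: 12 * x**2 - 4

x0 = -1.03                                  # seed near the global minimum
for _ in range(20):                         # Newton steps on f' to locate x0
    x0 -= fp(x0) / fpp(x0)

for N in (5, 20, 100):
    exact  = quad(lambda x: np.exp(-N * f(x)), -10, 10)[0]
    saddle = np.exp(-N * f(x0)) * np.sqrt(2 * np.pi / (N * fpp(x0)))
    print(N, exact, saddle)                 # ratio -> 1 as N grows
```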
5. Jun 14, 2015
Cluelessluke
Great! Thanks so much for your help! | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8813278675079346, "perplexity": 319.7926044534537}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589270.3/warc/CC-MAIN-20180716115452-20180716135452-00253.warc.gz"} |
https://www.eurotrib.com/story/2007/6/17/111516/831 |
## A personal reaction to 9/11
by Jerome a Paris Sun Jun 17th, 2007 at 11:15:16 AM EST
The text below was sent to me by a regular reader, who is unable to post it himself for professional reasons. I am not endorsing it (see my reaction in the first comment below), but I found it an interesting and enlightening story.
At his request, a few real names have been redacted.
(Everything below by private).
I was never someone you could label and certainly never one to hold to a party line if I felt the party was wrong. Always a registered Democrat I vote based on conscience and not always Democrat. But I did vote for Bill Clinton twice and never regretted those votes - at least not until September 11. That day changed me irreversibly and had a profound impact on how I view politics.
September 11 was a beautiful sunny day right after our end-of-summer Labor Day holiday. I had a neighborhood reunion 2 weeks prior which we have every five years. I renewed old ties with people including [childhood friend] who remembered that I taught him to swim the butterfly stroke. My office was on 57th Street and Third Avenue and across the hall from a major public relation firm. My son had just started his senior year at Duke University in economics and was looking forward to the litany of investment banking firm interviews in the Fall. Just before the holiday weekend I had to cancel a west coast trip because of a sudden meeting I had in New York that was more important. I was supposed to be on the American Airlines 9:00 AM flight out of JFK to Los Angeles.
The first notice I got was from my wife who said that a plane had crashed into the World Trade Center but the initial news report seemed to indicate that it was a private jet. I walked across the hall to the P.R. firm because they had a television in their conference room.
When I walked in the entire firm was in front of the TV and crying. It was not a small plane and the reason for their crying was that they were the public relations firm for Cantor Fitzgerald whose offices were at the top of the World Trade Center north tower. They knew their clients were probably dead. Then came the second jet into the South tower. After that everything else seemed to happen in flash.
My wife had already been on the phone with the wives of two of my friends who worked in the South tower. The last their wives heard was that they were leaving which was before the plane hit the south tower and they wanted me to meet them and make sure they were okay. Then came the plane crashing into the Pentagon, and then another somewhere in Pennsylvania.
Before I could leave my office, both towers came crashing down, and Arab Muslims were literally celebrating and dancing in the streets of Paterson, New Jersey when the towers collapsed. There was a report about some Arab-looking passengers escaping from grounded aircraft that had been set to depart New York, including the American Airlines flight I was originally booked on. So we knew that there were more intended flights to be used by these terrorists.
The streets of Manhattan were empty of all vehicles except emergency and military vehicles. Giuliani had shut down all means of transportation in and out of Manhattan and had directed people to evacuate certain landmark skyscrapers. He also made sure that all bridges and tunnels were evacuated, and he directed Governor Pataki to post armed military personnel at every corner and in every major building. People were wandering in stunned silence. Giuliani had forewarned of maybe up to 10,000 dead.
As I was rushing downtown to meet my two friends who I had hoped got out of the World Trade Center I kept muttering (referring to Bill Clinton), "You SOB, you really did it now" over and over again. I was also thankful that my son was not a year older, because he could have been working for one of the investment banking firms in the WTC and would have been incinerated with others.
I got down to about 14th Street and got cellphone calls that both Tim and Dave were okay. We made up a meeting point and did meet up. Both were covered in soot and were in shock. When the first plane hit the WTC north tower, Tim heard an explosion and falling debris. As he looked out his window that faced that North tower he actually saw people making the decision to jump from higher floors rather than probably be incinerated. He left before the second jet hit his tower, as did Dave.
We waited in an ad agency I knew down there until the danger had subsided, and then we got on the Long Island Railroad.
In my neighborhood there were seven 4 year old children at Miss Sue's Nursery School who came home that day to find that they no longer had a daddy and in one case a daddy and mommy.
The following day I got a call from one of my friends that [childhood friend], who I had met at the reunion, was killed on the jet that was crashed into the Pentagon. His crime? He went to visit his daughter who had just started college in Washington DC before going off on his business trip to Los Angeles. I was to find out that one of my son's best friends in college lost his father that day.
About a month later I got a strange call from a woman in Massachusetts who called my phone inadvertently because she had gotten a call that informed her that they recovered the driver license of her 25 year old son who was killed that day in the WTC. She just started to pour everything out to me even when she knew it was the wrong number about how her son graduated Boston College and had a great career in finance with a new apartment in Manhattan.
These are indelible images and memories which is why I said that September 11 changed everything for me and for so many people.
-------------------------------
I am one of those people who are naturally curious and are always thirsting for more knowledge and for answers.
My initial reaction ended up being right as I learned more and more information over time. The key learning came about a year ago as we were coming to the five year anniversary of 9/11 and I was on the Editorial Board of an old public policy magazine. We agreed to cover 9/11 differently and look to see if we are winning or losing which required having in-depth understanding of the development of these radical Islam groups, their intentions, and whether they are or are not achieving them.
Wars are fought for political and ideological power and superiority. Wars are also defined by a series of battles, of which you may lose some but hope to win most, as long as you achieve your desired objective.
We have been engaged in a war with a fascist form of Islam for decades. The "we" includes all forms of civilization other than this radical fundamentalist form of Islam. So it includes the USA, Europe (East and West), as well as all of Asia and the Pacific.
The enemy are all groups that have branched off the old Muslim Brotherhood from the 1920's, which include Al Qaeda, Hamas, and Hezbollah. Some are Shia based and others are Sunni based. While they may hate each other, what they have in common is hatred for a common enemy, which is us. Other than Iran, none are nation states, and they operate across boundaries outside the bounds of conventional warfare and treaties. They use terror as their primary tactic to break down the will of the people and therefore the governments.
If you saw the movie Syriana, you got a glimpse into the recruiting and indoctrination methods these groups have used for decades within the educational systems they created in all the Arab countries, with the help of their host governments, in exchange for the promise of allowing those governments to continue in power.
Their goals are absolute rule in all Muslim countries and then the elimination of all modern western society. They are far more focused and committed to their cause than communists ever were. They are very smart, very resourceful, very well financed, and very, very patient. But they do believe their time is now.
It is a mistake for any political leader not to believe what they write and preach. We have already learned that lesson.
The U.S. has made more mistakes regarding these people than any other country, going back to the 1950's. We turned a blind eye to these groups and their beliefs and teachings. Carter became obsessed with human rights violations by the Shah, which opened the floodgates for the return of the Ayatollah Khomeini and the fundamentalist revolution in Iran. Reagan turned his back on the threat when the Marine barracks was blown up by Hezbollah in Lebanon. Bush Sr. let Hussein maintain his power to sponsor terrorists. Clinton treated the first WTC attack as a police action and all the other attacks on U.S. personnel abroad as crimes and not as acts of war. Throughout this time we did not recognize the threat of radical Islam and were unprepared for what we were to confront. When Clinton finally did recognize it, our intelligence capabilities were in a state of chaos, we had zero border and airline security, and he failed to take out bin Laden on multiple occasions for fear of political consequences. During the Clinton administration they even had the opportunity to rectify airport and airline security with measures recommended by a Commission, but they were all dismissed by the administration and Republicans under pressure from the airline industry. They were measures that would have prevented those 19 hijackers from hijacking any jets, or even boarding them.
What should not have been in doubt since 9/11 is that there is a war between all radical Muslim groups and us. It has been called a clash of cultures, but that dismisses the political aspects of this. As did the various fascist groups in the 1930's, these Islamic fascist groups also want ultimate power and are fanatical enough to stop at nothing to achieve their goals. As with Hitler, they cannot be negotiated with or appeased. They actually view appeasement as a sign of weakness and will use it against their enemies. And as in the lead-up to World War II, the blame for everything centered on Jews.
When it comes to this War, President Bush's moves and adjustments were necessary, proper, and lawful. The U.S. had (and may still have) Al Qaeda and Hezbollah sleeper cells within our boundaries. They were cells that were still being financed through international means. And there were countries and leaders that were harboring them, training them, and supporting them. Bush was dealing with a dismantled CIA and FBI as well as a greatly reduced military, because we thought the war (Cold War) was over and we were in a false sense of security.
The Patriot Act was necessary in order to infiltrate all of the Muslim groups in this country and determine who were threats. In spite of all the protests, there is not a single shred of evidence of anyone's civil rights being violated with the Patriot Act. The NSA program of listening in on overseas telephone conversations from known terror suspects was also necessary. Again, in spite of charges being made, there is not a single shred of evidence that this program was abused and violated anyone's civil rights. The Swift program was necessary to track down the money trail leading to these terror groups. And again, in spite of charges of invading people's bank records, there is not a shred of evidence that anyone's rights were violated. The net result was that the U.S. was able to rapidly round up terror suspects in the U.S. (and hopefully those bastards that were dancing in celebration in Paterson) and ship them down to Gitmo. There has also been successful worldwide coordination of intelligence to break up other potential attacks, particularly after Madrid and London.
Was it a mistake to invade Iraq? As I said, the Middle East was the nest for this fascist Islam and its propagation. It was a self-feeding system that had to be broken. I agreed, as did nearly everyone in this country, that the invasion to take out Hussein was correct. But the war was fought on a shoestring and accomplished nothing more than deposing Hussein. There should have been forces to immediately close off the borders to Syria and Iran, and through saber rattling we should have threatened to go further if they interfered. That was not done. In addition, they disbanded Hussein's military and police rather than give them the chance to switch sides and thereby secure the country immediately. Those were two enormous mistakes that have cost thousands of American lives.
For those that criticize the Bush administration for usurping the Constitution and our laws with their efforts to secure this country I would say, "Prove it," because other than hyperbole and insinuations there has been no credible evidence of so-called "lying."
Those, like Hillary Clinton, who have conveniently switched sides on the Iraq issue by revealing, "If I knew then what I know now," should understand that a President does not get do-overs. A President must have the clarity to make a decision, the conviction of that decision, and the courage to see that decision through. That disqualifies Hillary Clinton as a qualified candidate for President.
Edwards' idiotic pronouncement that "the war on terror is just a bumper sticker slogan" is further proof of his superficial used-car salesman mentality when it comes to politics. Obama is just not qualified enough yet.
When you listened to the Republican debates you heard a number of their candidates clearly define this war against radical Islam, while not a single Democrat could (or would) define it.
Frankly, the only qualified candidate the Democrats have is not running, and that would be former Senator Bob Kerrey, who is a highly decorated former Navy SEAL and was on the 9/11 Commission.
When the founding fathers finally created the form of federal government we have now, they envisioned the primary role of the President as the commander-in-chief to lead the army against outside enemies. In that role Bill Clinton was a failure and G.W. Bush has done his job. And that is why, whenever the American people choose a President during a time of war, they never elect an anti-war candidate.
September 11 was a wake-up call. While some Americans have fallen back asleep into some sort of state of denial, fortunately most Americans do understand that we are in danger and at war beyond Iraq.
I think that will give you a perspective of what really happened that day and what most Americans truly believe.
While there is little I can comment on regarding the personal experience of 9/11, I'd like to react to the second part, i.e. the political reaction, which places 9/11 in the context of a total war supposedly waged against the rest of the world by extremist Islamists.

There are a number of major omissions in that narrative: the fact that the US armed and supported the Afghan Mujahideen against the Soviets, before they turned against America; the historical context in Iran, whereby the US engineered a coup against the democratically elected government of Mossadegh; the fact that Saddam Hussein was a (ferociously) secular dictator. To a large extent, Islamism was born as a reaction against the dictatorial and corrupt regimes of the region, and has included anti-American and/or anti-Western sentiments because the West supported (and supports) the hated regimes they are fighting.

In many of these countries, religion has been the only politically acceptable (and tolerated) outlet for political frustration, thanks to its social role (help to the poor), its spiritual role, and its ability to lead multitudes. That it turned against the West, and has become associated in local populations' minds with freedom, has come from our brainless support for the dictators whom we felt would be more favorable to us and to our access to oil. (It is of course ironic that a country like Iran is more open today to Western investment than Saudi Arabia.) Seeing Islamism as an all-encompassing movement neglects the local roots, and local grievances, of most of its members. Maybe it's too late for non-meddling by the West to be sufficient to cure that ill, but it will ultimately be necessary - and we certainly haven't tried it yet.

The only successes in the fight against Islamist terrorism have come from good old-fashioned police and intelligence work. So mocking Clinton for taking the police route to the first WTC bombing is wrong, in my view: it was the correct way to respond, and it was successful. That it was not sufficient to prevent other attacks says more about the persistent nature of the underlying grievances than about the failure of the law enforcement route.

As to the claim that no civil rights have been breached, it is clearly disingenuous. The evidence pointing the other way is overwhelming: the several recent court decisions about Guantanamo, the inability of the US government to sentence any of the supposed terrorists in that base, and the examples of people like Maher Arar (the Canadian sent to Syria to be tortured) all point to grievous violations that show that we are giving up all that we're supposed to stand for in a misguided (and doomed to fail) attempt to sink to the level of the terrorists to fight them. In the long run, we're all dead. John Maynard Keynes
Well, I suppose this is included as being "controversial" and likely to promote discussion - even as some of the main usual contributors are recovering/returning from the great meet-up in Paris (thanks again Jerome). But it's just sad rubbish, and Jerome's points against it are obviously valid to anyone who has really read a bit of informed writing on the subject. It's sad that someone evidently not totally stupid believes such junk, and it helps explain to non-Americans why Bush got elected a SECOND time (wasn't the first time an obvious enough gross error?!). Normally I'd try to back up such a dismissal with chapter and verse, but Jerome has already given some very good reasons - and I'm one of those recovering from the meet-up, and from having eaten and drunk too much at M's parents'. I suspect someone who writes stuff like this is already pretty far beyond the reach of rational argument. But here are a few links anyway, with an informed rational approach (as usual) from Fisk and Chomsky: Maybe it's because I'm a Londoner - that I moved to Nice.
Thanks for posting this, I always wondered what happened to that guy('s opinion).
José Padilla is a normal American. Remember "Innocent until proven guilty"? He has not been proven anything, and the fact that he was tortured actually prevents him from ever being convicted now, even if he was guilty. That's the whole point of following legal procedure: that there be no doubt that those convicted are guilty. It does not work perfectly in normal circumstances, but if you actively corrupt the process, there's no way of knowing. And as to normal Americans, just look at all the homonyms that get stuck on the no-fly list and cannot get out of it, whatever their good faith. And what happens if you're unlucky enough to be the neighbor of someone who turns out to be a terrorist (or a designated terrorist, as we don't know), and had a barbecue with him, and are forever tainted as a terrorist associate because he was your neighbor and you were sociable? You think that doesn't happen? In the long run, we're all dead. John Maynard Keynes
"For those that criticize the Bush administration for usurping the Constitution and our laws with their efforts to secure this country I would say, 'Prove it,' because other than hyperbole and insinuations there has been no credible evidence of so-called 'lying.'"

That's laughable. The Bush administration has clearly breached the law and only corrected it ex post with the NSA wiretapping. It has de facto abolished the central elements of habeas corpus, allowing the president to hold even American citizens indefinitely and without trial. It has legalised torture. Most of all, it has invaded and completely destroyed a whole country that had nothing at all to do with 9/11. As for the "lies": the Bush administration lies constantly as a matter of habit. For the very latest example, just watch this: http://www.crooksandliars.com/Media/Download/18415/1/TDS-TonySnow-lying.wmv
The question that is not addressed in this diary is 'What drives Islamic Fundamentalism?' It does not come out of nowhere, and it certainly isn't part of the religion. The Crusades - The Jihads. Are they mirrors, or is the latter a time-lagged reaction to the former? You can't be me, I'm taken
I agree that Islamic Fundamentalism did not appear out of thin air. However I would want to be specific. There is nothing wrong with Islamic Fundamentalism or fundamentalist principles in any religion. The following of a religion based on very orthodox tenets is acceptable and should not interfere with the rights of other people. This perverse form of Islam that we all connect with "terror" crosses beyond religious boundaries and into political power. It has been called fascism derived out of religion (Islam), or just a radical form of Islam. I would agree that the growth of this movement began in the early 20th century and not in 2001. I have not studied the Crusades in quite some time, but it would be a worthwhile study in comparisons, although I would imagine it would upset certain groups. But what drives it? There are underlying issues that have been there for many decades and were, and still are not, addressed. To be blunt, we (Europe and the US) took Arabs for granted for too many years. We supported whoever their leaders were, primarily for reasons of oil. But we never sought to develop that part of the world and bring greater prosperity and opportunity to its people as we did in other parts of the world. In the absence of alternatives, these people turned to the local radical mosques, which promised them basic necessities, provided them with people to blame (USA, Europe and Jews), and taught their children this radical form of Islam. The solution to this requires more than bullets, spies, border security, and prisons. The full answer is much more complex and I have yet to hear any of the candidates articulate it.
"There is nothing wrong with Islamic Fundamentalism or fundamentalist principles in any religion." I disagree. Implicit in such fundamentalism is the dismissal of law and natural morality.

"It has been called fascism derived out of religion (Islam)" Called by whom?

"I would agree that the growth of this movement began in the early 20th century and not in 2001." Do you still mean the Muslim Brotherhood as the single origin of a monolithic movement? By that logic, you could go back to the first Wahhabites, or even the first Salafists.

"took Arabs for granted" What about non-Arab Muslims? Khomeinist Shi'a fundamentalism and Pakistani-origin Sunni fundamentalism aren't Arab-based, nor is Ferghana valley fundamentalism or the tribal madness of Afghan fundies.

"we never sought to develop that part of the world" 'Develop that part of the world'? We? What about their democratic will and self-control? So who wants to rule it all?

"bring greater prosperity and opportunity to its people as we did in other parts of the world" Your logic fails. Our oil money already brought great prosperity into the parts where there is oil; there is misery in the Muslim world where there is no oil, plus Iraq, where the US invaded with claims of bringing freedom. As for greater prosperity and opportunity brought to other parts of the world, list them... (And today anyway, it's the rest of the world that brings prosperity to the US, by feeding its credit binge.)

"In the absence of alternatives" ...including alternatives destroyed with the active help of the CIA.

"The solution to this requires more than bullets, spies, border security, and prisons." It requires none of those, or at least not in the way currently applied.

"I have yet to hear any of the candidates articulate it." What about the incumbent? And why do you think your country has a mandate to 'solve' these questions, especially with its present record of success at solving other people's problems? *Lunatic*, n. One whose delusions are out of fashion.
"Implicit in such fundamentalism is the dismissal of law and natural moral." - fundamentalism is a strict adherence to religious practices. I am not sure what you mean by "dismissal of law and natural moral?" Please explain. "fascist form of religion." - there have been many political commentators who have used this term. It depends on whose definition of fascism you use. "you could go back to the first Wahhabites, or even the first Salafists." - you could but these modern movements took their inspiration from the Muslim Brotherhood after the creation of Israel and the presence of western oil interests in the region. 'Develop that part of the world'? - economic development and investment. "Our oil money already brought great prosperity into the parts where there is oil." - great prosperity for who exactly? "What about the incumbent?" - Bush is not going to be President in 2009 and I would not expect coherent insight on this matter from him other than "it's hard work." "solving other people's problems?" - no but they became our problem on 9/11/01 and the root cause of that needs to be addressed.
"you could go back to the first Wahhabites, or even the first Salafists." - you could but these modern movements took their inspiration from the Muslim Brotherhood after the creation of Israel and the presence of western oil interests in the region. Wahhabism, goes back to the eighteenth century. Although Wahhabites usually call themselves Salafists, modern salafism started at the beginning of the twentieth century, before the creation of the Muslim Brotherhood by Hassan al-Banna, who called himself a salafist and a soufi. And long before the creation of Israel. You should do some research and reading before posting... "Dieu se rit des hommes qui se plaignent des conséquences alors qu'ils en chérissent les causes" Jacques-Bénigne Bossuet
You are not reading what I wrote.
"What should not been in doubt since 9/11 are that there is a war between all radical Muslim groups and us" two questions. 1) Who is this homogenous group "us"? I'd say people so insecure of themselves that they easily scar. 2) Have you been watching the news in the last week? Fatah against Hamas? aren't these two radical islamic groups? Where is "us" in that fight? The war is as much internally in Islam as with "us" and "them".
Actually, no: Fatah is not an Islamic group. It is Arab nationalist in origin. *Lunatic*, n. One whose delusions are out of fashion.
Thanks DoDo, I stand corrected on the facts... head in shame, proving my own ignorance... again.
The "us" I refer to is what we would call western civilization (U.S., East and West Europe, and the Far East). The "us" also includes all forms of Islam that conflict with this perverse radical form of Islam. The fight between Hamas and Fatah is interesting. As already stated, Fatah is a secular organization that never promoted itself on religious principles. Hamas has. It is an organization supported by Iran as is Hezbollah. Both are acting as armies for Iran's interest in the region. "Us" in this fight is with the secular Fatah.
"Fatah is a secular organization that never promoted itself on religious principles" False. When their popularity faltered as Israel shat on them during the death of the peace process, some of them tried to promote themselves by forming the Al Aksa Martyr Brigades. *Lunatic*, n. One whose delusions are out of fashion.
The 'western world' I would count myself as part of believes in democracy. Supporting Fatah in an armed insurrection against an elected Hamas government, for no better reason than a turf war, has nothing to do with it. It has everything to do with fundamentalist and tribal thinking, however. *Lunatic*, n. One whose delusions are out of fashion.
"As did the various fascist groups in the 1930's, these Islamic fascist groups also want ultimate power and are fanatical enough to stop at nothing to achieve their goals" Well, that appears to run entirely contrary to their quoted words; it probably runs more along the lines of the quoted words of the PNAC. Any idiot can face a crisis - it's day to day living that wears you out.
If you feel that runs contrary to their quoted words, please provide some of their quoted words.
Please. You provide a quote from Hassan Nasrallah, a top Hamas leader, Iran's current President, and the current top ayatollah - or, for fun, indeed Bin Laden himself - that proves that they "want ultimate power and are fanatical enough to stop at nothing to achieve their goals". *Lunatic*, n. One whose delusions are out of fashion.
they never had any intention of fighting radical Islamism. they created radical Islamism, or at least nurtured and tended it. militarised states need a bogeyman. how else can they justify diverting the national wealth away from the needs of the people and into the pockets of arms dealers? time to re-visit "The Power of Nightmares." The difference between theory and practise in practise ...
I felt very embarrassed while reading this article. The person who wrote this seems educated. If that's the case, I feel even more embarrassed and there seems little point in trying to argue with him. I have a cousin in Los Angeles, a lawyer who could have written this but he was sounding like this long before 9/11. To me, this is a right wing rant, which gives no facts or analysis and properly belongs on Red State or Powerline. Hey, Grandma Moses started late!
I'm with you LEP - obviously (see comments above). Maybe it's because I'm a Londoner - that I moved to Nice.
"The NSA program of listening in on overseas telephone conversations from known terror suspects was also necessary." The problem is that the group of possible terrorist suspects has been drawn incredibly widely, so that a large number of people have had their rights interfered with.

"For those that criticize the Bush administration for usurping the Constitution and our laws with their efforts to secure this country I would say, 'Prove it,' because other than hyperbole and insinuations there has been no credible evidence of so-called 'lying.'" Well, the existence of Guantanamo Bay sort of proves you wrong there. It was a site that was specifically chosen to attempt to sidestep the constitution. By positioning the site there it was theorised that legal authorities would not have jurisdiction. It was also announced that Geneva convention rights did not apply to people captured. Either of these events would be enough to prove that the Bush administration was usurping the constitution.

"When the founding fathers finally created the form of federal government we have now, they envisioned the primary role of the President as the commander-in-chief to lead the army against outside enemies. In that role Bill Clinton was a failure and G.W. Bush has done his job. And that is why whenever the American people choose a President during a time of war they never elect an anti-war candidate." I think you have that backwards. Any good general has to know when not to use military force (as someone said, the problem is that if your only tool is a hammer, then every problem looks like a nail). President Clinton kept Saddam bottled up, and under him Al Qaeda was kept relatively well under control. However, under Bush we can see that although a couple of impressive-looking battles appear to have been won, strategically he has been a complete and total failure. By withdrawing forces from Afghanistan before the situation had been thoroughly resolved, he made Afghanistan more of a mess than it should now be. By putting a large part of the US army into an unnecessary war in Iraq, he has managed to damage the US army in large part, with the only result being that he has managed to vastly increase the pool of capable terrorists opposing the US. Any idiot can face a crisis - it's day to day living that wears you out.
". . . the group of possible terrorist subjects has been drawn incredibly widely, so that a large number of people have had their rights interfered with." Understandable conclusion, although we have seen no facts that suggest anyone's rights were violated in this program. Considering that on 9/2/01 our intelligence capabilities were limited and we had to assume we were going to be attacked again immediately, you must cast a wide net just to safeguard people. Anyone suggesting otherwise did not do so at that time and to suggest it in retrospect is disingenuous. "Gitmo and prisoners" This enemy has waged a conventional war. Without going into rules of warfare, it is easy enough to pick up. The enemy combatants were also not typical POWs under the Geneva Convention and the U.S. military had to create new rules in real time. I would agree that it has been five years and these prisoners need to be dealt with soon. As far as the Geneva Convention, please name one American POW in this war that our enemy has right now. "Clinton" Clinton bottled up Hussein how? He was shooting at our surveillance planes. He was cheating on the oil-for-food program. He was paying the families of suicide bombers in Israel. He was refusing to comply with any U.N. resolution. So how exactly was he bottled up? Al Qaeda was kept relatively well under control? Of course that does not count the rash of Al Qaeda attacks on U.S. interests going as far back as the '93 WTC bombing. Bin Laden was convinced as a result of Clinton's lack of response that the U.S. was a paper tiger and authorized the 9/11 attacks. From reducing our intelligence capabilities to agreeing with the airline industry NOT to move forward with the Gore Commission recommendations, he placed us in harms way. When Clinton left office the Middle East was a mess, with almost daily suicide bombings in Israel. If you are going to support this man please come with material facts rather than Clinton talking points.
Before I respond to this... interesting article, I need to ask this: Jerome, are you sure this guy is for real? He's not just some stranger who e-mailed a manifesto to you out of the blue?
I have been receiving his emails for a while - usually in the form of comments/reactions on my diaries. There's a very clear disagreement between us on many things, but he's always been civil and dialogue has been maintained. The text I used for this diary struck me as interesting, and representative of a very real strain of opinion in the US. While we disagree, it's provided in good faith and thus I felt it was worthy of a serious discussion. If we're confident we have the arguments to fight the points made (which I think we do), this is a case where there's a chance that they may be listened to if they are made without invective. Call me naive, or optimistic, if you will! In the long run, we're all dead. John Maynard Keynes
Oh, and Afghanistan was not like that either before the Mujahideen. But, of course, Carter and Brzezinski had to support the Mujahideen against the secularists, because the latter were - gasp - socialist. Can the last politician to go out the revolving door please turn the lights off?
the new (non-Hamas) Palestinian PM is a communist. Should be fun. In the long run, we're all dead. John Maynard Keynes
Bah, Abbas is complicit with foreign powers in keeping the winners of the last election out of power, and having been effectively ousted by the said winners, he now has overstepped his constitutional powers by appointing a cabinet without parliamentary approval. So, in effect, the Palestinian Authority is in a state of constitutional meltdown and Abbas and his new PM are Western puppets with as much legitimacy as Karzai and Allawi. Can the last politician to go out the revolving door please turn the lights off?
To the Stormy Present: Yes I am for real, although it depends on your definition of what real is. The purpose of what I wrote, Jerome, was a "snapshot" of what happened to me that day and how it affected me. I will be honest that by the time Clinton left I was no longer a fan. I felt he let us down by not doing anything in healthcare and alternative energy, and was basically co-opting Republican initiatives to salvage his legacy. That is not why I voted for him twice. "Because we went there to free their women." How could you not have lost it? I would have come up with a few one-liners. This is a very civil blog so I will restrain myself.
I shall have to go for "naive", I'm afraid. This seems like "discussing" with a creationist. My reaction would be (to misquote Dale Carnegie) to put a stop-loss order on it. Wish I had your optimism. -----sapere aude
"So we knew that there were more intended flights to be used by these terrorists." Ya, you KNEW all of this hours after the towers fell, what from watching the "what to think TV networks? There was indeed at least one flight from Canada which was stopped when the planes were grounded. The information on it was first reported in the media then not reported. I'm not going to try to find an actual reference to it because it would be quite difficult given the media blackout, and because I don't see what difference it makes one way or the other in this discussion. One additional plane means nothing. aspiring to genteel poverty
These are all American right wing talk radio (Rush Limbaugh) talking points. So either he isn't what he says he is, or after 9/11 he stopped getting his news from anywhere else and he is one of the brainwashed many. Who, thank goodness, are becoming fewer.
I'd even question the assumption that his news sources were rightwing AFTER 9/11. Why on earth would he be running down the street thinking "that damned Bill Clinton?" That's just nuts. Maybe we can eventually make language a complete impediment to understanding. -Hobbes
It has been pretty well established that the Clinton administration tried to get Bush interested in Al Qaeda [remember the USS Cole at the end of 2000] but they couldn't care less, hasn't it? And the take on Clinton's reaction to the first WTC bombing is all backwards. Clinton bombed Sudan and Afghanistan in retaliation for the African embassy bombings; he did take flak for killing civilians, and it did seem like he was trying to create a diversion from the Lewinsky case. Can the last politician to go out the revolving door please turn the lights off?
Richard Clarke's "Against All Enemies" lays it all out pretty well, I think, particularly the Bush administrations complete and utter disinterest in Al-Qaeda. "The basis of optimism is sheer terror" - Oscar Wilde
memory is a funny thing. I'm reading "Stumbling on Happiness" and part of it is about how we make memories. We never remember exactly what we were really thinking or doing. That's not how our brains work. Our brain stores snippets and then, when called upon to remember, our brain re-weaves the story. Part of the re-weaving often involves things that didn't happen that day but happened later. I wish I could remember where I read it, but someone did a story along this vein recently specifically about memories of 9/11 and showed how people's memories aren't completely accurate and how people remembered things as happening that day that they couldn't have actually known until later. It might have been diaried at Orange but I don't know how I'd find it. But you're exactly right. It seems unbelievable that someone actually would have been thinking that on that day unless they were already part of the Rush brainwashed group. So either he isn't what he says he was, or his memory is faulty and he's let later Rush infiltrate his memory.
Maybe I was thinking about the information in this article although I'm sure I didn't read it in this publication and I thought I read it more recently. A study conducted weeks after the terrorist attacks found large numbers of participants had rearranged the order of the day's events in their minds and had forgotten some of its key moments, said the study's author, Kathy Pezdek, a psychology professor at Claremont Graduate University. Although the study did find that the memories of people watching TV in New York were more accurate than the memories of people watching on TV in Hawaii. So proximity to the traumatic event did make a difference.
That's why I asked if Jerome knew for sure that this guy was whoever he claims to be. There are just too many rightwing talking points, too much misinformation, half-truths and outright untruths, too much attempted emotional manipulation. It just doesn't read genuine to me.
To be honest it reads like a composite. Can the last politician to go out the revolving door please turn the lights off?
I agree
Zombie-speak... depressing, but also there's (to my ear) a tone of desperation in it which I think reflects the wingnut echo chamber's fear of losing control of the media and majority opinion. Kinda like a guy lying about an affair that he thinks his partner is beginning to suspect: the lies get more and more elaborate and circumstantial, trying to compel belief out of the listener.

But I am interested in this persistent meme of "Arabs dancing in the streets for joy as the towers fell." The first time I heard this it was supposed to be Palestinians in the OT dancing for joy [which under the circs doesn't seem entirely incomprehensible], and I heard it from hardline Zionists -- it was supposed to indicate how inhuman and barbaric the P's really are and thus justify the Wall and the Occupation. And then there was controversy over the photo/video that allegedly showed this dancing-for-joy scene, that it was perhaps old video from some different event entirely... did this alleged dance for joy ever happen?

And now we have this very specific reference to Arab-Americans (Muslims, of course) "dancing in the streets in Paterson New Jersey" -- where does this come from and how is it attested? Is this another wholly manufactured rightwing meme like the "hippie protester chick spat on me as I got off the plane at San Francisco" Viet Nam urban legend [ably and wittily deconstructed in the documentary 'Sir No Sir' by Jerry Lembcke, who spent a chunk of his life chasing this myth through old microfiche and public records and oral histories and could not find even one credible attestation]? The specificity of the reference is very typical of urban legends; OTOH Paterson is home to a thriving Arab immigrant community and would be targeted for defamation by anti-Arab propaganda much as "Harlem" or "Watts" would be targeted by anti-Black propaganda.

How is this meme related to the "dancing Israelis with the white van" meme that ran around the internet in about the same time frame? Was that story attested in any credible way, or was it an antisemitic meme coined by the other wing of the wingnut meme-bombers?

This "dancing for joy because a bunch of people got killed" seems like one of those "and they kill babies and poison wells" accusations -- used to demonise the hated Other by "proving" that they are deficient in all human feeling. And yet, tickertape parades for returning "heroes" who have killed thousands or tens/hundreds of thousands are perfectly civilised occasions and a matter of national pride. Go figure. The difference between theory and practise in practise ...
I was curious about that too, and I checked it out. It's rumors, nothing more, based on the fact that there's a large Muslim community in South Paterson, and Paterson is a very bleak place to live at best. There is no solid evidence of any sort that this happened; no video, no photos, just second- and third-hand accounts by someone whose cousin's next-door-neighbor says he saw it. But you notice how "seamlessly" this person worked that alleged fact into his narrative, as if it was somehow a part of his actual memory of that day? As if he was personally confronted by dancing Muslims before he could even wipe the dust off? Of the Muslims that I know, and it's a fair number of them in a variety of countries, just about everyone was doing exactly the same thing -- watching their TV screens in abject horror, and hoping and praying that whoever did this wasn't a Muslim. Because they knew what would come next, and it's what came next -- the man who pulls up next to your car, looks through the window at you with your brown face and half-covered hair, and draws his finger across his throat, a slice-mime threat that nobody could possibly misunderstand. Because if the attackers wanted a culture war, as I suspect they did, there sure are folks willing enough to give it to 'em. And somehow they think that giving the terrorists what they want is "fighting back." The schoolboy logic of that response is just mind-boggling. You want to tear the world apart? Not if I get to it first, bub.
Stormy: Do you just sit there and fabricate stuff? Your reading comprehension needs work. The point about Arabs dancing in the streets of Paterson, New Jersey was extensively reported by local television, newspaper and radio stations. There have been and still are radical groups in that area, so it should have been no surprise. Regarding a previous comment concerning "right wing talking points," which points are you talking about? The nursery school kids whose fathers were killed? Or Tim watching people jump out of the north tower? Or is it the woman who poured her heart out after losing her young son? Which one was it you referred to? What I had initially sent to Jerome included some real names, and I asked him to leave them out to protect the privacy of the families.
The point about Arabs dancing in the streets of Paterson, New Jersey was extensively reported by local television, newspaper and radio stations. It was also extensively reported that there were car bombs exploding and helicopters crashing into buildings. None of it was true. Maybe we can eventually make language a complete impediment to understanding. -Hobbes
Actually no, I don't know what you mean by "right wing talking points." Be specific. Your problem is that you view the world political arena in very superficial terms which need to coincide with standard right or left views. That usually coincides with receiving information exclusively through blogs rather than through primary information sources and being able to draw your own conclusions. In other words, your comments reflect a left wing mouthpiece rather than thoughtful reflection on facts. Let me ask you to role play. You are elected President of the U.S. to succeed Bill Clinton. September 11 takes place. As President what actions and strategy would you take?
Do you want some mainstream media? CBS News: Clarke's Take On Terror: What Bush's Ex-Adviser Says About Efforts to Stop War On Terror (March 21, 2004): "In the aftermath of Sept. 11, President Bush ordered his then top anti-terrorism adviser to look for a link between Iraq and the attacks, despite being told there didn't seem to be one. ... Clarke also tells CBS News Correspondent Lesley Stahl that White House officials were tepid in their response when he urged them months before Sept. 11 to meet to discuss what he saw as a severe threat from al Qaeda." Can the last politician to go out the revolving door please turn the lights off?
Clarke was very pissed off for being passed over, rightly or wrongly, by Bush. However, if you read his book he paints a very bleak picture of how Clinton handled this. The point is that if Clinton had done his job and had not cowered, more than once, in confronting bin Laden, it would not have been an issue for his successor, and there is no denying that set of facts.
I was of course expecting you to say that Clarke had sour grapes. So, if we discount all the insiders that have criticised Bush after stepping down, what "evidence" is left? Bush's own claims that he's doing everything right? There's also no denying Bush sat on his ass for 7 1/2 months. What was Clinton supposed to have done about Bin Laden? He already took a lot of flak for the missile attacks on Sudan and Afghanistan after the embassy bombings. What do you suggest should have been done about the USS Cole? Can a president in his last 60 days in office do anything with substantial foreign policy implications? Why did Bush do nothing about it when he took office? Can the last politician to go out the revolving door please turn the lights off?
He sat on his ass on Al Qaeda. Can the last politician to go out the revolving door please turn the lights off?
Of a certain type... I'm not sure what type. The type that thinks "targeted assassinations" are, on balance, better than "major military incursions", because you kill your enemy and replace him or her with your protege--you hope!--without too much bloodshed. I'll be basic and say, "These people need to have good sex, often and slowly." They are TOO WOUND UP! All of 'em. The ultimate orgasm will occur when... Bin Laden is killed. Saddam is killed. Fundamentalism is killed. Kill kill kill, ilk ilk! The abstract is our enemy, perhaps, in that it allows such ridiculous violent proposals between people who, the mafia know, are redundant. Ineffectual. When Private finds his phone tapped, he won't mind. He isn't saying anything wrong, is he? After all, he's parroting the company line. What could be wrong in that? http://www.aclu.org/safefree/general/17564prs20050404.html Don't fight forces, use them R. Buckminster Fuller.
Didn't Bin Laden die of kidney failure at the end of 2001, anyway? Can the last politician to go out the revolving door please turn the lights off?
Let me ask you to role play. You are elected President of the U.S. to succeed Bill Clinton. September 11 takes place. As President what actions and strategy would you take? After being told "America is under attack" I would not have continued to listen to "My Pet Goat" for several minutes. Can the last politician to go out the revolving door please turn the lights off?
I don't feel a need to be polite about this. It's at least as detached from reality as the controlled demolition people are. The fact that it was the official propaganda line for so long doesn't mean it's related in any way to what actually happened, why it happened, or who was responsible for it. More than that, if someone wants to debate these talking points, the least they can do is post out in the open so we can respond to them directly.
Regarding debating out in the open: I would agree in general. There are sites that will allow you to select a random IP address. It may be possible to set up an ET account without a valid e-mail address - or using Jerome's. I don't use my real name. In particular, I am reluctant to condemn. Perhaps there really are reasons that I do not understand that make this the only possible or reasonable method of communication. I've learned that some things are just plain more complicated than I could have imagined. Perhaps this would be one example. aspiring to genteel poverty
That could be true. But I'm always suspicious of indirection, and the only reasons I can imagine why someone might not be able to post from home on a site like this under an anonymous user name, via an IP proxy if they need one, aren't good ones.
What more IP masking does one need than the fact that home internet service has dynamically assigned IP numbers that change with each connection? Also, the e-mail address used to create the ET account is not public, need not be traceable (or even exist after account creation), and it is not necessary to have another e-mail address attached to the account, let alone displayed. Can the last politician to go out the revolving door please turn the lights off?
I am saying that there is some doubt in my mind at this time. Perhaps other people who are more knowledgeable than I would be able to extinguish that doubt. I know that my IP address seems to stick around, sometimes for days or even longer. I haven't actually tried to figure out exactly when it changes, though I do know it changes. I am willing to bet that I can be tracked to ET through even a temporary IP address if there was a desire to do so by my service provider, unless they are very careful to erase their logs or direct them to /dev/null. Of course my service provider may refuse to release such information, but in the US I wouldn't bet very much on that. I have noted that there are services that provide random IP addresses. Come to think of it, I believe that at least one person has been tracked through such a service and charged with a crime. aspiring to genteel poverty
Come on, it's not like posting the kind of content in this diary is going to get anyone in trouble in the US. Can the last politician to go out the revolving door please turn the lights off?
:) Ok I can't think of a good response to that. aspiring to genteel poverty
This kind of stuff is e-mailed around the US every day. My right wing conservative uncle (who grew up in our family, which is otherwise almost completely left-leaning Democrats) sends this kind of stuff all the time. I (and most of the rest of the family) finally blocked him as spam so I didn't have to read it. I can't imagine any reason why he would need anonymity to post this - at least from a US point of view. It's what Rush Limbaugh spews from the radio on a daily basis. In a way I'm glad Jerome posted it though, so everyone can see what kind of crap inundates our e-mail boxes and radios over here. Not to mention our eardrums.

Two years ago I went to a festival in a small rural town and sat next to a local couple at a fish fry. While I waited for the person I was with to get us drinks, I eavesdropped on the local couple. For the next fifteen minutes I heard about how the country was in a terrible mess and it was all because of Bill Clinton, and how untrustworthy Bill Clinton was, and did you know that Bill Clinton stole all the china off of Air Force One when he left office? That just goes to show how evil Bill Clinton is. It was all I could do to not turn to them and say "turn off your radios, stop listening to Rush Limbaugh, google up some news reports and see how that story was debunked YEARS ago." But I didn't, because the person I was with would have been pissed at me for getting into (yet another) political argument.
agreed, but, on the flipside, posting on the potentially commie 'eurotrib' blog just might!
Trust me on this one. They know who I am.
Oh shit. And we don't.
That list is doing the rounds, but I think there's a simpler definition - which is that fascism is what you get when those in power abuse those out of power, not just for profit (which is bad enough) but simply because they want to, and they can. Fascism is the pathology of abuse for its own sake. There's always a smokescreen of justification and rationalisation based on how different and aggressive The Other is. But the core issue is a need to abuse and control other human beings, purely for the sake of abuse and control.
Glad you stayed up till 3am. Saved me a lot of writing. I can swear there ain't no heaven but I pray there ain't no hell. _ Blood Sweat & Tears
I also want to know how this: "When the founding fathers finally created the form of federal government we have now, they envisioned the primary role of the President as the commander-in-chief to lead the army against outside enemies. In that role Bill Clinton was a failure and G.W. Bush has done his job." ...squares with this: "But the war was fought on a shoestring and accomplished nothing more than deposing Hussein." *Lunatic*, n. One whose delusions are out of fashion.
Regarding the "dancing in the streets" all I can say is that your position that there is no evidence is blatantly false. If you wish to contact the local news stations in the New York/New Jersey area that would probably have archives of tapes. Regarding the Muslim Brotherhood, I would suggest you try going to a library and read some books on the subject. This organization exists in every middle east country and had provided the inspiration and arms to most of these groups. Hezbollah and Hamas are missionary armies for Iran. Hezbollah in particular has attacked outside the Middle East and has cells in western countries including the U.S. "When it comes to this War" - I did not mention Iraq - you did here. I am not referring to the Iraq War. Try and comprehend better. "How exactly was that supposed to "help"?" - It certainly cut down on suicide bombings in Israel. Libya certainly gave up their WMD without so much as a fight. At the time, Arafat suddenly wanted to be a peace partner. The toppling of Hussein was necessary (even Clinton wanted to do it while he was in office). The problem was the incompetant strategy beyond his toppling which is why thousands of Americans have been killed there. "We are less safe" - tell that to the thousands who were killed on 9/11 or the hundreds killed in London and Madrid. What was the last terror attack in Europe or the United States that you can remotely claim "we are less safe?" However I fully agree with your assessment as to why these radical religious organizations have been able to propagate increasing numbers of people into their ranks since the 1960s. Bombs, bullets, spies, and border security is a stop gap. The solution is far more complex and requires a greater investment in the Middle East with the goal of benefiting more of the populace and not lining the pockets of its leaders. The radical groups come to them with food, clothing, and medical care. We come to them with threats. Guess who wins their hearts and minds?
Again with this idea that only dead Americans matter. "We are less safe" - tell that to the thousands who were killed on 9/11 or the hundreds killed in London and Madrid. What was the last terror attack in Europe or the United States that you can remotely claim "we are less safe?" Huh? Are you even paying attention? I hate it when these civilisation warriors talk about Madrid as if they knew what they are talking about. Maybe private would like to read these two diaries: Can the last politician to go out the revolving door please turn the lights off?
Wow, glad you pulled that back up. "Life shrinks or expands in proportion to one's courage." - Anaïs Nin
Hamas was in the process of a similar transformation, but that transformation has likely been aborted by the developments of the last few months. When was Hamas a missionary army for Iran? I'd say never, and if I am not mistaken, even relations in terms of funding are more recent. *Lunatic*, n. One whose delusions are out of fashion.
That's a fair point. I don't think Hamas was ever a missionary army for Iran, and I didn't mean to say that it was. I was thinking more in terms of the transformation involved in becoming a part of the political process.
Just collecting some crumbs: "There was a report about some Arab-looking passengers escaping from grounded aircraft that had been set to depart New York, including the American Airlines flight I was originally booked on. So we knew that there were more intended flights to be used by these terrorists." Heh. So how is it that not a single trace of them was ever found, and they haven't attempted anything ever since? Especially if you believe this: "The Patriot Act was necessary in order to infiltrate all of the Muslim groups in this country and determine who were threats."

Now, as for the lumping together that stormy also addressed: "The enemy are all groups that have branched off the old Muslim Brotherhood from the 1920's, which include Al Qaeda, Hamas, and Hezbollah. Some are Shia based and others are Sunni based. While they may hate each other, what they have in common is hatred for a common enemy, which is us." No Shi'a group ever branched off the Muslim Brotherhood. Meanwhile, other Sunni Muslim militant fundamentalist groups, most notably groups with origins in Pakistan including the Taliban, have an independent source. Meanwhile, a lot of Islamists with similar views are US allies: most of the ex-Mujahedeen in the Northern Alliance that took over from the Taliban are no less woman-haters and strict observationists; the strongest parties in the current US-supported Iraqi government are a Shi'a fundamentalist party with Iraqi origins that used to have a long history of terrorism, including against the US embassy in Kuwait (Daawa), and a Shi'a fundamentalist party established by Iran's Khomeini from Iraqi refugees, which had/has a large militia (SCIRI and its Badr Brigades). The Saudi state finances mosques with Wahhabite fundamentalist preachers around the world, from oil money. I also note that in the eighties, an until then insignificant Hamas got big and strong against its secular PLO rivals with covert Israeli help: the Israelis, who didn't think Hamas could grow into a problem, hoped for divide-and-rule... while, as others mentioned, the US helped to eliminate a left-wing alternative in Iran when the CIA organised the overthrow of a democratically elected PM, and then helped the Shah's bloody dictatorship.

"Those were two enormous mistakes that have cost thousands of American lives." What about non-American lives, 700,000 of them? What about non-American eyewitnesses of war and terror, including by US forces? And what about non-American democratic will? (You seem to believe US public opinion support makes US actions in another country democratic...) *Lunatic*, n. One whose delusions are out of fashion.
What about non-American lives, 700,000 of them? I had an encounter with an old American [possibly a WWII veteran] while waiting in line at See's Candies in Riverside, CA with my mother around Christmas of 2003. The encounter started with him congratulating me for Aznar and ended with him saying that it would be justified to kill half the world's population to make America safer. Can the last politician to go out the revolving door please turn the lights off?
i think this site has finally reached the popularity point of being a target for the disinfo campaigners. i guess we've been taken seriously, a compliment perhaps! the two pincers of this are: supposedly rational attempts to justify dismantling of constitutional citizen protection measures set in place by wise leaders to avoid repetitions of tyranny, consisting mostly of application of selective memory concerning why so many people hold the usa in disfavour, due to the inhuman foreign policy choices since taking on the role of globocop... and comments like the one about wanting to see washington go up in a nuclear explosion (a sentiment that should be reason enough for banning a poster, imo, for its obviously unhinged tone), which may well be a giveaway as to how this site can be 'set up' to later be defamed as a haven for treasonous opinions. what i love about the posters here is their humour and desire to use reason and truth to counter the fascistic trends in many different governments... no other agenda but to see sane energy policies and to raise awareness about how politics and energy are siamese twins, and how both are about to have a major overhaul, long overdue. i sense some excellent brains hiving here to think of ways to make a future softer landing, and it makes me proud to see the internet being used so well. how we can arrive there without boosting the arms trade or embracing lunatic notions of revenge by death and destruction is what we largely gather here to discuss... pancakes and shoes and trains notwithstanding! so now we see the results... we are starting to make a little wave, and right on cue, enter stage right pseudo-rational justifications and extreme barking inflammatory bs as counterwave. for us to achieve true traction, we have to 'deal' with the strange attractor phenomenon... 'The history of public debt is full of irony. It rarely follows our ideas of order and justice.' Thomas Piketty
As someone who was clearly for years into the lunatic notion that a single state is most desirable in Israel, I am somewhat hesitant about your desire for banning. Fortunately that notion is no longer quite so lunatic, but I still seem to end up on the wrong side of sanity a significant number of times. While I believe that the nuclear comment in this thread is worthy of being troll rated, I kind of feel that way about the entire diary. In a way the two very nicely, unintentionally, complement one another. In that sense, they both take on much more (unintentional) meaning than either separate. Call it synergy. Gringo's comment is a comment I will miss. We need to put a human face on the violence we propose, whether it be Iraq, Iran, or Washington DC. The cycle of violence that I believe lasthorseman has bought into along with this diary is something that I believe does not hurt for us to see. It certainly helps to limit my own desire for extremism. This diary and lasthorseman clearly show exactly what trying to control violence with more extreme violence brings us - as his nom de plume so eloquently states. When it comes to how he presents his opinions I don't feel that he floods the site with them demanding that everyone buy into his opinions. It is not my choice though, and it is reasonable to talk about what people want this site to look like. I do feel a bit like an honorary European (or maybe that's a can-we-join-the-EU-too wanna-be), so perhaps it doubly isn't my call. aspiring to genteel poverty
Why ban the user when you can troll rate the comment into oblivion? Hate the trolling and love the troll, until they get tired and leave. Can the last politician to go out the revolving door please turn the lights off?
You are a better, more patient man than I. :) -----sapere aude
Not always. Can the last politician to go out the revolving door please turn the lights off?
Why ban the user when you can troll rate the comment into oblivion? Because violence is unacceptable, even in comments. It is not opinion open to any kind of debate. If we were all at a conference together and one of the attendees said about another that he wished someone would bash his head in with a chair, the reaction of the group would make a difference. There is an implicit threat there and the threat-maker should be asked to leave. If instead, everyone just hushes him and pretends like it didn't happen, then it has a chilling effect on everyone there. The person who was threatened would feel fearful, as would anyone who identifies with the proposed victim. Maybe we can eventually make language a complete impediment to understanding. -Hobbes
Bush on Islamocommunism:

President Bush, attending the dedication of a memorial to an estimated 100 million victims of communist regimes, yesterday compared the fight against radical Islam to the Cold War battle against totalitarian communism. In warning Americans that "evil is real and must be confronted," Bush also equated the Sept. 11, 2001, attacks on the World Trade Center and the Pentagon with the tyrannical rule imposed on residents of countries like China, North Korea and the former Soviet Union.

Frontpager Lawrence of Arabia over at Eteraz.org:

What's next? I will grant that the comparison of the ideology of the revolutionary movements present in the Middle East to post-WW2 Communism is an improvement. It is at least a better comparison than the poor comparison to fascism. At least Communism was transnational in its own self-description. And certainly, Mao and Stalin destroyed their countries in the mid-twentieth century, murdering freely, imprisoning arbitrarily and generally using fear and power to extend their rule. Their programs of national reform destroyed the heritage of their people. And Mao and Stalin had just about as much in common with Marx as the Taliban (for instance) does with most practitioners of Islam: each tyrant twisting the words of a prophet to justify the deaths of any and all who disagree with them.

But the fact that one day the Islamic terrorists can be fascists and the next day they are communists, one day the Nazis, the next day it's a Red Islam (not that Shariati minds), makes clear the extent to which the rhetoric is just that: rhetoric. Bush and company are no closer to understanding who and what they are fighting against today than they were the day before or will be tomorrow. As with all good Islamophobia, the rhetoric is not meant to identify the enemy so much as rally public opinion into a cohesive and deadly force. Bush and company are grasping at straws, desperately comparing their enemy to enemies of old in an effort to contain them, comprehend them and make the American people understand why Islam is such a threat to America (not the "good Muslims" of course. * wink, wink *). It's a major victory if government policy makers can tell you the difference between Sunni and Shia, let alone the differences between an Al Qaeda and the Muslim Brotherhood.

And finally, one must ask, will we allow our country, our governments to kill in the name of its own idols? Has the fanaticism of Bush been less deadly? The Goddess of Democracy has been the justification for the destruction of Iraq, and many within our government would like to build a new Temple to her in Iran as well. Her hands are red with blood and her priests are calling out for more victims.
Point of order: although Eteraz.org is an Islam-oriented site, Lawrence of Arabia is not a Muslim himself, he's a Catholic theologian.
Okay, this is sort of obvious, and it might have been pointed out already in here, but just in case:

WTC attacks occurred on: September 11, 2001
George W. Bush was sworn in as president on: January 20, 2001
George W. Bush's term expires on: January 20, 2009

Based on the information above, I conclude that George W. Bush was president on 9/11 2001. "G.W. Bush has done his job?" If the man had done his bloody job, we wouldn't be in this mess to begin with! "The basis of optimism is sheer terror" - Oscar Wilde
Also, Clinton's term expired in January 2001, but the first thing that crossed this person's mind on 11 September 2001 was (referring to Bill Clinton), "You SOB, you really did it now". Can the last politician to go out the revolving door please turn the lights off?
Especially given that nobody knew who had actually carried out the attacks at that point, or for what reason.
Remember that he did stay in a classroom to finish reading My Pet Goat; his entourage of trained SS (hmm, Secret Service/SS, is this a karmic message?) guards did not haul him immediately out of there even when America was clearly under "attack". www.augustreview.com
What was the specific timeline of events based on the 9/11 Commission? To expect a new President to come into office, have months go by before the Congress approves of his cabinet and NSA appointments (the last being the FBI in August), then try and figure out what threats are real or not and where they are, is ridiculous. As I said before, Clinton did not have the courage to pull the trigger, not once, but many times to kill bin Laden. This should NOT have been Bush's or Gore's problem if Clinton did his job.
Obviously you're going to believe what you want to believe. What I read was a Bush administration that ignored intelligence and openly scoffed pre-9/11 that terrorism was "a Clinton thing" that we didn't need to worry about. Your claim about Clinton not pulling "the trigger" has been repeatedly debunked. Maybe we can eventually make language a complete impediment to understanding. -Hobbes
And of course you get your information from biased blogs that filter information and promote biased viewpoints. Try looking at unvarnished facts and making your own conclusions.
My information was from mainstream news sources. There were no blogs that I was aware of in 2001. Perhaps you should take your own advice. Maybe we can eventually make language a complete impediment to understanding. -Hobbes
"that ignored intelligence and openly scoffed pre-9/11 that terrorism was "a Clinton thing" that we didn't need to worry about." - show me in the 9/11 Commission document where that is stated as you claim. "Your claim about Clinton not pulling "the trigger" has been repeatedly debunked." - - yeh by Clinton. Scheuer who headed Clinton's bin Laden unit in the CIA has a different take. In fact he outright calls him a liar and of course we know lying and Clinton are synonymous. Clinton had no less than 8 - 10 opportunities to ether kill or capture bin Laden. I wonder what was in those documents that Sandy Berger stole from the national archives that was so incriminating? Come at me with facts and not blog generated fiction.
You want facts? Executive Order 12333, signed by President Reagan, says "No person employed by or acting on behalf of the United States Government shall engage in, or conspire to engage in, assassination," which confirmed and expanded the bans on assassination laid down by his two presidential predecessors. Even if I accepted your figures (and without independent verification I certainly don't), maybe Clinton didn't have him killed because it was...er...against the law?
Sorry, the quote should be attributed to Metafilter. The confirming link to US government archives is my own. I like to check my facts.
Bush has had not a few opportunities to kill Bin Laden too. And lookit this. He not only hasn't done it, he's on the record as saying that it's not something he worries his beautiful mind over any more. No - wait - yes he does. No - wait again - he doesn't. But look - scary and frightening! And then you go blaming Bill Clinton and telling us we read bad, biased things. Feh. I'm not sure which is more embarrassing - the possibility that you may be getting paid for this, or the possibility that you're not.
What is embarrassing is your intellectual dishonesty. Had Clinton done his job, then all your rants over Bush this and that would be moot. Did Bush's administration screw up Tora Bora? Yes. Have Rumsfeld and Cheney screwed up Iraq? No doubt. But Bush would not have had to deal with bin Laden had Clinton authorized bin Laden's capture or killing. Why is Clinton's lack of action so relevant today? Because his wife could be the next President, and will she bring back the same feckless approach to dealing with terrorists as her husband had?
Um, no. Mack. Sorry. Bush is a fuck up even without 9/11. Point of fact, he'd be perceived as a much larger fuck up if he didn't have the horribly misnamed piece of propaganda shit you keep drooling on about to cover his sorry ass. Here's a highly abbreviated list:

Fiscal Management: America is broke. No wait, we're worse than broke. In less than five years these borrow and spend-thrifts have nearly doubled our national debt, to a stunning $8.2 trillion. These are not your father's Republicans who treated public dollars as though they were an endangered species. These Republicans waste money in ways and in quantities that make those old tax and spend liberals of yore look like tight-fisted Scots. This administration is so incompetent that you can just throw a dart at the front page of your morning paper and whatever story of importance it hits will prove my point.

Katrina relief: Eleven thousand spanking new mobile homes sinking into the Arkansas mud. Seems no one in the administration knew there were federal and state laws prohibiting trailers in flood zones. Oops. That little mistake cost you $850 million -- and counting.

Medicare Drug Program: This $50 billion white elephant debuted by trampling many of those it was supposed to save. The mess forced states to step in and try to save their own citizens from being killed by the administration's poorly planned and executed attempt to privatize huge hunks of the federal health safety net.

Afghanistan: Good managers know that in order to pocket the gains of a project, you have to finish it. This administration started out fine in Afghanistan. They had the Taliban and al Qaeda on the run and Osama bin Laden trapped in a box canyon. Then they were distracted by a nearby shiny object -- Iraq. We are now $75 billion out of pocket in Afghanistan and its sitting president still rules only within the confines of the nation's capital. Tribal warlords, the growing remnants of the Taliban and al Qaeda call the shots in the rest of the country.

Iraq: This ill-begotten war was supposed to only cost us $65 billion. It has now cost us over $300 billion and continues to suck $6 billion a month out of our children's futures. Meanwhile the three warring tribes Bush "liberated" are using our money and soldiers' lives to partition the country. The Shiites and Kurds are carving out the prime cuts while treating the once-dominant Sunnis the same way the Israelis treat the Palestinians, forcing them onto Iraq's version of Death Valley. Meanwhile Iran is increasingly calling the shots in the Shiite region as mullahs loyal to Iran take charge.

Iran: The administration not only jinxed its Afghanistan operations by attacking Iraq, but also provided Iran both the rationale for and time to move toward nuclear weapons. The Bush administration's neocons' threats to attack Syria next only provided more support for religious conservatives within Iran who argued U.S. intentions in the Middle East were clear, and that only the deterrent that comes with nuclear weapons could protect them.

North Korea: Ditto. Also add to all the above the example North Korea set for Iran. Clearly once a country possesses nukes, the U.S. drops the veiled threats and wants to talk.

Social Programs: It's easier to get affordable -- even free -- American-style medical care, paid for with American dollars, if you are injured in Iraq, Afghanistan or are victims of a Pakistani earthquake, than if you live and pay taxes in the good old U.S.A.
Nearly 50 million Americans can't afford medical insurance. Nevertheless the administration has proposed a budget that will cut $40 billion from domestic social programs, including health care for the working poor. The administration is quick to say that those services will be replaced by its "faith-based" programs. Not so fast... "Despite the Bush administration's rhetorical support for religious charities, the amount of direct federal grants to faith-based organizations declined from 2002 to 2004, according to a major new study released yesterday.... The study released yesterday "is confirmation of the suspicion I've had all along, that what the faith-based initiative is really all about is de-funding social programs and dumping responsibility for the poor on the charitable sector," said Kay Guinane, director of the nonprofit advocacy program at OMB Watch."

The Military: Overused and over-deployed. Former Defense Secretary William Perry and former Secretary of State Madeleine Albright warned in a 15-page report that the Army and Marine Corps cannot sustain the current operational tempo without "doing real damage to their forces." ... Speaking at a news conference to release the study, Albright said she is "very troubled" the military will not be able to meet demands abroad. Perry warned that the strain, "if not relieved, can have highly corrosive and long-term effects on the military." With military budgets gutted by the spiraling costs of operations in Iraq and Afghanistan, the administration has requested funding for fewer National Guard troops in fiscal 2007 -- 17,000 fewer. Which boggles the sane mind since, if it weren't for the reserve/National Guard, the administration would not have had enough troops to rotate forces in and out of Iraq and Afghanistan. Nearly 40 percent of the troops sent to those two countries were from the reserve and National Guard.

The Environment: Here's a little pop quiz: What happens if all the coral in the world's oceans dies? Answer: Coral is the first rung on the food-chain ladder; so when it goes, everything else in the ocean dies. And if the oceans die, we die. The coral in the world's oceans is dying (called "bleaching") at an alarming and accelerating rate. Global warming is the culprit. Nevertheless, this administration continues as the world's leading global warming denier. Why? Because they seem to feel it's more cost effective to be dead than to force reductions in greenhouse gas emissions. How stupid is that? And time is running out.

Trade: We are approaching a $1 trillion annual trade deficit, most of it with Asia, $220 billion with just China -- just last year.

Energy: Record high energy prices. Record energy company profits. Dick Cheney's energy task force meetings remain secret. Need I say more?

Consumers: Americans finally did it last year -- they achieved a negative savings rate. (Folks in China save 10 percent, for contrast.) If the government can spend more than it makes and just say "charge it" when it runs out, so can we. The average American now owes $9,000 to credit card companies. Imagine that.

Human Rights: America now runs secret prisons and a secret judicial system that would give Kafka fits. And the U.S. has joined the list of nations that torture prisoners of war. (Shut up George! We have pictures!)

But all you want to talk about is 9/11 and your silly and hideously misnamed GWOT.
Here's a newsbrief: the 'global war on terror' is a pathetic joke, hyped and believed in by folks--particularly politicians-- who find it useful for various reasons. Generally speaking, morons gravitate to it like flies to shit because it covers their utter incompetence in other areas -- like dealing with hurricane disasters, say, or global warming. In many ways, the profound 'transformation' that supposedly overtook American policy after 9/11 is like a religious epiphany in which any thought of practical realities of actually, you know, governing, is subjugated to the demands of commemorating that most searing initial experience. Some mystics believe in that kind of stuff, which is fine. But it's no way to run a country. You're apparently a part of that 33% of this country that still think it is.
This is unquestionably the most incoherent comment I have read yet. First of all this post is about 9/11 or do you need to read things more than seven times. Second, show me where I supported the concept of a Global War on Terror? It was a wrongly labeled war from the outset. Third what does anything you just posted have to do with the subject? And finally, I would like to know if you believe we are or are not at war. Anyone who does not believe we are at war with radical Muslim groups who use terror as their primary tactic is either ignorant of events or choosing to play retail politics with people's lives.
Um no. My comment is specifically directed to this sentence which you wrote on this thread: If Clinton did his job then all your rants over Bush this and that would be moot. Is it the big words in my reply, or the really big sentences that you find incoherent?
The post has to do with 9/11 and not Katrina or the other issues you brought up. The point is basic. The day Bush took the oath of office, had Clinton done the most fundamental job of President (commander-in-chief), there would no longer be any need for a bin Laden unit in the CIA. And we are not talking about one lucky shot. We are talking about at least 8 certain chances of capture or kill. The issue is what would each of the current (or projected) list of Presidential candidates have done in the same situation? There will be one issue that will be paramount on people's minds in 2008 in the U.S., and before any discussion begins: what will the candidate do to make sure there are no more 9/11's?
"that ignored intelligence and openly scoffed pre-9/11 that terrorism was "a Clinton thing" that we didn't need to worry about." - show me in the 9/11 Commission document where that is stated as you claim. I never said it was a quote from the 9/11 Commission document. Yes, they ignored intelligence prior to 9/11: Yes, they scoffed: The department's [of Homeland Security's] creation was first proposed in the report of the bipartisan U.S. Commission on National Security/21st Century, commissioned by President Clinton and delivered to President Bush. "A direct attack against American citizens on American soil is likely," it warned. Like the pre-9/11 alarms by the CIA and the National Security Council's former counterterrorism chief, Richard Clarke, this was studiously ignored by the Bush administration, which had dismissed terrorism as a "soft" issue, a "Clinton thing." Maybe we can eventually make language a complete impediment to understanding. -Hobbes
Well, that and biased mainstream media news reports, from a biased press, spinning an extremely political event in a biased direction. Luckily we only ever get truth and honesty from the Right. And especially from Bush, who's famous for his stern and uncompromising attitude to terminological immorality. Honestly, I don't know what we'd do without him.
Aaah, so Bush wasn't incompetent, it's just that the US has a peculiar political system that effectively leaves the country without a functioning government nine months after the inauguration of the new president. I'm not buying that. "The basis of optimism is sheer terror" - Oscar Wilde
Do you think the assassination of Massoud was directly related to 9/11? Can the last politician to go out the revolving door please turn the lights off?
What do you mean by direct? *Lunatic*, n. One whose delusions are out of fashion.
Is the near simultaneity coincidental? Or was the assassination of Massoud intended to decapitate the most likely local ally of the Americans in case of retaliation? Can the last politician to go out the revolving door please turn the lights off?
The story I heard was the "payment" version, Bin Laden 'paying rent' to the Taleban by knocking out their top enemy, but I haven't checked on this for a long long time. *Lunatic*, n. One whose delusions are out of fashion.
Best article I found so far. *Lunatic*, n. One whose delusions are out of fashion.
" do some outside reading" and you recommend dKos. That I find humorous. The fact is when you sit on the editorial board of a news or public policy magazine you need to read extensive research and ignore the slanted views of blogs. I actually majored in politlcal science and have read more on certain subjects (including the rise of radical islam) than most and certainly much more than you. I would suggest that rather than recite the rants of the lunes on Dkos you may want to consider the library and getting to primary sources of information and history. I did mention in that post that I blamed U.S. Presidents (and policy) going back to Eisenhower. Why blame Clinton? Because at that moment (without knowing any facts) I knew his lack of action against Al Qaeda, that this was planned long in advance, and that Bush was only in office for 7 months. I voted for Clinton twice. I even met the guy once in hotel in Houston. I was one of those people who thought that Clinton would solve some of the major problems we have had. Two terms and he did not do anything of significance to accelerate development of alternative energies and less than nothing for healthcare. Instead he coopted Republican policy initiatives and made them his own so he could have a legacy. Sorry, but in fact Clinton was a failure as a President. Great used car salesman but never had the courage to face challenges. Unfortunately as we now know, he did not even have the courage to kill bin Laden. Sorry, but this should not have been Bush's or Gore's problem. I love how you people can only think in a single dimension and feel compelled to categorize people who think objectively in a certain box: either left wing or right wing. I don't fit those boxes.
kinda says it all. Maybe we can eventually make language a complete impediment to understanding. -Hobbes
Says what, Izzy? Says that I use my brain and spend time reading things other than lunatic blogs that dumb you down to a level of feckless followers. Been inside a library recently?
It says you identify yourself as being part of a different "group," which you've just confirmed by calling this venue a "lunatic blog" which begs the question -- what are you doing here? If you have no respect at all for the format or the participants, why are you interacting? Maybe we can eventually make language a complete impediment to understanding. -Hobbes
"recite the rants of the lunes on Dkos" - - Don't put words in my mouth Izzy. I said dKos is a lune blog. Obviously reading comprehension is a problem for you. This blog is far different in tone and intelligence.
This blog is far different in tone and intelligence. You are absolutely right. Here there is no place for handing out ad hominem attacks. So stop it. "Pretending that you already know the answer when you don't is not actually very helpful." ~Migeru.
You said: I use my brain and spent time reading things other than lunatic blogs As far as anyone reading could tell, this was not aimed at a specific blog. This comment came after you've been insulting 3 of the front page contributors to this blog. I believe anyone would find my interpretation reasonable. I did not put words in your mouth. If you want to be understood, you should quit attacking people for their reading comprehension and work on clarifying your communication. Maybe we can eventually make language a complete impediment to understanding. -Hobbes
Izzy - I believe I was the one attacked from the outset. You want to dish it out but can't take it yourself. If you want to make comments, try being specific and not use hyperbole and false characterizations. The latter is apparently a way for respondents to dismiss comments they cannot answer intelligently.
I've used neither hyperbole nor false characterizations. I presented a reasonable interpretation of your own words. You used the word "blogs" -- plural. I also explained my interpretation, politely, after you insulted my reading comprehension. So how do you reply? You don't say you were misunderstood and clarify. Instead, you attack again, framing my response as hyperbole, etc., making a childish "dish-it-out" accusation, and casting aspersions on respondents' intelligence. Do you think this displays a good faith effort to communicate? Maybe we can eventually make language a complete impediment to understanding. -Hobbes
I actually majored in political science and have read more on certain subjects (including the rise of radical islam) than most and certainly much more than you. What do you know? Some on this blog have a deep knowledge of history, some speak and read Arabic, some have lived (or still live) in the Arab world. Don't try to be condescending, it's beyond your capabilities. That you think that wahhabism and salafism are modern movements born of the Muslim Brotherhood shows that you still have a lot of basic reading and research to do about the rise of radical Islam you pretend to have studied... "Dieu se rit des hommes qui se plaignent des conséquences alors qu'ils en chérissent les causes" ("God laughs at men who complain of the consequences while cherishing the causes") Jacques-Bénigne Bossuet
The 'lunes' over at dKos include frontpagers from this site, leading politicians of the majority party in both the House and the Senate of the United States, journalists, essayists and professors from all over the world. Nevertheless, since you can't rise above your own quivering prejudices to click a link, allow me to introduce you to David Michael Green. He is the original author from whom the linked information was culled. He is a professor of political science at Hofstra University in New York. If you happen to disagree with his points, he is delighted to receive readers' reactions to his articles (mailto:[email protected]), but regrets that time constraints do not always allow him to respond. More of his work can be found at his website, www.regressiveantidote.net. Here's a link to the same material at a different site. And if you don't like that 'site', I can provide you with a few more links. Just let me know. Happy reading! Get back to me after you've managed to absorb the main points and we can further deconstruct your 'personal' view of 9/11.
Why am I spending time responding to a pathetic rant when I've work piled up on my desk.... I wish the brilliant Walt Kelly was still among us to skewer that lunatic and his lunatic views with the cartoony skills he used on Simple J Malarkey and the Jack Acids.
"pathetic rant" - tell that to the nursery school children of the dead parents. Can't have a terribly good job if you have time to do this during work hours.
Aw shut up, you idiot. I wish Steve Gilliard was still among us to expose your lunacy.
I agree on the second part, but let's avoid insults and getting personal on ET. *Lunatic*, n. One whose delusions are out of fashion.
What about the nursery school children whose parents died in a car crash? Or the millions of other people who lost someone close through some kind of accident or disease? You know, you are going to die. You, everyone you know and love, and everyone else too. So get over it already! What was so special about those who died on that often quoted date? Are their children any sadder than the ones whose parents died in a car crash? How is it helpful to anything to keep fixating on this one event? Why is this so important to you?
My guess is that for Private it symbolises the battle--they will try to attack us ("us" = the west) with nuclear weapons if they can. They want to kill us all. And 9/11 symbolises that.

For me (I must be in a minority of minorities here) it symbolised:

Twin Towers--attack the finance people (you are not safe, even though you're rich!)
Pentagon--attack the military (you think you are tough, we die as we attack your centre!)
Pennsylvania--(was supposed to be heading for the White House?)--attack the U.S. govt (you politicians are not safe!)

I don't see any interest in killing americans per se. I mean, if the hijackers wanted to terrorise, not the rich, the military, or the politicians, but ordinary people, they could have flown their planes into... say... the largest shopping mall in the area, or into the largest block of flats--attack the people (you american citizens are not safe in your homes!) That was the approach of the chechens when they blew up blocks of flats (unless the russian military/secret service/mafia/other planted the bombs)...

...so that's my guess why that date is important. It symbolises the association, for some, of themselves via their govt., military, and financial leaders--their way of life, a way of life (U.S. govt. financial military) that is, indeed, under attack. Some americans think that no matter what their govt. military or financial institutions do, they will still be attacked. From what I hear (my anecdotal evidence), most of the arabic world and latin america just want the yanquis to go home. They don't care about the lives of the average american any more than an american cares about the life of an average jordanian or ecuadorian.

As you (and I) don't hold the same philosophy as Private, we see death... and then we look at Iraq and we see... death. And we look at Nicaragua... death... Tibet... death. Bombs in London... death. Bombs in Spain... death. Bombs in Kenya... death. In Ethiopia... death. In Bali... death. In fact, the Bali bomb is, as far as I understand it, another example of an "against the people" bombing. It targeted a night club--young australians. "Don't come here: you are a target." There was a palestinian bombing of an "indie" night club in Tel Aviv ("Indie" being my description from what I saw on a report): "You who think you are 'politically safe' because you hold the correct opinions: you are targets too. There is no neutral space."

That is Private's position as I understand it: There is no neutral space--for us or against. black, white. Do or die. He needs to take some mushrooms and sit on a hill and stare at clouds for an afternoon... calm down. Lose the rage... "But all of those dead people!" All over the world! Dead people!

Don't fight forces, use them R. Buckminster Fuller.
He needs To be clear: In my opinion he or she needs... Don't fight forces, use them R. Buckminster Fuller.
Another emotional appeal argument followed by a personal insult. Please avoid that here. You should also be aware of timezones different from US ones, BTW. Lupin's comment was at 4h35m PM local time. *Lunatic*, n. One whose delusions are out of fashion.
....to post the ravings of a lunatic, especially so soon after the death of Steve Gilliard, a New Yorker, a historian, a man who understood 9/11 like few others, and who spent a considerable amount of time on his blog debunking and rebutting (in very forceful terms) the insanities that "Private" is expounding. I know it was never intended that way, but it feels like an insult to a great man recently departed, whom I miss very much. It saddens me that while Steve is no longer among us to articulate his views, creatures like "Private" crawl out from under their rocks and continue to spew their venom. Someone recently pointed out that the "signal-to-noise" ratio on ET was excellent; it has been lowered considerably today.
Means don't feed the trolls. We have far more important things to accomplish here through true debates. You will soon be mine, My Precious. "Life shrinks or expands in proportion to one's courage." - Anaïs Nin
Damn good comment.
Jerome, as always, your diary reflects great intelligence and morality. Unfortunately, it is based on a false narrative of 9/11. I challenge people to watch all 6 parts of these videos. I think you will conclude, as I have, that false images of planes were broadcast on 9/11. There are other reasons I believe this, not least of which is the complete lack of authenticated affirmative evidence for the official story. However, these videos alone, viewed as a whole, place great doubt on the verity of the broadcast images, and, I believe, prove their falsity. Cognitive dissonance is a powerful thing. Try not to let it keep you from viewing these videos objectively and critically.
https://ecommons.cornell.edu/browse?type=author&value=Shapiro%2C+Vadim&value_lang=en_US
• #### Boundary-Based Separation for B-rep $\rightarrow$ CSG Conversion
(Cornell University, 1991-08)
We have shown earlier that one of the most difficult steps in performing b-rep $\rightarrow$ CSG conversion for a curved solid object consists of determining a set of halfspaces that is sufficient for a CSG representation ...
• #### Chain Models of Physical Behavior for Engineering Analysis and Design
(Cornell University, 1993-08)
The relationship between geometry (form) and physical behavior (function) dominates many engineering activities. The lack of uniform and rigorous computational models for this relationship has resulted in a plethora of ...
• #### Real Functions for Representation of Rigid Solids
(Cornell University, 1991-11)
A range of values of a real function $f : E^{d} \rightarrow \Re$ can be used to implicitly define a subset of Euclidean space $E^{d}$. Such `implicit functions' have many uses in geometric and solid modeling. This ...
• #### Theory of R-functions and Applications: A Primer
(Cornell University, 1991-07)
An R-function is a real-valued function characterized by some property that is completely determined by the corresponding property of its arguments, e.g., the sign of some real functions is completely determined by the sign ...
https://arxiv.org/abs/0908.3433
# Nuclear physics for geo-neutrino studies
Abstract: Geo-neutrino studies are based on theoretical estimates of geo-neutrino spectra. We propose a method for a direct measurement of the energy distribution of antineutrinos from decays of long-lived radioactive isotopes. We present preliminary results for the geo-neutrinos from Bi-214 decay, a process which accounts for about one half of the total geo-neutrino signal. The feeding probability of the lowest state of Bi-214 - the most important for the geo-neutrino signal - is found to be $p_0 = 0.177 \pm 0.004\,(\mathrm{stat})\,^{+0.003}_{-0.001}\,(\mathrm{sys})$, under the hypothesis of Universal Neutrino Spectrum Shape (UNSS). This value is consistent with the (indirect) estimate of the Table of Isotopes (ToI). We show that achievable larger statistics and reduction of systematics should allow one to test possible distortions of the neutrino spectrum from that predicted using the UNSS hypothesis. Implications for the geo-neutrino signal are discussed.
Comments: 8 pages RevTex format, 8 figures and 2 tables. Submitted to PRC
Subjects: Nuclear Experiment (nucl-ex); Nuclear Theory (nucl-th); Geophysics (physics.geo-ph)
Journal reference: Phys.Rev.C81:034602,2010
DOI: 10.1103/PhysRevC.81.034602
Cite as: arXiv:0908.3433 [nucl-ex] (or arXiv:0908.3433v1 [nucl-ex] for this version)
## Submission history
From: Marcello Lissia
[v1] Mon, 24 Aug 2009 13:15:15 GMT (308kb)
https://lautarolobo.xyz/blog/february-of-fortran/

# February of Fortran
## What is Fortran?
Fortran is a 63 y.o. programming language developed by IBM for scientific and engineering applications. Its name is an acronym of FORmula TRANslation, and it is still in use for that purpose.
It’s a general-purpose programming language, but best suited for computationally intensive areas like computational physics, computational chemistry, high-performance computing and so.
Many programming languages were based on, or influenced by, Fortran. And it has received many updates over the years, the last one in 2018.
It was originally conceived as FORTRAN, all uppercase, in 1956. 5 updates later, in the 90s, it became Fortran. The update also introduced free-form source, inline comments, modules, recursive procedures, dynamic memory allocation and many other changes that make the language modern-er.
## How to Compile and Run a Fortran Program
Let’s say you want to compile and run your first Fortran program, like this one:
program HelloWorld
  implicit none
  print *, "hello world"
  write (*,*) "hello world"
end program HelloWorld

! it should return:
! hello world
! hello world
Well, you should do a series of things:
• Install gfortran, which is a Fortran compiler.
• Save your code with the .f90 extension (even if you are writing in Fortran 2015, .f90 is the standard file extension).
• Compile with gfortran (see the example below this list).
• Run your program as you would run any C program.
And that’s how you do it!
## What is Fortran Used For
Fortran is still in use in HPC (High Performance Computing). All the heavy mathematical number-crunching is probably done with Fortran. It's widely used in scientific computing from Chemistry and Physics to Astronomy and Mathematics.
I can almost hear the masses… why not Python?
In fact, you can use Python in those areas. But even if Python is a better choice in many cases, you wouldn’t use it in HPC, since Fortran is performant-er. It may take more time to write, but sometimes code performance means everything.
You know, even though Python has evolved, it wasn't born exclusively for Physicists and Mathematicians.
As I’ve read somewhere:
At the end of the day, Physicists are writing very different programs than Computer Scientists with very different goals and concerns.
You may say that Physicists are not willing to change, or that there’s a lot of Fortran legacy code out there, but even then, Fortran keeps being the best choice for some HPC projects, or Physics calculations.
But that doesn’t mean that you can only use Fortran. Other programming languages used in HPC are C and C++, both being faster than Fortran in many cases, or the somehow-new Julia that is slowly entering the market, is also faster than Fortran and it was developed by MIT exclusively for HPC and all scientific computing.
If you want to see a detailed comparison between Python, C++, and Fortran on scientific computing, check out this amazing paper.
https://www.ncbi.nlm.nih.gov/pubmed/17558773
# Biogas production from water hyacinth (Eichhornia crassipes (Mart.) Solms) grown under different nitrogen concentrations.
### Author information
1 Department of Civil Engineering, University of Moratuwa, Sri Lanka. [email protected]
### Abstract
This paper reports the biogas production from water hyacinth (Eichhornia crassipes (Mart.) Solms) grown under different nitrogen concentrations of 1-fold [28 mg/L of total nitrogen (TN)], 2-fold, 1/2-fold, 1/4-fold and 1/8-fold and plants harvested from a polluted water body. This study was carried out for a period of 4 months at ambient mesophilic temperatures of 30.3-31.3 degrees C using six 3-barreled batch-fed reactors with the innermost barrel (45 L) being used as the digester. There was no marked variation in the C/N ratios of the plants cultured under different nitrogen concentrations. The addition of fresh cow dung having a low C/N of 8 resulted in a significant reduction in the C/N ratios of the water hyacinth substrates. However, gas production commenced 3 days after charging the reactors and gas production rates peaked in 4-7 days. The volatile solids (VS) degradation and gas production patterns showed that in conventional single-stage batch digesters acidogenesis and methanogenesis of water hyacinth require retention times of around 27-30 days and 27-51 days, respectively. Substrates in the f-1 digester (i.e., the digester containing plants grown under 28 TN mg/L), having the lowest VS content of 45.3 g/L with the highest C/N ratio of 16, showed fairly high gas production rates consistently (10-27 days) with higher gas yields containing around 50-65% of CH4 (27-51 days). Moreover, the highest overall VS (81.7%) removal efficiencies were reported from the f-1 digester. Fairly high gas production rates and gas yields with fairly high CH4 contents were also noticed from the f-2 digester containing substrates having a C/N of 14 and the f-out digester (containing the plants harvested from the polluted water body) having the lowest C/N ratio of 9.7 with a fairly high VS content of 56 g/L. CH4 production was comparatively low in the f-1/8, f-1/4 and f-1/2 digesters having VS rich substrates with varying C/N ratios. We conclude that water hyacinth could be utilized for biogas production irrespective of the fact that the plants are grown under higher or lower nitrogen concentrations and that there is no necessity for the C/N ratio to be within the optimum range of 20-32 required for anaerobic digestion. Further, it is concluded that several biochemical characteristics of the substrates, besides the C/N ratio, significantly influence biogas production.
PMID: 17558773
DOI: 10.1080/10934520701369842
[Indexed for MEDLINE]
http://cs.nyu.edu/~wanli/dropc/

# Regularization of Neural Networks using DropConnect
Li Wan, Matthew Zeiler, Sixin Zhang, Yann LeCun, Rob Fergus
Dept. of Computer Science, Courant Institute of Mathematical Science, New York University
### Introduction
We introduce DropConnect, a generalization of Hinton's Dropout for regularizing large fully-connected layers within neural networks. When training with Dropout, a randomly selected subset of activations are set to zero within each layer. DropConnect instead sets a randomly selected subset of weights within the network to zero. Each unit thus receives input from a random subset of units in the previous layer. We derive a bound on the generalization performance of both Dropout and DropConnect.
[Figure: side-by-side diagrams of a No-Drop Network, a DropOut Network, and a DropConnect Network]
### Motivation
Training Network with Dropout:
Each element of a layer's output is kept with probability $$p$$, otherwise being set to $$0$$ with probability $$1-p$$. If we further assume a neural activation function with $$a(0)=0$$, such as $$tanh$$ and $$relu$$ ($$\star$$ is element-wise multiplication), then: $r = m \star a\left( Wv \right) = a\left( m\star Wv \right)$

Training Network with DropConnect:

A generalization of Dropout in which each connection, rather than each output unit, can be dropped with probability $$1-p$$: $r= a\left( \left( M\star W\right) v\right)$ where $$M$$ is the weight mask, $$W$$ are the fully-connected layer weights and $$v$$ are the fully-connected layer inputs.
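As an illustrative sketch only (plain NumPy, not the authors' implementation; the layer sizes, random seed and function names are assumptions made for this example), a DropConnect training-time forward pass samples a fresh binary mask over the weights:

import numpy as np

rng = np.random.default_rng(0)

def dropconnect_forward(W, v, p, a=np.tanh):
    # DropConnect masks individual weights; Dropout would mask outputs.
    M = rng.random(W.shape) < p          # Bernoulli(p) mask over connections
    return a((M * W) @ v)                # r = a((M * W) v)

W = rng.standard_normal((4, 3))          # toy fully-connected layer weights
v = rng.standard_normal(3)               # toy layer input
r = dropconnect_forward(W, v, p=0.5)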
### Mixture Model Interpretation
The DropConnect Network is a mixture model of $$2^{|M|}$$ neural network classifiers $$f(x;\theta,M)$$: $o = \mathbf{E}_{M}\left[ f\left(x;\theta,M\right) \right]=\sum_{M} p\left(M\right) f\left(x;\theta,M\right)$ It is not hard to show that stochastic gradient descent, with a random mask $$M$$ drawn for each training example, improves a lower bound on this mixture model.
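In this view, exact inference would have to average over all $$2^{|M|}$$ masks; a Monte Carlo approximation (a sketch reusing the dropconnect_forward helper above; the sample count S is an arbitrary choice) is:

def mixture_output(W, v, p, S=1000):
    # Estimate o = E_M[f(x; theta, M)] by averaging over sampled masks.
    return np.mean([dropconnect_forward(W, v, p) for _ in range(S)], axis=0)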
### Inference
Dropout Network Inference (mean-inference): $$\mathbf{E}_M\left[ a\left( M\star W\right)v \right]\approx a\left(\mathbf{E}_M\left[ \left(M\star W\right) v \right]\right) = a\left(pWv\right)$$ DropConnect Network Inference (sampling): $$\mathbf{E}_M\left[ a\left( M\star W\right)v \right]\approx \mathbf{E}_u\left[a(u)\right]$$ where $$u\sim \mathcal{N}\left( pWv, p\left(1-p\right)\left(W\star W\right)\left(v\star v\right)\right)$$, i.e. each neuron activation is approximated by a Gaussian distribution via moment matching.
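The sampling-based inference can be sketched the same way (again an assumption-laden NumPy illustration; Z, the number of Gaussian samples, is chosen arbitrarily):

def dropconnect_inference(W, v, p, a=np.tanh, Z=100):
    # Moment-matched Gaussian over the pre-activation u, then E_u[a(u)].
    mu = p * (W @ v)
    var = p * (1 - p) * ((W * W) @ (v * v))
    u = rng.normal(mu, np.sqrt(var), size=(Z,) + mu.shape)
    return a(u).mean(axis=0)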
### Experiment Results
Experiments with the MNIST dataset using a 2-layer fully connected neural network:

(a) Preventing overfitting as the size of the fully connected layers increases (b) Varying the drop-rate in a 400-400 network (c) Convergence properties of the train/test sets
Evaluating the DropConnect model for regularizing deep neural networks on various popular image classification datasets (test error, %):
DataSet          DropConnect  Dropout  Previous best result (2013)
MNIST            0.21         0.27     0.23
CIFAR-10         9.32         9.83     9.55
SVHN             1.94         1.96     2.80
NORB-full-2fold  3.23         3.03     3.36
### Implementation Details
Performance comparison between different implementations of the DropConnect layer on an NVidia GTX 580 GPU relative to a 2.67GHz Intel Xeon (compiled with the -O3 flag). Input and output dimension is 1024 and mini-batch size is 128 (you might not get exactly the same numbers with my code on your machine):
Implementation       Mask Weight  Total Time (ms)  Speedup
CPU                  float        3401.6           1.0 X
CPU                  bit          1831.1           1.9 X
GPU (global memory)  float        35.0             97.2 X
GPU (tex1D memory)   float        27.2             126.0 X
GPU (tex2D memory)   bit          8.2              414.8 X
Total Time includes: fprop, bprop and update for each mini-batch
Thus, an efficient implementation: 1) encodes connection information in bits; 2) uses aligned 2D memory bound to a 2D texture for fast queries of connection status. The texture memory cache hit rate of our implementation is close to $$90\%$$.
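Point 1 can be illustrated in NumPy (a toy sketch of the eightfold memory saving from bit-packing the mask, not the CUDA kernel itself; the mask size is an arbitrary assumption):

mask = rng.random((1024, 1024)) < 0.5    # boolean connection mask
packed = np.packbits(mask, axis=1)       # 1 bit per connection instead of 1 byte
print(mask.nbytes, packed.nbytes)        # 1048576 vs 131072 bytes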
### Why DropConnect Regularizes the Network
Rademacher Complexity of Model: $$\max |W_s| \leq B_s$$, $$\max |W| \leq B$$, $$k$$ is the number of classes, $$\hat{R}_{\ell}(\mathcal{G})$$ is the Rademacher complexity of the feature extractor, $$n$$ and $$d$$ are the dimensionality of the input and output of the DropConnect layer respectively: $\hat{R}_{\ell}(\mathcal{F}) \leq p\left(2\sqrt{k}dB_sn\sqrt{d}B_h\right)\hat{R}_{\ell}(\mathcal{G})$
Special Cases of $$p$$:
1. $$p=0$$: the model complexity is zero, since the input has no influence on the output.
2. $$p=1$$: it returns to the complexity of a standard model.
3. $$p=1/2$$: all sub-models have equal preference.
### Reference
Regularization of Neural Networks using DropConnect Li Wan, Matthew Zeiler, Sixin Zhang, Yann LeCun, Rob Fergus
International Conference on Machine Learning 2013 (10 pages PDF) Supplementary Material Slides
CUDA code (Sep-20-2013 update, changelog)
### Reproduce Experiment Results
The full project code is here in case you want to repeat some of the experiments in our paper. Please refer to here for how to compile the code. Some examples of how to run the code are here. Unfortunately, the code is a little bit unorganized and I might clean it up in the future. Important trained models and config files are also available here (Updated Dec-16-2013).
Zygmunt from FastML has successfully reproduced the experiment results on CIFAR-10 on the Kaggle CIFAR-10 leaderboard in his article Regularizing neural networks with dropout and with DropConnect.
A summary of questions and my answers for hacking my uncleaned code is Here.
http://www.maths.ed.ac.uk/node/348

# Finn Lindgren
Stochastic space-time models, non-trivial observation mechanisms, and practical inference
Many natural phenomena can be modelled hierarchically, with latent random fields taking the role of unknown quantities for which we may have some idea about smoothness properties and multiscale behaviour. The mechanisms generating observed data can have additional unknown properties, such as animal detection probabilities depending on the size of a group of dolphins, or temperature biases depending on the local terrain near a weather station. Combining these models and mechanisms often leads to non-Gaussian likelihoods or Bayesian posteriors, requiring careful thought to construct computationally efficient practical inference methods. I will discuss these issues in the context of some recent and ongoing work for spatially resolved animal abundance estimation and historical climate reconstruction.
http://www.seas.upenn.edu/~sweirich/types/archive/1997-98/msg00101.html | CFP: WORKSHOP ON PARALLELISM AND IMPLEMENTATION TECHNOLOGY FOR (CONSTRAINT) LOGIC PROGRAMMING LANGUAGES
[The Workshop on Parallelism and Implementation Technology for
(Constraint) Logic Programming Languages covers all subjects related
to the implementation of logic programming systems. The call for
papers may be of interest to TYPES readers who are interested in
presenting work on both the compile-time and run-time issues arising
in the design and implementation of typing systems for logic
programming languages and related languages. Examples of such
languages include G\"odel, Escher, Oz, Mercury, and Babel, among
others.]
WORKSHOP ON PARALLELISM AND IMPLEMENTATION TECHNOLOGY FOR
(CONSTRAINT) LOGIC PROGRAMMING LANGUAGES
(in conjunction with ILPS'97, Port Jefferson, USA)
October 12--17, 1997
One of the main areas of research in logic programming is the design
and implementation of sequential and parallel (constraint) logic
programming systems. This research goes broadly from the design and
specification of novel implementation technology to its actual
evaluation in real life situations. In the continuation of the series
of workshops on Implementations of Logic Programming Systems,
previously held in Budapest (1993), Ithaca (1994), Portland (1995),
and Bonn (1996), the ILPS'97 workshop Parallelism and Implementation
Technology for (Constraint) Logic Programming Languages will provide a
forum for ongoing research on the design and implementation of
sequential and parallel logic programming systems.
Papers from both academia and industry are invited. Preference will be
given to the analysis and description of implemented systems (or
currently under implementation) and their associated techniques,
problems found in their development or design, and steps taken towards
the solution of these problems.
TOPICS include, but are not limited to:
- standard and non-standard sequential implementation schemes
(e.g., generalization/modification of WAM, translation to C, etc.);
- implementation of parallel logic programming systems;
- balance between compile-time effort and run-time machinery;
- techniques for the implementation of different declarative
programming paradigms based on, or extending, logic programming
(e.g., constraint logic programming, concurrent constraint
languages, equational-logic languages);
- performance evaluation of sequential and parallel logic programming
systems, both through benchmarking and using real world applications;
- other implementation-related issues, such as memory management,
register allocation, use of global optimizations, etc.
The workshop will be held in conjunction with the 1997 International
Logic Programming Symposium, which will take place in Port Jefferson,
Long Island, NY, from October 12 to October 17 1997.
PAPER SUBMISSIONS
Authors willing to present their work are invited to submit an
extended abstract, or preferably a full paper, to the workshop
organizers by August 1, 1997. Authors will be notified of the
acceptance or rejection of their papers by September 1, 1997.
Papers must not exceed 15 pages (approximately 5000 words). The title
page should include the name, address, telephone number, and
electronic mailing address for each author, as well as a list of
keywords. A contact author should also be provided. Electronic
submission of the LaTeX document and related postscript figures is
strongly encouraged.
The electronic submissions should be in LaTeX or PostScript format.
LaTeX style files are available via WWW at
http://www.cs.nmsu.edu/lldap/ilps97.
We encourage the use of these formats.
At least one of the authors for each accepted paper is expected to
attend the meeting and present the work. The collection of accepted
papers will be made available at the workshop and published as a
Technical Report. The papers will also be available electronically
after the workshop at the aforementioned WWW addresses.
IMPORTANT DATES
Deadline for submission: August 1, 1997
Notification of acceptance: September 1, 1997
The exact dates and length of the meeting depend largely on the number
of papers received and the number of people expected to attend it.
Thus, we encourage anyone interested in attending the workshop to
submit a note by electronic mail to the workshop organizers before
July 1, 1997, at the address [email protected].
ORGANIZING COMMITTEE
Ines de Castro Dutra [email protected]
COPPE/Systems Engineering and Computer Science,
Federal Univ. of Rio de Janeiro
Enrico Pontelli [email protected]
Gopal Gupta [email protected]
LLDAP, New Mexico State University
Vitor Santos Costa [email protected]
Fernando Silva [email protected]
INFORMATION AND REQUESTS
Surface mail: Vitor Santos Costa
Workshop on Implementation Technology and Parallelism
Rua do Campo Alegre, 823
4150 Porto
Portugal
Fax: +351-2-6003654
ILPS'97: http://www.ida.liu.se/~ilps97
--------------------------------------------------------------------------
%%%%%%%
%%%%%%% LaTeX Source
%%%%%%%
\documentstyle[11pt]{article}
\pagestyle{empty}
\thispagestyle{empty}
\setlength{\parindent}{0mm}
\setlength{\parskip}{1mm}
\evensidemargin=0cm
\oddsidemargin=-0.8cm
\topmargin=-1.0cm
\textheight=27cm
\textwidth=17.8cm
%\columnwidth=\textwidth
\renewcommand{\thepage}{}
\begin{document}
\newcommand{\wksemail}{[email protected]}
\newcommand{\httpNM}{\mbox{http://www.cs.nmsu.edu/lldap/ilps97}}
\newcommand{\httpILPS}{\mbox{http://www.ida.liu.se/\~{}ilps97}}
\begin{center}
{\large
Call For Papers \\
{\bf %Post-ILPS'97
Workshop on {\bf Parallelism and Implementation Technology for
(Constraint) Logic Programming Languages}} \\
\vspace*{0.2cm}
(in conjunction with ILPS'97, Port Jefferson, USA)} \\
October 12--17, 1997
\vspace{0.3cm}
\end{center}
One of the main areas of research in logic programming is the design and
implementation of sequential and parallel (constraint) logic programming
systems. This research goes broadly from the design and specification of
novel implementation technology to its actual evaluation in real life
situations. In the continuation of the series of workshops on {\em
Implementations of Logic Programming Systems}, previously held in Budapest
(1993), Ithaca (1994), Portland (1995), and Bonn (1996), the ILPS'97
workshop Parallelism and Implementation Technology for (Constraint) Logic
Programming Languages will provide a forum for ongoing research on the
design and implementation of sequential and parallel logic programming
systems.
Papers from both academia and industry are invited. Preference will be
given to the analysis and description of implemented systems (or
currently under implementation) and their associated techniques,
problems found in their development or design, and steps taken towards
the solution of these problems.
\noindent
Topics include, but are not limited to:
\begin{itemize}
\item standard and non--standard sequential implementation schemes
(e.g., generalization/modification of WAM, translation to C, etc.);
\item implementation of parallel logic programming systems;
\item balance between compile--time effort and run--time machinery;
\item techniques for the implementation of different declarative
programming paradigms based on, or extending, logic programming
(e.g., constraint logic programming, concurrent constraint
languages, equational--logic languages);
\item performance evaluation of sequential and
parallel logic programming systems, both
through benchmarking and using real
world applications;
\item other implementation--related issues, such as memory management,
register allocation, use of global optimizations, etc.
\end{itemize}
\noindent The workshop will be held in conjunction with the 1997
International Logic Programming Symposium, which will take place in Port
Jefferson, Long Island, NY, from October 12 to October 17 1997.
\medskip
\begin{center}
{\bf Paper Submissions}
\end{center}
\noindent Authors willing to present their work are invited to submit an
extended abstract, or preferably a full paper, to the workshop
organizers by {\bf August 1, 1997}. Authors will be notified of the
acceptance or rejection of their papers by {\bf September 1, 1997}.
Papers must not exceed 15 pages (approximately 5000 words). The title page
should include the name, address, telephone number, and electronic mailing
address for each author, as well as a list of keywords. A contact author
should also be provided. Electronic submission of the LaTeX document and
related postscript figures is {\em strongly\/} encouraged.
The electronic submissions should be in \LaTeX\ or PostScript format.
\LaTeX\ style files are available via WWW at \\
\centerline{\tt \httpNM}
\noindent
We encourage the use of these formats.
% If electronic
%submission is not possible, a copy of the paper should be sent by
%surface mail to
%\begin{quotation}
% \noindent
% V\'{\i}tor Santos Costa \\
% ILPS'97 Workshop \\
% LIACC --- Universidade do Porto \\
% Rua do Campo Alegre, 823\\
% 4150 Porto \\
% Portugal
%\end{quotation}
At least one of the authors for each accepted paper is expected to
attend the meeting and present the work. The collection of accepted
papers will be made available at the workshop and published as a
Technical Report. The papers will also be available
electronically after the workshop at the aforementioned WWW addresses.
\pagebreak
\begin{center}
{\bf Important dates}
\end{center}
\begin{center}
\begin{tabular}{ll}
Deadline for submission: & August 1, 1997 \\
Notification of acceptance: & September 1, 1997 \\
% Camera-ready papers due: & July 26, 1996 \\
% Workshop: & September 5 or 6, 1996
\end{tabular}
\end{center}
The exact dates and length of the meeting depend largely on the number
of papers received and the number of people expected to attend it.
Thus, we encourage anyone interested in attending the workshop to
submit a note by electronic mail to the workshop organizers before
{\bf July 1, 1997}, at the address {\tt \wksemail}.
\medskip
\medskip
\begin{center}
{\bf Organizing Committee}
\end{center}
\begin{center}
\begin{tabular}{ll}
In\^es de Castro Dutra & {\tt [email protected]}\\
& COPPE/Systems Engineering and Computer Science\\
& Federal Univ. of Rio de Janeiro\\
Enrico Pontelli & {\tt [email protected]}\\
Gopal Gupta & {\tt [email protected]}\\
& LLDAP, New Mexico State University \\
V\'{\i}tor Santos Costa & {\tt [email protected]} \\
Fernando Silva & {\tt [email protected]} \\
\end{tabular}
\end{center}
\begin{center}
{\bf Information and Requests}
\end{center}
\begin{tabular}{ll}
{\bf E-mail address:} & {\tt \wksemail}
\\[0.5em]
{\bf WWW addresses:} & {\tt \httpNM} \\[0.5em]
{\bf Surface mail:} & V\'{\i}tor Santos Costa \\
& Workshop on Implementation Technology and Parallelism \\
& LIACC --- Universidade do Porto \\
& Rua do Campo Alegre, 823\\
& 4150 Porto \\
& Portugal\\
{\bf Fax:} & +351-2-6003654 \\[0.5em]
{\bf ILPS'97:} & {\tt \httpILPS}
\end{tabular}
\end{document}
http://scikit-learn.org/dev/modules/generated/sklearn.preprocessing.label_binarize.html | # sklearn.preprocessing.label_binarize¶
sklearn.preprocessing.label_binarize(y, classes, neg_label=0, pos_label=1, sparse_output=False)[source]
Binarize labels in a one-vs-all fashion
Several regression and binary classification algorithms are available in scikit-learn. A simple way to extend these algorithms to the multi-class classification case is to use the so-called one-vs-all scheme.
This function makes it possible to compute this transformation for a fixed set of class labels known ahead of time.
Parameters:

y : array-like
    Sequence of integer labels or multilabel data to encode.
classes : array-like of shape [n_classes]
    Uniquely holds the label for each class.
neg_label : int (default: 0)
    Value with which negative labels must be encoded.
pos_label : int (default: 1)
    Value with which positive labels must be encoded.
sparse_output : boolean (default: False)
    Set to true if output binary array is desired in CSR sparse format.

Returns:

Y : numpy array or CSR matrix of shape [n_samples, n_classes]
    Shape will be [n_samples, 1] for binary problems.

See also:

LabelBinarizer
    Class used to wrap the functionality of label_binarize and allow for fitting to classes independently of the transform operation.
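For instance, a minimal sketch of the sparse output path (our own example, not part of the official documentation):

>>> from scipy import sparse
>>> from sklearn.preprocessing import label_binarize
>>> Y = label_binarize([1, 6], classes=[1, 2, 4, 6], sparse_output=True)
>>> sparse.issparse(Y)  # the result is a CSR matrix rather than a dense array
True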
Examples
>>> from sklearn.preprocessing import label_binarize
>>> label_binarize([1, 6], classes=[1, 2, 4, 6])
array([[1, 0, 0, 0],
[0, 0, 0, 1]])
The class ordering is preserved:
>>> label_binarize([1, 6], classes=[1, 6, 4, 2])
array([[1, 0, 0, 0],
[0, 1, 0, 0]])
Binary targets transform to a column vector
>>> label_binarize(['yes', 'no', 'no', 'yes'], classes=['no', 'yes'])
array([[1],
[0],
[0],
[1]])
https://miamioh.edu/cec/academics/index.html | For the May 2016 graduates surveyed before graduation, 81% had jobs, were in the military, or were enrolled in graduate/professional schools. The average salary reported was approximately $63,000 (range $39,000 - $92,000).
https://physics.stackexchange.com/questions/435146/energy-of-an-object | Energy of an object
When an object goes up, we say that it gained potential energy, but it is doing positive work on the earth, so it should lose energy. Please correct me.
• Who's doing the work? – user191954 Oct 17 '18 at 16:02
• The object which is going up is doing the work. – user64348 Oct 18 '18 at 8:27
• @user64348 Don't you mean something or someone is doing work against gravity to raise the object? – Bob D Oct 21 '18 at 22:40
When an object goes up we say that it gained potential energy
Yes.
but it is doing positive work on the earth
Make sure you get the signs right. Potential energy is negative mgh.
The positive work you do is offset by the negative PE. The total energy is zero before, during and after the move.
When an object goes up in the earth’s gravitational field it loses KE and gains PE. The work done on the object is negative because the force is in the opposite direction of the motion.
By Newton’s third law there is an equal and opposite force on the earth. However, because the earth does not move no work is done on the earth.
When an object of mass $$m$$ goes up near the surface of the earth, something or somebody else is doing work on the object to raise it. The object itself is not doing work on the earth and it is not losing energy. The something or somebody doing the work on the object against gravity to raise it is losing energy, but it is transferring that energy to the object in the form of increased gravitational potential energy.
There is no need for there to be a net increase or decrease in kinetic energy. The key is that the object needs to begin at rest and end at rest. In order to accomplish this, we initially need to apply an external upward force, $$F_{ext}$$, slightly greater than $$mg$$, to give it a small upward acceleration $$a$$. Let's say we do this for a brief time $$dt$$ and therefore over a short distance $$dh$$. The mass thereby attains a small velocity $$v=a\,dt$$, a small increase in kinetic energy of $$\tfrac{1}{2}m(a\,dt)^2$$, and a small increase in the potential energy of the mass $$m$$ of $$(mg)\,dh$$. We now immediately reduce our upward force so that it equals the downward gravitational force. The mass is now rising at constant velocity $$v$$ and so there is no subsequent change in KE; however, its potential energy keeps increasing since the mass continues to rise. The work to accomplish this increase in potential energy is due to our constant application of an upward force equal to the force of gravity.
As we approach point 2 we are still left with the small kinetic energy. Therefore, prior to reaching point 2, we reduce the external upward force, $$F_{ext}$$, to slightly less than $$mg$$, to give it a small negative acceleration. We do this for sufficient time to bring the mass to rest at point 2, a height $$h$$ above point 1. During this period gravity now does a small amount of negative work, resulting in a change in kinetic energy of $$-\tfrac{1}{2}m(a\,dt)^2$$. Consequently the total change in kinetic energy going from point 1 to point 2 is zero. Since there is no loss in height during this period, there is no loss in potential energy.
The end result in going from 1 to 2 is an increase in gravitational potential energy of
$$\int_1^2 (mg)dh = (mg)h$$.
with no net change in kinetic energy. This increase in potential energy comes, of course, from the external agent that did work on the object.
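(A quick numeric check of this bookkeeping in Python — our own illustration with made-up values, not part of the original answer:)

m, g, h = 2.0, 9.8, 5.0       # kg, m/s^2, m: arbitrary illustrative values
work_by_agent = m * g * h     # work done by the agent raising the mass quasi-statically
delta_pe = m * g * h          # gain in gravitational potential energy
delta_ke = 0.0                # the mass starts and ends at rest
print(work_by_agent == delta_pe + delta_ke)  # True: energy is transferred to the object, not lost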
Hope this helps.
https://st.hujiang.com/topic/161520226797/ | # Hujiang Clubs
## [Talcum Powder] English Music On Air ◆ Rather Be
conlylam (molly りん)
We're a thousand miles from comfort,
we have traveled land and sea
But as long as you are with me,
there's no place I rather be
I would wait forever, exulted in the scene
As long as I am with you,
my heart continues to beat
With every step we take, Kyoto to The Bay
Strolling so casually
We're different and the same,
gave you another name
Switch up the batteries
If you gave me a chance I would take it
It's a shot in the dark but I'll make it
Know with all of your heart, you can't shame me
When I am with you, there's no place I rather be
N-n-n-no, no, no, no place I rather be
N-n-n-no, no, no, no place I rather be
N-n-n-no, no, no, no place I rather be
We staked out on a mission to find our inner peace
Make it everlasting so nothing's incomplete
It's easy being with you, sacred simplicity
As long as we're together, there's no place I rather be
With every step we take, Kyoto to The Bay
Strolling so casually
We're different and the same, gave you another name
Switch up the batteries
If you gave me a chance I would take it
It's a shot in the dark but I'll make it
Know with all of your heart, you can't shame me
When I am with you, there's no place I rather be
N-n-n-no, no, no, no place I rather be
N-n-n-no, no, no, no place I rather be
N-n-n-no, no, no, no place I rather be
When I am with you, there's no place I'd rather be
Hmmmmmmmmmm, Hoooooooooo
Be (9x)
Yeah-e-yeah-e-yeah-e-yeah-e-yeah, yeah, yeah
If you gave me a chance I would take it
It's a shot in the dark but I'll make it
Know with all of your heart, you can't shame me
When I am with you, there's no place I rather be
N-n-n-no, no, no, no place I rather be
N-n-n-no, no, no, no place I rather be
N-n-n-no, no, no, no place I rather be
When I am with you, there's no place I'd rather be
2. Extra knowledge: do you know what these mean?
To grasp at
Will-o'-the-wisp
Falter
Nagging
Selfie
Unfriend
Hashtag
Providence
A wild goose chase
Apt
Grim
• There are quite a few English-song classes on CCtalk, but it is hard to grab the mic; what if you are tone-deaf and have no idea what to do; you want to follow the host through a whole song, even over a backing track; you want to learn the hottest English songs; you want to know when swallowed or linked sounds make you sound classier; you want to pick up useful expressions from English songs... What to do? To solve all of the above, join English Music On Air~ Hosts for the speaking activities are being recruited; if interested, please contact the founder, Shiyu.
https://twiki.ace.fordham.edu/bin/rdiff/Sandbox/WebHome?type=history | # Difference: WebHome (1 vs. 6)
#### Revision 6 2013-11-13 - TWikiContributor
Line: 1 to 1
%DASHBOARD{ section="banner" image="https://twiki.ace.fordham.edu/pub/TWiki/TWikiDashboardImages/nasa-airfield.jpg"
Line: 15 to 15
<--===== TEST TOPICS ============================================-->
%DASHBOARD{ section="box_start" title="Create Test Topics"
Added:
>
>
contentstyle="overflow: hidden; margin-right: -10px;"
}%
Create a new document by name:
Line: 27 to 28
Changed:
<
<
Create a new auto-numbered test topic:
>
>
Create auto-numbered test topic:
Line: 44 to 45
<--===== TIP OF DAY ============================================-->
%DASHBOARD{ section="box_start"
Changed:
<
<
title="Tip of Day" contentstyle="overflow: hidden;"
>
>
title=" TWiki Tip of the Day"
}%
Changed:
<
<
TWiki Tip of the Day
SmiliesPlugin emoticons
Smilies are common in e mail messages and bulletin board posts. They are used to convey an emotion, such... Read on
>
>
TWiki Tip of the Day
Linking to a file attachment
One can create a link to a file attachment using one of the following TWikiVariables, % ATTACHURL%... Read on
<--===== NEW USERS ============================================-->
Line: 1 to 1
Changed:
<
<
# Welcome to the Sandbox web
Use this web to try out TWiki. Everybody is invited to add or delete some stuff. It is recommended to walk through the TWiki Tutorial to learn the basics of TWiki. It is good practice to sign your contributions with your WikiName and date.
## Test Topics
>
>
Welcome to the Sandbox web
<--===== OVERVIEW ============================================-->
Overview
Use this web to try out TWiki. Go ahead and add or delete some stuff. Walk through the TWiki Tutorial to learn the basics of TWiki. We recommend to sign your contributions with your WikiName and date, which is done automatically when you create a topic or add a comment.
<--===== TEST TOPICS ============================================-->
Create Test Topics
Changed:
<
<
Create a new document by name: (Use a topic name in WikiNotation)
>
>
Create a new document by name:
Use a WikiWord for automatic linking.
Added:
>
>
Changed:
<
<
Create a new auto-numbered test topic:
>
>
Create a new auto-numbered test topic:
Changed:
<
<
>
>
Changed:
<
<
## Sandbox Web Utilities
>
>
<--===== RECENT CHANGES ============================================-->
<--===== TIP OF DAY ============================================-->
Tip of Day
TWiki Tip of the Day
SmiliesPlugin emoticons
Smilies are common in e mail messages and bulletin board posts. They are used to convey an emotion, such... Read on
<--===== NEW USERS ============================================-->
<--===== WEB UTILITIES ============================================-->
Sandbox Web Utilities
<--===== END ============================================-->
Changed:
<
<
>
>
Line: 1 to 1
# Welcome to the Sandbox web
Changed:
<
<
The Sandbox web is the sandbox you can use for testing. Everybody is welcome to add or delete some stuff. It is recommended to walk through the TWikiTutorial to get a jumpstart on the TWiki tool. A good rule of thumb is to add at the end of the page and sign and date it with your WikiName.
>
>
Use this web to try out TWiki. Everybody is invited to add or delete some stuff. It is recommended to walk through the TWiki Tutorial to learn the basics of TWiki. It is good practice to sign your contributions with your WikiName and date.
Line: 1 to 1
Added:
>
>
# Welcome to the Sandbox web
The Sandbox web is the sandbox you can use for testing. Everybody is welcome to add or delete some stuff. It is recommended to walk through the TWikiTutorial to get a jumpstart on the TWiki tool. A good rule of thumb is to add at the end of the page and sign and date it with your WikiName.
Line: 10 to 11
Deleted:
<
<
Create a new auto-numbered test topic:
Deleted:
<
<
Deleted:
<
<
Deleted:
<
<
## Recently changed topics
WebStatistics
Statistics for Sandbox Web Month: Topic views: Topic saves: File uploads: Most popular topic views: Top viewers: Top contributors...
2021-10-18 - 01:03 - TWikiGuest
WebNotify
Web Notification This is a subscription service to be automatically notified by e mail when topics change in this 1 web. This is a convenient service, so you do...
2019-05-29 - 19:01 - LabTech
TestTopic000
Title Article text. This is an example of the \LaTeX rendering possibilities using the LatexModePlugin. The singular value decomposition of a matrix %$A$% is defined...
2016-02-19 - 15:23 - TWikiGuest
TestPage
Title The singular value decomposition of a matrix %$A$% is defined as \begin{displaymath} A = U \Sigma V^H \end{displaymath} where %$U$% and %$V$% are both matrices...
2016-02-19 - 15:23 - TWikiGuest
CommentPluginExampleComments
Comments Example comment topic for CommentPluginExamples return Target comment output 1 TWikiContributor 03 Dec 2006 Target comment output 2 TWikiContributor...
2016-02-15 - 17:48 - TWikiAdminUser
WebCreateNewTopic
2015-06-10 - 21:22 - TWikiContributor
WebSearch
2015-05-15 - 21:32 - TWikiContributor
A more extensive changes list is available via Recent Changes.
Deleted:
<
<
Added:
>
>
Line: 1 to 1
# Welcome to the Sandbox web
The Sandbox web is the sandbox you can use for testing. Everybody is welcome to add or delete some stuff. It is recommended to walk through the TWikiTutorial to get a jumpstart on the TWiki tool. A good rule of thumb is to add at the end of the page and sign and date it with your WikiName.
Line: 10 to 10
Added:
>
>
Create a new auto-numbered test topic:
Added:
>
>
Added:
>
>
Line: 1 to 1
Added:
>
>
# Welcome to the Sandbox web
The Sandbox web is the sandbox you can use for testing. Everybody is welcome to add or delete some stuff. It is recommended to walk through the TWikiTutorial to get a jumpstart on the TWiki tool. A good rule of thumb is to add at the end of the page and sign and date it with your WikiName.
## Test Topics
Create a new document by name: (Use a topic name in WikiNotation)
Create a new auto-numbered test topic:
## Recently changed topics
WebStatistics
Statistics for Sandbox Web Month: Topic views: Topic saves: File uploads: Most popular topic views: Top viewers: Top contributors...
2021-10-18 - 01:03 - TWikiGuest
WebNotify
Web Notification This is a subscription service to be automatically notified by e mail when topics change in this 1 web. This is a convenient service, so you do...
2019-05-29 - 19:01 - LabTech
TestTopic000
Title Article text. This is an example of the \LaTeX rendering possibilities using the LatexModePlugin. The singular value decomposition of a matrix %$A$% is defined...
2016-02-19 - 15:23 - TWikiGuest
TestPage
Title The singular value decomposition of a matrix %$A$% is defined as \begin{displaymath} A = U \Sigma V^H \end{displaymath} where %$U$% and %$V$% are both matrices...
2016-02-19 - 15:23 - TWikiGuest
CommentPluginExampleComments
Comments Example comment topic for CommentPluginExamples return Target comment output 1 TWikiContributor 03 Dec 2006 Target comment output 2 TWikiContributor...
2016-02-15 - 17:48 - TWikiAdminUser
WebCreateNewTopic
2015-06-10 - 21:22 - TWikiContributor
WebSearch
2015-05-15 - 21:32 - TWikiContributor
A more extensive changes list is available via Recent Changes.
## Sandbox Web Utilities
http://commens.org/dictionary/entry/quote-syllabus-syllabus-course-lectures-lowell-institute-beginning-1903-nov-23-s-33 | # The Commens DictionaryQuote from ‘Syllabus: Syllabus of a course of Lectures at the Lowell Institute beginning 1903, Nov. 23. On Some Topics of Logic’
Quote:
Separation of Secondness, or Secundal Separation, called Precission, consists in supposing a state of things in which one element is present without the other, the one being logically possible without the other. Thus, we cannot imagine a sensuous quality without some degree of vividness. But we usually suppose that redness, as it is in red things, has no vividness; and it would certainly be impossible to demonstrate that everything red must have a degree of vividness.
Date:
1903
References:
EP 2:270
Citation:
‘Prescission’ (pub. 18.07.15-13:02). Quote in M. Bergman & S. Paavola (Eds.), The Commens Dictionary: Peirce's Terms in His Own Words. New Edition. Retrieved from http://www.commens.org/dictionary/entry/quote-syllabus-syllabus-course-lectures-lowell-institute-beginning-1903-nov-23-s-33.
Posted:
Jul 18, 2015, 13:02 by Mats Bergman
Last revised:
Jul 18, 2015, 17:56 by Mats Bergman
http://science.sciencemag.org/content/145/3635/932 | Reports
# Fluoride: Its Effects on Two Parameters of Bone Growth in Organ Culture
Science 28 Aug 1964:
Vol. 145, Issue 3635, pp. 932-934
DOI: 10.1126/science.145.3635.932
## Abstract
Bones of the forepaws of young rats were subjected to varying concentrations of fluoride ions in organ culture. The formation of DNA and protein synthesis were evaluated by measurements of the uptake of tritiated thymidine and C14-labeled proline. Fluoride concentrations as high as 10 to 20 parts per million had no demonstrable effect in vitro on these basic parameters of skeletal growth.
https://codegolf.stackexchange.com/questions/118078/appends-or-prepends-depends | # Appends or Prepends? Depends
Brain-flak turns one year old tomorrow! In honor of its birthday, we're having a PPCG style birthday party, where several users post brain-flak related questions! Help us celebrate! :)
Brain-flak is an esoteric language I wrote where all of the commands are brackets and all of the brackets must be fully matched. To borrow my own definition:
• For the purpose of this challenge, a "bracket" is any of these characters: ()[]{}<>.
• A pair of brackets is considered "matched" if the opening and closing brackets are in the right order and have no characters inside of them, such as
()
[]{}
Or if every subelement inside of it is also matched.
[()()()()]
{<[]>}
(()())
Subelements can also be nested several layers deep.
[(){<><>[()]}<>()]
<[{((()))}]>
• A string is considered "Fully matched" if and only if:
1. Every single character is a bracket,
2. Each pair of brackets has the correct opening and closing bracket and in the right order
In celebration of brain-flak's first birthday, today's challenge is about taking an unbalanced set of brackets, and determining what types of operations are needed to make it valid brain-flak.
• For example, (( is not valid brain-flak code, but if we append )) to it, it becomes (()), which is fully balanced, and therefore valid brain-flak. That makes this input appendable.
• Similarly, >} is not valid, but we can prepend {< to it to make {<>}, which is valid. That makes this input prependable.
• Some inputs are slightly more complicated. For example, )][({ cannot be made valid purely by appending or prepending. But it can be made valid by prepending [( and appending })]. Therefore, this input is both prependable and appendable.
• Lastly, some inputs can never be made valid brain-flak code by any combination of appending or prepending. For example, (> can never be made valid. (Prepending < creates <(>, and appending ) creates (>), neither of which are valid.) Therefore, this input is neither appendable nor prependable.
For today's challenge, you must write a program or function that takes a string of brackets and determines if the string is
appendable
prependable
both
neither
You may pick what values you use to represent each case. For example, outputting 1, 2, 3, 4, or 'a', 'p', 'b', 'n', or 1, 'foo', 3.1415, -17, or whatever is fine. As long as each output is distinct and consistent, that's fine. You must, however, clearly specify which output corresponds to which case.
You may return this value in whichever format is most convenient (for example, returning from a function, printing to STDOUT, modifying arguments, writing to a file, etc.).
You can assume that the input will never be valid brain-flak or empty.
# Examples
The following inputs are all prependable:
))
(((()()())))}
)>}]
()[]{}<>)
These are all appendable:
(({}{})
((((
([]()())(
{<<{
These are all both:
))((
>()[(()){
>{
And these are all neither:
)(}
{(((()()()))>
[}
((((((((((>
((((((((((<>()]
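For reference, here is a straightforward, ungolfed classifier sketch in Python (our own illustration for this post — the challenge does not prescribe an implementation, and the function name classify is ours):

PAIRS = {'(': ')', '[': ']', '{': '}', '<': '>'}

def classify(s):
    stack = []                 # closing brackets still owed for openers seen so far
    prependable = False
    for c in s:
        if c in PAIRS:
            stack.append(PAIRS[c])
        elif stack:
            if stack.pop() != c:
                return 'neither'   # a mismatched pair can never be repaired
        else:
            prependable = True     # unmatched closer: fixable by prepending its opener
    appendable = bool(stack)       # unmatched openers: fixable by appending closers
    if prependable and appendable:
        return 'both'
    return 'prependable' if prependable else 'appendable'  # input is never already balanced

assert classify('((') == 'appendable'
assert classify(')>}]') == 'prependable'
assert classify(')][({') == 'both'
assert classify('(>') == 'neither'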
As usual, this is code-golf, so standard loopholes apply, and the shortest answer in bytes wins!
This challenge is particularly difficult in brain-flak, so maximum brownie points to any and every answer written in brain-flak. :)
• maximum brownie points I think that offering maximum brownie points and cookies instead would encourage Brain-Flaking this challenge more than just brownie points, since I don't think it's trivial at all in any language, let alone Brain-Flak. :P – Erik the Outgolfer Apr 29 '17 at 8:44
• FYI: All the both tests end with open brackets, all the neither tests end with close brackets. – Jonathan Allan Apr 29 '17 at 12:03
• I would argue that 'both' is the wrong term. A string like ][ is not appendable, as nothing you can append can make it valid. Similarly, it's not prependable. It's... 'insertable'! You can insert it into a string to make the whole valid Brainflak. – orlp Apr 29 '17 at 15:06
• Are already balanced strings both or neither? – Wheat Wizard Apr 30 '17 at 13:22
• @wheatwizard Balanced strings won't be given as input. You can assume that the input will never be valid brain-flak or empty. – James Apr 30 '17 at 14:15
# Jelly, 33 32 37 35 34 bytes
bug found, horrible fix +5 bytes, better fix - 2 bytes, using a trick of Adnan's I saw here for -1 more.
“({[<“)}]>”Z;@WœṣF¥/µÐLO‘&2µIṀ>0ȯQ
Return values:
prepends [2]
appends [0]
both [2,0]
neither 1
(Invalid input returns spurious results, although valid Brain-flak returns [].)
Try it online! - a test suite (prints mushed representations, so 20 for [2,0], and ignores lines containing any -).
# Retina, 41 40 41 bytes

1 byte saved thanks to @MartinEnder

+`\(\)|\[]|{}|<>

[]})>]+
1
\W+
0
...+
01

Try it online!

• Prependable is 1
• Appendable is 0
• Both is 10
• None is 01

### Edits

• Gained 1 byte to fix bug noticed by @Neil

• []})>] saves a byte. – Martin Ender Apr 29 '17 at 8:04
• @MartinEnder Ah, it's because character sets can't be empty, thanks! – user41805 Apr 29 '17 at 8:14
• This doesn't work for all non-appendable inputs, for example (][). I think it can be fixed at a cost of one byte by changing 101 to ...+. – Neil Apr 29 '17 at 9:37
• @Neil Thanks for noticing the bug, I wonder if there are such cases with Both as well – user41805 Apr 29 '17 at 9:49
• No, I think 10 is the only valid combination for Both. – Neil Apr 29 '17 at 10:30

## Batch, 337 bytes

@echo off
set/ps=
:g
set "t=%s:<>=%
set "t=%t:()=%
set "t=%t:[]=%
set "t=%t:{}=%
if not "%t%"=="%s%" set "s=%t%"&goto g
set "s=%s:<=[%
set s=%s:>=]%
set s=%s:(=[%
set s=%s:)=]%
set s=%s:{=[%
set s=%s:}=]%
:l
if %s:~,2%==]] set s=%s:~1%&goto l
:r
if %s:~-2%==[[ set s=%s:~,-1%&goto l
if not _%s:~2%==_ set s=[]
echo %s%

Outputs ] for prepend, [ for append, ][ for both, [] for neither.

# Haskell, 115 108 bytes

EDIT:

• -7 bytes: Use more guards.

(""#)
s#""=[s>"",1>0]
s#(c:d)|Just a<-lookup c$zip"([{<"")]}>"=(a:s)#d|(a:b)<-s=[1|a==c]>>b#d|0<1=take 1$s#d

Try it online! Use like (""#) "))". Results are given as:

[False,True]: needs nothing
[False]: prependable
[True,True]: appendable
[True]: both
[]: neither

# How it works

• The output encoding is chosen such that a need to prepend is signaled by dropping the second element of the result for the remainder, if any, while a complete mismatch is signaled by dropping all of them.
• s#d parses a remaining string d, given a string/stack s of expected closing brackets.
• The s#"" line checks if all closing brackets have been found by the end of the string, otherwise appending is needed.
• The first branch of s#(c:d) checks if the next character c is an opening bracket, and if so leaves the corresponding closing bracket on the stack for the recursion.
• Otherwise, if the stack contains closing brackets, the second branch checks if the top one matches the next character, and if not, returns an empty list instead of recursing.
• Lastly, in the last branch the stack is empty, and we have an unmatched closing bracket that may be fixed by prepending, before recursing.

# Japt, 44 bytes

=Ue"%(%)|%[]|\{}|<>" ®c -1&2|1})f31 |UfD |Ug

Outputs 1 for prependable, 3 for appendable, 13 for both, and 31 for neither.

### How it works

=Ue"%(%)|%[]|\{}|<>" ®   c -1&2|1})f31 |UfD |Ug
U=Ue"%(%)|%[]|\{}|<>" mZ{Zc -1&2|1})f31 |UfD |Ug
                      // "(((()()())))}"  "([({}{})"  ">()[(()){"  "((((<>()]"

Ue"%(%)|%[]|\{}|<>"   // Recursively remove all instances of "()", "[]", "{}", and "<>" from U.
                      // "}"  "(["  ">[{"  "((((]"
mZ{Zc -1&2|1}         // Replace each char Z with (Z.charCodeAt() - 1) & 2 | 1.
                      // "1"  "33"  "133"  "33331"
U=                    // Save the result in U.
f31 |UfD |Ug          // Match all instances of "31" and "13" (D = 13) and bitwise-OR the results with the first char.
                      // null|null|1  null|null|3  null|13|1  31|null|3
                      // 1  3  13  31
                      // Implicit: output result of last expression

# PHP, 137 Bytes

for($c=1;$c;)$a=preg_replace("#<>|\(\)|\[\]|\{\}#","",$a=&$argn,-1,$c);echo($a=preg_replace(["#[]})>]+#","#[[{(<]+#"],[1,2],$a))<13?$a:0;
1 =>appendable,
2 =>prependable,
12=>both,
0 =>neither
Testcases
• "As long as each output is distinct and consistent, that's fine". This doesn't appear to have a consistent value for neither. – Cyoce Jun 4 '17 at 6:13
• @Cyoce It is now fixed. – Jörg Hülsermann Jun 4 '17 at 9:56
https://math.stackexchange.com/questions/3235077/find-the-coefficient-of-the-power-series-x31-x-11-2x6 | # Find the coefficient of the power series $[x^3](1-x)^{-1}(1-2x)^6$
I need to find $$[x^3](1-x)^{-1}(1-2x)^6$$, where $$[x^3]$$ means the coefficent of the $$[x^3]$$ term. here's what I've done:
$$[x^3](1-x)^{-1}(1-2x)^6=[x^3](\sum_{k=0}^6 {6\choose k}(-2x)^k)(\sum_{m=0}^\infty {m\choose 0}x^m)$$
$$= \sum_{k=0}^6 {6\choose k}(-2)^k[x^{3-k} ](\sum_{m=0}^\infty {m\choose 0}x^m)$$
$$= \sum_{k=0}^3 {6\choose k}(-2)^k[x^{3-k} ](\sum_{m=0}^\infty {m\choose 0}x^m)$$ since we need $$3-k \geq 0$$
$$= \sum_{k=0}^3 ({6\choose k}(-2)^k {3-k\choose 0})$$
$$= \sum_{k=0}^3 ({6\choose k}(-2)^k)$$
$$= {6\choose0} + (-2){6\choose1} + (4){6\choose2} + (-8){6\choose3}$$
$$=1-12+60-160$$
$$= -111$$
But when I do the expansion on WolframAlpha, I see that $$[x^0]=1$$, $$[x^1]=-12$$, $$[x^3]=-160$$, so what am I doing wrong?
(I am following a similar idea to Trevor Gunn's answer in this question In how many ways the sum of 5 thrown dice is 25?)
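(A quick symbolic sanity check with sympy — our own addition, not part of the original post:)

from sympy import symbols, series

x = symbols('x')
print(series((1 - 2*x)**6 / (1 - x), x, 0, 4))
# 1 - 11*x + 49*x**2 - 111*x**3 + O(x**4), so the coefficient of x**3 is indeed -111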
• Did you confuse $(-x)^k$ and $x^{-k}$? Also, did you mean $x^3$ where you wrote $x^4$? – J. W. Tanner May 22 at 0:29
• I might have worded the question confusingly, the $[x^3]$ is not multiplication, but rather finding the coefficient of the $x^3$ term in the equation that follows – Mark Dodds May 22 at 0:36
• Oh, then maybe you should say finding the coefficient of $x^3$ in ... – J. W. Tanner May 22 at 0:37
• Your work is correct, and this WolframAlpha link verifies it. What specifically did you see that made you think you were wrong? – Mike Earnest May 22 at 0:55
• @JohnOmielan looking back i think that is what has happened. Thanks for clearing that up – Mark Dodds May 22 at 0:59
As Mike Earnest confirmed in the comments, your work is correct. As I commented, and you've stated it's likely the case, the WolframAlpha results of $$[x^0]=1$$, $$[x^1]=-12$$, $$[x^3]=-160$$ probably come from the coefficients in the power expansion of $$(1-2x)^6$$ instead. You can see this directly from the first, second & fourth terms in your second last highlighted line, i.e.,
$$=1-12+60-160$$
http://mathoverflow.net/users/22052/aaron-tikuisis?tab=stats | # Aaron Tikuisis
reputation: 514
website: homepages.abdn.ac.uk/…
location: Aberdeen, Scotland
age: 31
member for: 2 years
seen: Mar 6 at 11:28
profile views: 464
bio:
My research is on the structure of C*-algebras. I am currently a lecturer at the University of Aberdeen.
15 Examples of conjectures that were widely believed to be true but later proved false
11 Cesaro means and Banach limits
8 Realizing universal C*-algebras as concrete C*-algebras
6 Inductive limit of C*-algebras
5 General recipe for building C*-algebras out of combinatorial object
# 863 Reputation
+30 Realizing universal C*-algebras as concrete C*-algebras
+10 Cesaro means and Banach limits
+5 Properties of orthogonality-preserving c.p. maps between $C^*$-algebras
+10 Inductive limit of C*-algebras
# 3 Questions
14 Subgroups of $\mathbb{Z}^n$
10 Properties of orthogonality-preserving c.p. maps between $C^*$-algebras
6 von Neumann automorphisms: does convergence on a dense algebra imply $u$-convergence?
# 15 Tags
38 fa.functional-analysis × 11
28 oa.operator-algebras × 10
19 c-star-algebras × 5
11 sequences-and-series
3 gr.group-theory
1 measure-theory
1 harmonic-analysis
1 real-analysis
0 linear-algebra × 2
0 subfactors
# 3 Accounts
MathOverflow 863 rep 514
Mathematics 175 rep 6
Linguistics 101 rep
http://tex.stackexchange.com/questions/20096/proper-way-to-use-ensuremath-to-define-a-macro-useable-in-and-out-of-math-mode | Proper way to use \ensuremath to define a macro useable in and out of math mode
Based on this solution related to defining a macro I came up with this macro to help me define a macro that I can use either in or outside of math mode.
The example as is functions as I want. However, this solution requires me to NOT put $'s around the second parameter to the \DefineNamedFunction macro. I would like to be able to include the $'s, or not include them.
One solution is to modify \DefineNamedFunction to strip out the $'s if they are included in the macro call using the xstring package, but this to me feels like a hack, and I am thinking that there is probably a cleaner TeX way to do this. So to summarize: How do I change \DefineNamedFunction such that I can use both the commented and uncommented calls to this macro, and still be able to use the definition inside and outside of math mode?

\documentclass{article}
\usepackage{amsmath}
\usepackage{xcolor}

\newcommand{\DefineNamedFunction}[2]{% {FunctionName}{FunctionExpression}
\expandafter\providecommand\expandafter{\csname#1\endcsname}{\textcolor{red}{\ensuremath{#2}}}%
}

\begin{document}
\DefineNamedFunction{FunctionF}{y = 2 \sin x}
%\DefineNamedFunction{FunctionF}{$y = 2 \sin x$}

I can use FunctionF inside math mode as $\FunctionF$, but can also use this outside of math mode as \FunctionF.
\end{document}

-

2 Answers

Probably \ensuremath{\textcolor{red}{#2}} is what you need, since \textcolor can be used in text and in math. The complete definition is

\newcommand{\DefineNamedFunction}[2]{% {FunctionName}{FunctionExpression}
\expandafter\providecommand\csname#1\endcsname
{\ensuremath{\textcolor{red}{#2}}}%
}
...
\DefineNamedFunction{FunctionF}{y=2\sin x}

$\FunctionF$ and \FunctionF

I've also deleted the braces that require another \expandafter, but that's not the problem.

Of course, you can't call \DefineNamedFunction{FunctionFF}{$y=x$} and I wouldn't know why you'd want it. But in any case there's a simple solution

\newcommand{\DefineNamedFunction}[2]{%
\expandafter\providecommand\csname#1\endcsname
{\ensuremath{\begingroup\color{red}\DNFnorm#2\endgroup}}}
\makeatletter
\def\DNFnorm{\@ifnextchar$\DNFnormi{}}
\def\DNFnormi$#1${#1}
\makeatother

The input is "normalized" by removing the $ tokens before and after, if present. With \begingroup\color{red}...\endgroup the spaces in the subformula participate in the stretching and shrinking of the spaces in the line.

-

That does not seem to behave any differently. Yes, in the MWE I should have switched it to \newcommand, but I need \providecommand in my real usage. – Peter Grill Jun 6 '11 at 18:22
@Peter: really? To me it does exactly what you want. I'll edit the answer with the complete definition. – egreg Jun 6 '11 at 19:57
Only reason I would want to be able to call \DefineNamedFunction{FunctionFF}{$y=x$} (with the dollar signs) is that it is more natural to do so. I didn't want to have to remember that when I use this macro I do not put the $'s, but for all the other macros where I have math, I do put the $'s. – Peter Grill Jun 6 '11 at 20:16
This solution using DNFnorm looks simpler. – Peter Grill Jun 6 '11 at 20:45
@Peter: it's just a command defined for the purpose. It checks whether #2 starts with $; in this case it is substituted by \DNFnormi that throws away the two $, otherwise it does nothing. – egreg Jun 6 '11 at 20:51
\documentclass{article}
\usepackage{amsmath}
\usepackage{xcolor}
\makeatletter
\def\DefineNamedFunction#1#2{\expandafter\DefineNamedFunction@i#1\@nil#2\@nil}
\def\DefineNamedFunction@i#1\@nil{%
\@ifnextchar${\DefineNamedFunction@ii{#1}}{\DefineNamedFunction@iii{#1}}}
\def\DefineNamedFunction@ii#1$#2$\@nil{%
\@namedef{#1}{\ifmmode\textcolor{red}{#2}\else\textcolor{red}{$#2$}\fi}}
\def\DefineNamedFunction@iii#1#2\@nil{%
\@namedef{#1}{\ifmmode\textcolor{red}{#2}\else\textcolor{red}{$#2$}\fi}}
\makeatother
\DefineNamedFunction{FunctionF}{y = 2 \sin x}
\DefineNamedFunction{FunctionFF}{$y = 2 \sin x$}
\begin{document}
I can use \FunctionF\ inside math mode as $\FunctionF$, but can also use this outside of math mode as $\FunctionFF$ and \FunctionFF.
\end{document}

-

This works great. Thanks. So, is this basically doing the job of \ensuremath manually? – Peter Grill Jun 6 '11 at 18:27
more or less ... the problem is \textcolor when it is used in math mode with a math argument – Herbert Jun 6 '11 at 18:30
@Herbert: apart from making the + an ordinary atom, $\textcolor{blue}{a}\textcolor{red}{+}\textcolor{green}{b}$ works perfectly. – egreg Jun 6 '11 at 20:08
@egreg: I was talking about $\textcolor{blue}{$a$}$ ... – Herbert Jun 7 '11 at 4:25
@Herbert: I see; \textcolor is smart enough to know when it's called in math mode: maybe a pair \textcolor and \mathcolor would have been better. – egreg Jun 7 '11 at 8:16
http://en.wikipedia.org/wiki/Adiabatic_lapse_rate

# Lapse rate
The lapse rate is defined as the rate at which atmospheric temperature decreases with increase in altitude. [1][2] The terminology arises from the word lapse in the sense of a decrease or decline. While most often applied to Earth's troposphere, the concept can be extended to any gravitationally supported ball of gas.
## Definition
A formal definition from the Glossary of Meteorology[3] is:
The decrease of an atmospheric variable with height, the variable being temperature unless otherwise specified.
In the lower regions of the atmosphere (up to altitudes of approximately 12,000 metres (39,000 ft)), temperature decreases with altitude at a fairly uniform rate. Because the atmosphere is warmed by convection from Earth's surface, this lapse or reduction in temperature is normal with increasing distance from the conductive source.
Although the actual atmospheric lapse rate varies, under normal atmospheric conditions the average atmospheric lapse rate results in a temperature decrease of 6.4 C°/km (3.5 F° or 1.95 C° per 1,000 ft) of altitude above ground level.
The measurable lapse rate is affected by the moisture content of the air (humidity). A dry lapse rate of 10 C°/km (5.5 F° or 3.05 C° per 1,000 ft) is often used to calculate temperature changes in air not at 100% relative humidity. A wet lapse rate of 5.5 C°/km (3 F° or 1.68 C° per 1,000 ft) is used to calculate the temperature changes in air that is saturated (i.e., air at 100% relative humidity). Although actual lapse rates do not strictly follow these guidelines, they present a model sufficiently accurate to predict temperature changes associated with updrafts and downdrafts. This differential lapse rate (dependent upon both difference in conductive heating and adiabatic expansion or compression) results in the formation of warm downslope winds (e.g., Chinook winds, Santa Ana winds, etc.). The atmospheric lapse rate, combined with adiabatic cooling and heating of air related to the expansion and compression of atmospheric gases, presents a unified model explaining the cooling of air as it moves aloft and the heating of air as it descends downslope.
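As a simple illustration of how these rule-of-thumb rates are applied (a minimal sketch; the function and variable names are mine, and the 10 C°/km and 5.5 C°/km values are the ones quoted above):

```python
DRY_RATE_C_PER_KM = 10.0   # dry lapse rate quoted above
WET_RATE_C_PER_KM = 5.5    # wet (saturated) lapse rate quoted above

def parcel_temperature_c(surface_temp_c, ascent_km, saturated=False):
    """Temperature of a rising air parcel after ascent_km of lift,
    assuming one constant lapse rate for the whole ascent."""
    rate = WET_RATE_C_PER_KM if saturated else DRY_RATE_C_PER_KM
    return surface_temp_c - rate * ascent_km

print(parcel_temperature_c(25.0, 2.0))                  # dry:   25 - 10*2 = 5.0
print(parcel_temperature_c(25.0, 2.0, saturated=True))  # moist: 25 - 5.5*2 = 14.0
```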
Atmospheric stability can be measured in terms of lapse rates (i.e., the temperature differences associated with vertical movement of air). The atmosphere is considered conditionally unstable where the environmental lapse rate causes a slower decrease in temperature with altitude than the dry adiabatic lapse rate, as long as no latent heat is released (i.e. the saturated adiabatic lapse rate applies). Unconditional instability results when the dry adiabatic lapse rate causes air to cool slower than the environmental lapse rate, so air will continue to rise until it reaches the same temperature as its surroundings. Where the saturated adiabatic lapse rate is greater than the environmental lapse rate, the air cools faster than its environment and thus returns to its original position, irrespective of its moisture content.
Although the atmospheric lapse rate (also known as the environmental lapse rate) is most often used to characterize temperature changes, many properties (e.g. atmospheric pressure) can also be profiled by lapse rates...
## Mathematical definition
In general, a lapse rate is the negative of the rate of temperature change with altitude change, thus:
$\gamma = -\frac{dT}{dz}$
where $\gamma$ is the lapse rate given in units of temperature divided by units of altitude, T = temperature, and z = altitude.
Note: In some cases, $\Gamma$ or $\alpha$ can be used to represent the adiabatic lapse rate in order to avoid confusion with other terms symbolized by $\gamma$, such as the specific heat ratio[4] or the psychrometric constant.[5]
## Types of lapse rates
There are two types of lapse rate:
• Environmental lapse rate (ELR) – which refers to the actual change of temperature with altitude for the stationary atmosphere (i.e. the temperature gradient)
• The adiabatic lapse rates – which refer to the change in temperature of a parcel of air as it moves upwards (or downwards) without exchanging heat with its surroundings. There are two adiabatic rates:[6]
• Dry adiabatic lapse rate (DALR)
• Moist (or saturated) adiabatic lapse rate (SALR)
### Environmental lapse rate
The environmental lapse rate (ELR), is the rate of decrease of temperature with altitude in the stationary atmosphere at a given time and location. As an average, the International Civil Aviation Organization (ICAO) defines an international standard atmosphere (ISA) with a temperature lapse rate of 6.49 K(C°)/1,000 m[citation needed] (3.56 F° or 1.98 K(C°)/1,000 ft) from sea level to 11 km (36,090 ft or 6.8 mi). From 11 km up to 20 km (65,620 ft or 12.4 mi), the constant temperature is −56.5 °C (−69.7 °F), which is the lowest assumed temperature in the ISA. The standard atmosphere contains no moisture. Unlike the idealized ISA, the temperature of the actual atmosphere does not always fall at a uniform rate with height. For example, there can be an inversion layer in which the temperature increases with altitude.
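A minimal sketch of the ISA temperature profile just described (the 15 °C sea-level temperature is the standard ISA value, an assumption here since the paragraph does not state it explicitly):

```python
def isa_temperature_c(altitude_km):
    """ICAO standard atmosphere temperature in deg C, valid for 0-20 km."""
    if altitude_km <= 11.0:
        return 15.0 - 6.49 * altitude_km   # 6.49 C deg per km, as quoted above
    if altitude_km <= 20.0:
        return -56.5                       # isothermal layer, lowest ISA value
    raise ValueError("the profile described above only covers 0-20 km")

print(isa_temperature_c(11.0))   # -56.39, matching the -56.5 C floor
print(isa_temperature_c(15.0))   # -56.5
```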
### Dry adiabatic lapse rate

Figure: Emagram diagram showing variation of dry adiabats (bold lines) and moist adiabats (dash lines) according to pressure and temperature
The dry adiabatic lapse rate (DALR) is the rate of temperature decrease with altitude for a parcel of dry or unsaturated air rising under adiabatic conditions. Unsaturated air has less than 100% relative humidity; i.e. its actual temperature is higher than its dew point. The term adiabatic means that no heat transfer occurs into or out of the parcel. Air has low thermal conductivity, and the bodies of air involved are very large, so transfer of heat by conduction is negligibly small.
Under these conditions when the air rises (for instance, by convection) it expands, because the pressure is lower at higher altitudes. As the air parcel expands, it pushes on the air around it, doing work (thermodynamics). Since the parcel does work but gains no heat, it loses internal energy so that its temperature decreases. The rate of temperature decrease is 9.8 C°/km (5.38 F° or 3.0 C° per 1,000 ft). The reverse occurs for a sinking parcel of air.[7]
Since the process is adiabatic, $PV^\gamma$ is constant, which implies

$P dV = -V dP / \gamma$

so that
the first law of thermodynamics can be written as
$m c_v dT - V dp/ \gamma = 0$
Also, since $\alpha = V/m$ and $\gamma = c_p/c_v$, we can show that:
$c_p dT - \alpha dP = 0$
where $c_p$ is the specific heat at constant pressure and $\alpha$ is the specific volume.
Assuming an atmosphere in hydrostatic equilibrium:[8]
$dP = - \rho g dz$
where g is the standard gravity and $\rho$ is the density. Combining these two equations to eliminate the pressure, one arrives at the result for the DALR,[9]
$\Gamma_d = -\frac{dT}{dz}= \frac{g}{c_p} = 9.8 \ ^{\circ}\mathrm{C}/\mathrm{km}$
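As a quick numerical check of this result (a sketch; the values of g and c_p are the ones listed with the saturated-rate formula below):

```python
g = 9.8076     # m/s^2, gravitational acceleration
c_p = 1003.5   # J/(kg K), specific heat of dry air at constant pressure

gamma_d = g / c_p        # K per metre
print(gamma_d * 1000.0)  # ~9.77 K/km, i.e. the ~9.8 C deg/km quoted above
```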
### Moist adiabatic lapse rate

When the air is saturated with water vapor (at its dew point), the moist adiabatic lapse rate (MALR) or saturated adiabatic lapse rate (SALR) applies. This lapse rate varies strongly with temperature. A typical value is around 5 C°/km (2.7 F° or 1.5 C° per 1,000 ft).[citation needed]
The reason for the difference between the dry and moist adiabatic lapse rate values is that latent heat is released when water condenses, thus decreasing the rate of temperature drop as altitude increases. This heat release process is an important source of energy in the development of thunderstorms. An unsaturated parcel of air of given temperature, altitude and moisture content below that of the corresponding dewpoint cools at the dry adiabatic lapse rate as altitude increases until the dewpoint line for the given moisture content is intersected. As the water vapor then starts condensing, the air parcel subsequently cools at the slower moist adiabatic lapse rate if the altitude increases further.
The saturated adiabatic lapse rate is given approximately by this equation from the glossary of the American Meteorology Society:[10]
$\Gamma_w = g\, \frac{1 + \dfrac{H_v\, r}{R_{sd}\, T}}{c_{p d} + \dfrac{H_v^2\, r}{R_{sw}\, T^2}}= g\, \frac{1 + \dfrac{H_v\, r}{R_{sd}\, T}}{c_{p d} + \dfrac{H_v^2\, r\, \epsilon}{R_{sd}\, T^2}}$
where:

• $\Gamma_w$ = wet adiabatic lapse rate, K/m
• $g$ = Earth's gravitational acceleration = 9.8076 m/s2
• $H_v$ = heat of vaporization of water = 2501000 J/kg
• $R_{sd}$ = specific gas constant of dry air = 287 J kg−1 K−1
• $R_{sw}$ = specific gas constant of water vapor = 461.5 J kg−1 K−1
• $\epsilon=\frac{R_{sd}}{R_{sw}}$ = the dimensionless ratio of the specific gas constant of dry air to the specific gas constant for water vapor = 0.622
• $e$ = the water vapor pressure of the saturated air
• $p$ = the pressure of the saturated air
• $r=\epsilon e/(p-e)$ = the mixing ratio of the mass of water vapor to the mass of dry air [11]
• $T$ = temperature of the saturated air, K
• $c_{pd}$ = the specific heat of dry air at constant pressure = 1003.5 J kg−1 K−1
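The formula can be evaluated directly. The sketch below uses the constants just listed; the saturation vapour pressure $e$ is computed with the Tetens approximation, which is an assumption on my part (the article does not prescribe a particular formula for $e$):

```python
import math

g = 9.8076         # m/s^2
H_v = 2.501e6      # J/kg, heat of vaporization of water
R_sd = 287.0       # J/(kg K), dry air
R_sw = 461.5       # J/(kg K), water vapor
eps = R_sd / R_sw  # ~0.622
c_pd = 1003.5      # J/(kg K), dry air at constant pressure

def e_saturation(T):
    """Saturation vapour pressure in Pa (Tetens approximation; T in kelvin).
    This closure is an assumption, not part of the article's formula."""
    Tc = T - 273.15
    return 610.78 * math.exp(17.27 * Tc / (Tc + 237.3))

def gamma_w(T, p):
    """Moist adiabatic lapse rate (K/m) from the AMS formula above.
    T: temperature of the saturated air in K; p: pressure in Pa."""
    e = e_saturation(T)
    r = eps * e / (p - e)  # mixing ratio
    num = 1.0 + H_v * r / (R_sd * T)
    den = c_pd + H_v**2 * r * eps / (R_sd * T**2)
    return g * num / den

# Saturated air at 20 C and sea-level pressure: ~4.2 K/km, consistent
# with the "around 5 C deg/km" typical value quoted above.
print(gamma_w(293.15, 101325.0) * 1000.0)
```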
### Thermodynamic-based lapse rate
Robert Essenhigh developed a comprehensive thermodynamic model of the lapse rate based on the Schuster–Schwarzschild (S–S) integral equations of transfer that govern radiation through the atmosphere including absorption and radiation by greenhouse gases.[12][13] His solution "predicts, in agreement with the Standard Atmosphere experimental data, a linear decline of the fourth power of the temperature, T4, with pressure, P, and, as a first approximation, a linear decline of T with altitude, h, up to the tropopause at about 10 km (the lower atmosphere)."[13] The predicted normalized density ratio and pressure ratio differ and fit the experimental data well.[citation needed] Sreekanth Kolan extended Essenhigh's model to include the energy balance for the lower and upper atmospheres.[14][self-published source?][third-party source needed]
## Significance in meteorology
The varying environmental lapse rates throughout the Earth's atmosphere are of critical importance in meteorology, particularly within the troposphere. They are used to determine if the parcel of rising air will rise high enough for its water to condense to form clouds, and, having formed clouds, whether the air will continue to rise and form bigger shower clouds, and whether these clouds will get even bigger and form cumulonimbus clouds (thunder clouds).
As unsaturated air rises, its temperature drops at the dry adiabatic rate. The dew point also drops (as a result of decreasing air pressure) but much more slowly, typically about −2 C° per 1,000 m. If unsaturated air rises far enough, eventually its temperature will reach its dew point, and condensation will begin to form. This altitude is known as the lifting condensation level (LCL) when mechanical lift is present and the convective condensation level (CCL) when mechanical lift is absent, in which case, the parcel must be heated from below to its convective temperature. The cloud base will be somewhere within the layer bounded by these parameters.
The difference between the dry adiabatic lapse rate and the rate at which the dew point drops is around 8 C° per 1,000 m. Given a difference in temperature and dew point readings on the ground, one can easily find the LCL by multiplying the difference by 125 m/C°.
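In code, the rule of thumb reads (a sketch; the function name is mine):

```python
def lcl_height_m(temp_c, dewpoint_c):
    """Lifting condensation level: the parcel temperature and dew point
    close at ~8 C deg/km, i.e. 125 m per degree of surface spread."""
    return 125.0 * (temp_c - dewpoint_c)

print(lcl_height_m(30.0, 18.0))   # 1500.0 m for a 12 C deg spread
```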
If the environmental lapse rate is less than the moist adiabatic lapse rate, the air is absolutely stable — rising air will cool faster than the surrounding air and lose buoyancy. This often happens in the early morning, when the air near the ground has cooled overnight. Cloud formation in stable air is unlikely.
If the environmental lapse rate is between the moist and dry adiabatic lapse rates, the air is conditionally unstable — an unsaturated parcel of air does not have sufficient buoyancy to rise to the LCL or CCL, and it is stable to weak vertical displacements in either direction. If the parcel is saturated it is unstable and will rise to the LCL or CCL, and either be halted due to an inversion layer of convective inhibition, or if lifting continues, deep, moist convection (DMC) may ensue, as a parcel rises to the level of free convection (LFC), after which it enters the free convective layer (FCL) and usually rises to the equilibrium level (EL).
If the environmental lapse rate is larger than the dry adiabatic lapse rate, it has a superadiabatic lapse rate, the air is absolutely unstable — a parcel of air will gain buoyancy as it rises both below and above the lifting condensation level or convective condensation level. This often happens in the afternoon over many land masses. In these conditions, the likelihood of cumulus clouds, showers or even thunderstorms is increased.
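The three stability regimes above can be summarized in a few lines (a sketch; the 9.8 and 5.0 C°/km defaults are the dry and typical moist rates quoted earlier, and edge cases at exact equality are ignored):

```python
def stability(elr_c_per_km, dalr=9.8, salr=5.0):
    """Classify static stability by comparing the environmental lapse
    rate with the dry and saturated adiabatic lapse rates."""
    if elr_c_per_km < salr:
        return "absolutely stable"
    if elr_c_per_km > dalr:
        return "absolutely unstable (superadiabatic)"
    return "conditionally unstable"

print(stability(6.49))   # ISA average lapse rate -> 'conditionally unstable'
```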
Meteorologists use radiosondes to measure the environmental lapse rate and compare it to the predicted adiabatic lapse rate to forecast the likelihood that air will rise. Charts of the environmental lapse rate are known as thermodynamic diagrams, examples of which include Skew-T log-P diagrams and tephigrams. (See also Thermals).
The difference in moist adiabatic lapse rate and the dry rate is the cause of foehn wind phenomenon (also known as "Chinook winds" in parts of North America).
## References
1. ^ Mark Zachary Jacobson (2005). Fundamentals of Atmospheric Modeling (2nd ed.). Cambridge University Press. ISBN 0-521-83970-X.
2. ^ C. Donald Ahrens (2006). Meteorology Today (8th ed.). Brooks/Cole Publishing. ISBN 0-495-01162-2.
3. ^ Todd S. Glickman (June 2000). Glossary of Meteorology (2nd ed.). American Meteorological Society, Boston. ISBN 1-878220-34-9. (Glossary of Meteorology)
4. ^ Salomons, Erik M. (2001). Computational Atmospheric Acoustics (1st ed.). Kluwer Academic Publishers. ISBN 1-4020-0390-0.
5. ^ Stull, Roland B. (2001). An Introduction to Boundary Layer Meteorology (1st ed.). Kluwer Academic Publishers. ISBN 90-277-2769-4.
6. ^ Adiabatic Lapse Rate, IUPAC Goldbook
7. ^ Danielson, Levin, and Abrams, Meteorology, McGraw Hill, 2003
8. ^ Landau and Lifshitz, Fluid Mechanics, Pergamon, 1979
9. ^ Kittel and Kroemer, Thermal Physics, Freeman, 1980; chapter 6, problem 11
10. ^ Glossary of Meteorology, American Meteorological Society.
11. ^ http://glossary.ametsoc.org/wiki/Mixing_ratio
12. ^ Robert H. Essenhigh (2003). "Prediction from an Analytical Model of: The Standard Atmosphere Profiles of Temperature, Pressure, and Density with Height for the Lower Atmosphere; and Potential for Profiles-Perturbation by Combustion Emissions". Paper No.03F-44: Western States Section Combustion Institute Meeting: Fall (October) 2003.
13. ^ a b
14. ^ Sreekanth Kolan (2009). "Study of energy balance between lower and upper atmosphere". Ohio State University. osu1259613805.
https://ftp.aimsciences.org/article/doi/10.3934/cpaa.2008.7.277

# American Institute of Mathematical Sciences
March 2008, 7(2): 277-291. doi: 10.3934/cpaa.2008.7.277
## Isentropic approximation of the steady Euler system in two space dimensions
1 School of Mathematical Sciences and Institute of Mathematics, Fudan University, Shanghai 200433, China
Received March 2007; Revised August 2007; Published December 2007
On the assumption that the initial data are isentropic and of sufficiently small total variation, we can prove that the difference between the solutions of the steady full Euler system and the steady isentropic Euler system with the same initial data can be bounded by the cube of the total variation of the initial perturbation.
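In symbols, the advertised estimate has the following schematic shape (a sketch only: the choice of norm, the constant $C$, and the notation for the background state $\bar U$ are my assumptions, not taken from the paper):

$$\big\| U^{\mathrm{full}} - U^{\mathrm{isen}} \big\| \;\le\; C\,\big(\mathrm{TV}(U_0 - \bar U)\big)^{3},$$

where $U^{\mathrm{full}}$ and $U^{\mathrm{isen}}$ denote the solutions of the steady full and isentropic Euler systems with the same initial data $U_0$.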
Citation: Chong Liu, Yongqian Zhang. Isentropic approximation of the steady Euler system in two space dimensions. Communications on Pure and Applied Analysis, 2008, 7 (2) : 277-291. doi: 10.3934/cpaa.2008.7.277
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1609114/?tool=pubmed

BMC Health Serv Res. 2006; 6: 127.
Published online 2006 Oct 6.
PMCID: PMC1609114
# Priority setting in developing countries health care institutions: the case of a Ugandan hospital
## Abstract
### Background
Because the demand for health services outstrips the available resources, priority setting is one of the most difficult issues faced by health policy makers, particularly those in developing countries. However, there is lack of literature that describes and evaluates priority setting in these contexts. The objective of this paper is to describe priority setting in a teaching hospital in Uganda and evaluate the description against an ethical framework for fair priority setting processes – Accountability for Reasonableness.
A case study in a 1,500 bed national referral hospital receiving 1,320 out patients per day and an average budget of US$13.5 million per year. We reviewed documents and carried out 70 in-depth interviews (14 health planners, 40 doctors, and 16 nurses working at the hospital). Interviews were recorded and transcribed. Data analysis employed the modified thematic approach to describe priority setting, and the description was evaluated using the four conditions of Accountability for Reasonableness: relevance, publicity, revisions and enforcement. ### Results Senior managers, guided by the hospital strategic plan make the hospital budget allocation decisions. Frontline practitioners expressed lack of knowledge of the process. Relevance: Priority is given according to a cluster of factors including need, emergencies and patient volume. However, surgical departments and departments whose leaders "make a lot of noise" are also prioritized. Publicity: Decisions, but not reasons, are publicized through general meetings and circulars, but this information does not always reach the frontline practitioners. Publicity to the general public was through ad hoc radio programs and to patients who directly ask. Revisions: There were no formal mechanisms for challenging the reasoning. Enforcement: There were no mechanisms to ensure adherence to the four conditions of a fair process. ### Conclusion Priority setting decisions at this hospital do not satisfy the conditions of fairness. To improve, the hospital should: (i) engage frontline practitioners, (ii) publicize the reasons for decisions both within the hospital and to the general public, and (iii) develop formal mechanisms for challenging the reasoning. In addition, capacity strengthening is required for senior managers who must accept responsibility for ensuring that the above three conditions are met. ## Background Because no health system, whether rich or poor, or privately or publicly funded, can afford to pay for every service it wishes to provide, priority setting is arguably today's most important health policy issue[1]. Much of the priority setting in a health system occurs at the so-called 'meso' level of policy making, which includes hospitals and health insurers. Yet, only a few studies have examined priority setting at this level, and these have focused on developed country institutions [2-5]. There is meager literature reporting actual priority setting in developing countries and it has focused on macro-level health reforms, health care financing or priority setting [6-10]. Developing country health systems can be strengthened by improving priority setting at the meso-level. This is because priority setting decisions contribute to the sustainability of strained pools of resources and have a direct impact on access to needed health services. Unfortunately, decision-makers in developing country healthcare institutions lack guidance with regards to priority setting [11]. As a result, priority setting in developing countries, such as Uganda, occurs by chance, not by choice [12]. Uganda spends 7.7% of its Gross Domestic Product of US$ 1,088 on health [13]. The country has a doctor to population ratio of 1: 25,000, a surgeon to population ratio of 1:30,000, hospital beds to population ratio of 0.9/1000, and an extremely high disease burden[14]. In attempt to meet the health needs and to maximize population health benefit, the Ugandan government has, (since 2000 AD) increased funding to primary care units relative to tertiary hospitals [15]. 
This has compounded the already difficult task of priority setting faced by decision-makers in tertiary hospitals. Hence, health managers in these institutions would benefit from having locally developed evidence-based strategies to guide their decision making.
This paper presents some of the findings from an endeavor to develop an evidence base to support decision makers in tertiary hospitals in developing countries. The approach used has been pioneered in developed country hospitals and employs the conceptual ethical framework of 'accountability for reasonableness' [16,17]. 'Accountability for reasonableness' is an explicit conceptual framework for legitimate and fair priority setting that has been used to evaluate and improve priority setting practices in health systems and health care institutions [18-22].
The purpose of this article is to describe priority setting in a Ugandan hospital and evaluate the description using a leading ethical framework, accountability for reasonableness, to identify good practices and opportunities for improvement.
## Methods
### Design
To describe priority setting in a hospital we used a qualitative case study. A case study is "an empirical inquiry that investigates a contemporary phenomenon within its real life context" [23]. The case study method is appropriate because priority setting in hospitals is complex, context-dependent and involves social processes. To evaluate the description, we used an explicit conceptual framework, 'accountability for reasonableness' (described below).
### Setting
The setting for this study was a 1,500-bed, publicly financed, tertiary teaching hospital in Uganda. Over the recent five years, the hospital has experienced a thirty percent increase in both inpatient and outpatient attendance (Figure 1). However, there has been a decline in funding to the hospital (in actual terms), e.g. from Ugandan Shillings 600 m in FY1999/2000 to Ugandan Shillings 500 m in 2000/2001, with serious implications for the availability of drugs and sundries [24].
Figure 1. Total number of patients (10s) compared to releases on total recurrent budget per capita.
### Sampling
We used a combination of theoretical and snowball sampling. The index respondent, the hospital deputy director, was identified by virtue of his involvement in priority setting. He identified subsequent respondents who were the leaders of the different clinical and support programs in the hospital. Those respondents identified subsequent respondents who they perceived to be key informants in relation to priority setting. Sampling continued until theoretical saturation was reached – that is, until subsequent interviews did not yield new data.
### Data collection
Data collection involved two data sources: i) in-depth one-on-one interviews with key informants, and ii) key documents.
We conducted 70 in-depth interviews with key informants involved in this case (14 health planners (including senior hospital managers, hospital accountants, chief pharmacist and the supplies officer), 40 doctors, and 16 nurses). These were identified using a combination of theoretical and snowball sampling [25]. Interviews were conducted using an interview guide with open-ended questions that were based on the conceptual framework described below (available upon request). However, the interviewer maintained an open stance and pursued emerging themes and sought clarifications as necessary. Respondents were asked to describe the priority setting process at the hospital management level, who was involved, what was considered, whether decisions and rationales are publicized, whether there were opportunities for revision, and whether there were mechanisms for enforcement. Interviews were audio recorded and transcribed.
The key documents reviewed included; minutes of the senior hospital management meetings, hospital budget estimates, and the Ministry of Health and hospital strategic plan.
### Data analysis
To describe priority setting, we used a modified thematic analysis: First, we read through whole interviews to identify general themes. Second, we identified the major concepts or ideas in specific chunks of sentences, and labeled them. An open and creative stance was sought throughout the process to facilitate identification of new ideas that related to different aspects of priority setting. Third, we grouped similar concepts together to form categories that were more precise, complete, and generalizable [25].
To evaluate priority setting, we compared the description against the four conditions of Accountability for Reasonableness to identify areas of correspondence, which were considered good practices, and gaps, which were considered opportunities for improvement.
We took three steps to ensure the validity of our findings. First, we interviewed respondents from different levels of the hospital management and professions. This maximized comprehensiveness and diversity. Second, we validated the interview data through the analysis of key documents. Third, the results were distributed to a number of respondents who confirmed the reasonableness of the findings (called a member check) [26].
### The conceptual framework
'Accountability for reasonableness' is a conceptual framework for legitimate and fair priority setting in healthcare institutions. It is theoretically grounded in justice theories emphasizing democratic deliberation [27,28], was developed in the context of real-world priority setting processes, and has emerged over the past five years as a leading framework for priority setting [16-22]. According to 'accountability for reasonableness', a legitimate and fair priority setting process meets four conditions: relevance, publicity, appeals, and enforcement, explained below.
1. Relevance condition: The rationales for priority setting decisions must rest on reasons (evidence and principles) that 'fair-minded' people can agree are relevant in the context. 'Fair-minded' people seek to cooperate according to terms they can justify to each other – this narrows, though does not eliminate, the scope of controversy, which is further narrowed by specifying that reasons must be relevant to the specific priority setting context.
2. Publicity: Priority setting decisions and their rationales must be publicly accessible – justice cannot abide secrets where people's well being is concerned.
3. Revisions/Appeals: There must be a mechanism for challenge, including the opportunity for revising decisions in light of considerations that stakeholders may raise.
4. Enforcement: There is either voluntary or public regulation of the process to ensure that the first three conditions are met.
'Accountability for reasonableness' helps to operationalize legitimate and fair priority setting in specific contexts, such as hospitals [1-4]. We used this framework to design our questionnaire and to analyze our data.
### Research ethics
This study was approved by the University of Toronto Office for Research Involving Human Subjects, the Uganda National Committee for Science and Technology and the hospital ethics committee. All participants provided consent for the interview. All data were kept confidential and anonymized.
## Results
The results section is organized in two subsections: First we describe priority setting according to the themes that emerged from our case study: Second, we evaluate the description using the accountability for reasonableness framework.
### 1. Description
#### The need for priority setting
Decision makers in the study hospital encountered priority setting challenges every day due to policy decisions at the national level, which resulted in the hospital having a perpetual shortage of funds. For example, this financial year (2006/2007) the hospital submitted a budget estimate of 60 billion Shillings (US$32.4 Million), yet received Uganda shillings 25 billion (US$13.5 Million), which is approximately 42% of its budget estimates [29].
"... I think the problem is the shortage of funds from Finance...there is never enough money, so even though the directorate makes its budget, when the hospital gets it funds, there is always much less than what they require..."
According to our respondents, in previous years the hospital would spend beyond its budgetary limits and the Ministry of Finance would pay the deficits. However, in order to curb the national budget deficits, Ministry of Finance introduced budget ceilings beyond which the hospital cannot be funded; and line item financing as opposed to global funding which constrains the degree of flexibility in priority setting at the hospital management level.
#### Participants in priority setting
In the past, the hospital director and senior accountant submitted their budget directly to the Ministry of Finance. However, the introduction of Sector Wide Approaches (SWAP) – whereby donors support the health sector as opposed to vertical programs or institutions – has meant reduction in these direct negotiations. At the time of our study, all hospitals' budget negotiations occurred through the ministry of health.
Within the hospital, hospital managers have attempted to decentralize priority setting to directorates. However, due to various reasons (presented later in this paper) this has not been very successful, and current priority setting still involves mainly the members of the senior management committee. The committee receives advice from the interim hospital board. They also receive input from the leaders of the directorates, who should involve the frontline practitioners in identifying priorities within their departments. However, the hospital managers felt that practitioners were reluctant to participate due to either time constraints, lack of interest, or power struggles.
"... But often you will find that it (involvement of frontline practitioners) doesn't happen like that, that's my disappointment as a manager. Because it involves letting go of power people don't want to let go, and actually even at the operational level also the head of the directorate doesn't want to let go. And at the departmental level, that head also doesn't want to let go to get his colleagues to bring their inputs."
This was corroborated by frontline practitioners who reported that they were not involved in the priority setting process. This lack of involvement contributed to their lack of knowledge of the priority setting process at the hospital management level. However, since they are daily confronted with patients and bear the direct consequences of the priority setting decisions, most of the frontline respondents thought they should be more involved in informing the hospital priority setting decisions.
Some of the departmental leaders who were involved in the process reported frustration since their concerns are often not addressed. For example, respondents from the department of pediatrics reported that they have repeatedly requested cephalosporins – broad-spectrum antibiotics that are effective in treating most of the aggressive infections affecting their patients – but this has not been addressed due to lack of funds.
"...The problem is that sometimes you submit a proposal or a budget..., but not everything you have asked for are you able to get and sometimes what you have even put in the budget is not what is allocated to you. So, it can be frustrating..."
Participants reported that the public is involved through representation on the hospital board. One of the mandates of the board is to provide a link between the community and the hospital, however, since the board had not been officially instituted (at the time of the study), their effectiveness as representatives of the public could not be assessed.
#### What is considered?
Priority setting in the hospital occurs within the framework of the hospital strategic plan. Formally, there are pre-determined budget proportions whereby 50% of the budget is allocated to drugs, 30% to sundries, 10% to reagents and 10% to X-ray. These proportions are then further allocated according to a formula that is based on evidence and need (need was defined in terms of the number of beds per directorate, medical emergencies, and the patient load). The members of the senior hospital management team developed this formula, with input from the different departments.
However, respondents from the department of pediatrics and general medicine felt there was lack of adherence to this formula. They argued that according to the formula and the 'need' criterion, the department of pediatrics deserved to be prioritized since they receive almost 40% of the hospital emergencies. Since the department was not prioritized, these respondents thought that informal factors significantly influenced priority setting. They thought that departments whose leaders knew how to "lobby", "make noise", " quickly use up their resources", "make their case" are usually prioritized. As such, surgical departments seemed to receive disproportionately high priority.
"... You know resource allocation is political with a small " p". So sometimes you get departments or directorates which are either very vocal, and can argue their case very vehemently or very organized, in that once the money is available they know what exactly to do with it and they finish their part of the money and are ready to take the money from those who are not organized..."
"...You see, as I told you that sometimes I may be getting things because I put a little bit of pressure on people, and I only leave when I have got what I want..."
#### Communication of decisions
Various strategies are used to communicate priority setting decisions to staff members, including meetings, circulars and an annual general meeting. The leaders of the various departments, who are members of and should participate in the senior management meetings, are expected to communicate the decisions to the members of their departments. However, hospital managers doubted the effectiveness of this mode of communication, since many leaders fail to attend the meetings and those who do attend do not communicate the decisions to their staff. In particular, departmental leaders with a dual role (of university professor and hospital manager) tended to value their roles and duties with the university more than their managerial roles at the hospital. This manifested as apathy in attending management meetings, with subsequent lack of understanding of the hospital planning management system, and lack of knowledge of the priority setting processes and decisions – which they should be communicating to their staff members.
The hospital management also tries to send circulars about key issues to all relevant departments. These are received and read by the frontline practitioners. However, several respondents expressed frustration since this form of communication is one-way and provides no opportunity for feedback and dialogue.
The annual general meeting is convened for all the hospital staff. The hospital managers thought that this would provide an opportunity for staff and management to engage in direct dialogue over issues of interest. They, however, noted that attendance was still very disappointingly poor.
"...During the annual assembly information is given to the staff about how much money the hospital got, what the demands are, the priority areas of the hospital, this is to give them a general view. However the assemblies are very poorly attended by staff members. People don't seem to be interested..."
Mechanisms for communication of decisions and reasons to the public were less clear. The radio is occasionally used in response to crises, but it is not often used because of the costs involved. Respondents expressed mixed feelings about availing information about priority setting decisions to the public. Some respondents were wary of publicity and feared that the information, being too technical, would be misinterpreted by the public, who may become more demanding. Others, however, felt that communication of decisions and reasons to the public – especially with regards to the resource constraints the hospital faces – would enable the public to have realistic expectations of the hospital and therefore deter the public from blaming the hospital management for the shortages of supplies within the hospital.
"...this information is not available to the general public. I must say that the public I think is fairly ignorant about the financial situation in the hospital. You know, there's a lot of blame placed squarely on foot of management for some of these things. But once somebody gets to what the facts on the ground are, people begin to change their perception about what they thought was a management fault..."
#### Dealing with disagreements
Frontline practitioners reported that they often disagreed with the priority setting decisions made at the hospital level, but were not aware of any formal mechanisms for challenging the decisions. In case of disagreements, practitioners usually write to, or verbally present their complaints to the senior management committee either directly or through the leaders of their departments. However, since they found that the management committee handled too many varying hospital related issues to address directorate specific complaints; practitioners often used the direct approach. They complained directly to the director of the hospital or his deputy who maintain an "open door" policy and could be accessed directly.
"... I actually often appeal through letters, directly to the Director, and you know, the Director then handles this on an individual basis, but I think it would be nice if there was a formal mechanism, or maybe if the formal mechanism exists, at least, for me to get to know it. I think it would improve also the running of the directorates if this actually happened on a regular basis, rather than when there was a crisis..."
Revisions of the priority setting decisions only occasionally occur, and are commonly in response to emergencies or crises. Usually this involves re-allocation of resources from one program to another, and is not popular. This lack of revision led people to question the usefulness of attending these meetings.
"...So there is that forum to which is the management committee and the all leaders of department are supposed to bring their comments.... In a way it could be like an appeal or a forum but you see what happens, if you come and complain, nothing is done, next month you come and complain... people lose morale they even cease to come. They just look at it as time wasting forum..."
### 2. Evaluation
#### Relevance
Resource allocation decisions were based on a complex cluster of both formal and informal factors. The formal factors identified in this study, such as the strategic plan and the hospital's management formula, have been documented in other settings [3]. Informal factors, such as lobbying, exerting pressure on management, and reacting to crises, also played a role. Although respondents agreed on the relevance of the formal factors, there was lack of agreement about the relevance of the informal factors. Respondents who got what they wanted based on informal factors and mechanisms, such as lobbying, thought these should be considered relevant. This was because the director of the hospital, who makes the final priority setting decisions, maintained an open door policy, which meant that anyone who was dissatisfied with a priority setting decision had an equal opportunity to directly argue their case. However, since achieving the desired results depended on individual characteristics, such as one's ability to present a good case, those respondents who did not have the lobbying and advocacy skills felt that priority setting would be fairer if only the formal factors (and mechanisms), such as the strategic plan, evidence and need, were the relevant reasons.
#### Publicity
There were attempts to communicate the decisions, but not the rationales, to the hospital staff through meetings and circulars, but these were not functioning well. In particular, there was a breach in the flow of information from the management to the rest of the hospital staff.
The hospital lacked systematic mechanisms for publicizing priority setting decisions and the rationales to the general public. Publicity to the general public was through the radio and newspapers. However, because of the costs involved, this was ad hoc and often in response to crises. Some respondents thought it would benefit the hospital if the public had access to information about priority setting.
#### Revision/Appeals
There were no formal mechanisms for appealing the priority setting reasoning. The senior management meeting, which was thought to be the formal institution for appealing, was said to be less effective in revising the decisions once made. Some practitioners found that the informal mechanisms, such as complaining directly to the hospital director instead of going to the senior management committee, were more effective in getting them what they wanted. However, revisions to priority setting decisions was generally hampered by lack of resources and this failure to revise priority setting decisions by the management team was a source of frustration for front-line practitioners who often reacted by refusing to participate in the decision making processes. Respondents expressed the need for fair, clear, explicit, and more responsive mechanisms for appeals and revisions.
#### Enforcement
There was no mention by participants of any system to ensure that the above three conditions were satisfied. Mechanisms to ensure adherence to set criteria, follow up of the implementation of the decisions and evaluation of the impact of the decisions were also lacking.
## Discussion
To the best of our knowledge, this paper presents the first in-depth empirical description and normative evaluation of an actual priority setting process in a hospital in a low-income country.
Our study included the views of many stakeholders directly involved in decision making in this context. Absent from this group, however, were patients, families of patients, and members of the general public – who are also relevant stakeholders. Since the people who are involved in the decision making bring various considerations to the decision, the lack of identifiable stakeholders leads us to conclude that the full range of relevant considerations were not brought to bear in this case [30].
Priority setting decisions in this hospital were based on both formal and informal reasons. Most of the respondents considered the formal reasons, such as those embodied within the strategic plan and the allocation formula, to be relevant. These reasons coincided with those described in similar contexts in high-income countries [3]. The informal reasons, such as lobbying, have also been described in high income countries and were not universally accepted [31]. The lack of support for the informal reasons has also been documented at the macro-level in Uganda [32]. Therefore, the identified formal reasons, and already justified reasons such as the epidemiological data on disease prevalence and severity, costs, effectiveness of interventions, and equity [32,33], should first be evaluated for their ethical appropriateness, then debated by the full range of stakeholders to determine the most locally relevant reasons.
Decisions are available to the staff members of the hospital but not to the general public. According to some of our respondents, publicity to the general public would reduce misunderstanding and wrongful blame of the hospital management, and increase the public's confidence in the hospital. Publicity is also thought to improve priority setting by engaging all stakeholders in a kind of policy learning about appropriate limit setting decisions [22]. We found that there were efforts to publicize the decisions. However, the mechanisms employed were neither systematic nor effective. To improve publicity, the decisions AND reasons should be communicated at all management and departmental meetings, and publicized through a hospital newsletter or hospital webpage. Meetings should be participatory, and should involve: (i) eliciting suggestions from participants when developing meeting agendas, and (ii) discussions and feedback. Since people are more motivated to participate if their recommendations are implemented [34], there should be clear action plans to follow up the implementation of the decisions made at these meetings.
Publicizing the decisions and the reasons to the general public may be even more challenging given that most of the general public has low literacy and may require innovative approaches to communicating priority-setting decisions. Innovative, yet affordable, approaches such as town hall meetings and print media should be explored. An annual general meeting involving the public would provide a platform for publicity. To ensure coherence, an acceptable level of detail and complexity should be determined through collaborations between management and public advisors, and the information should be publicized in simple language with the use of illustrations. Real-life experiences, e.g. from New Zealand and Tanzania [35,36], and experiences from research settings in Uganda and Tanzania [37,38], could be explored. Although the radio would also be effective in publicizing this information, lack of resources may hinder its use. Should resources be available, optimal use of the radio would necessitate regularly airing programs in the different dialects. The radio programs should be structured in such a way as to encourage public dialogue.
The concerns raised by some respondents, that publicity may increase unrealistic public demands, require further investigation. However, research carried out in Uganda and Tanzania suggests that when people are provided with the necessary evidence, they are able to meaningfully engage in simulated limit setting decision-making [38,39]. These findings emphasize the need for systematic public education and for provision to the public of the evidence on which decisions are based.
With regards to the appeals/revisions condition, the hospital had ineffective formal and effective informal appeals mechanisms. Formal appeals mechanisms are deficient in many health care systems [17], in which case informal mechanisms, such as lobbying, take precedence. Although these may be useful in getting a few "strong lobbyists" what they want, they are neither fair nor systematic and may be detrimental to the institution. The hospital should discourage informal mechanisms by refining the existing formal appeals mechanisms, making them explicit to the health practitioners and expanding the opportunity for appealing to other key stakeholders [5,19]. Information about these mechanisms should be publicized.
According to some of our respondents, direct lobbying of the hospital director, was thought to be an effective appeals mechanism because it gets people what they want. However, since some stakeholders may have privileged access to decision-makers, and some stakeholders may bring other 'back-door' techniques of persuasion to bear, this view does not align with a fair priority setting process [22]. In a fair process, an effective appeals mechanism features reason-based appeals by stakeholders and reason-based responses by decision makers. This give-and-take should be accessible to all stakeholders and perceived as consistent, transparent and open-minded, even in situations of disagreement about outcomes – i.e. when people do not get what they want. The managers of the hospital need to develop and publicize participatory guidelines for appealing and revisions, and communicate decisions and reasons in response to appeals.
There was a lack of clear accountability mechanisms for decision making in the hospital. A similar finding was reported in a study of priority setting in a hospital drug formulary in Canada [4]. Clearly, the conditions of fairness cannot be met without deliberate direct action by hospital leaders [39]. Therefore, the hospital needs to explicitly determine who should be accountable for which aspect of priority setting. Furthermore, the leaders of the directorates should be held accountable for communicating to members of their departments through feedback mechanisms directly from the members of staff to management. Departmental meetings with a member from the senior management committee (other than the departmental leader) in attendance would facilitate this. The hospital management should ensure that either the head of department or the deputy is under the direct jurisdiction of the ministry of health. Then the hospital management would be certain of permanently having a representative who is directly accountable to them.
### Applicability
Accountability for reasonableness provides a framework for fair priority setting processes. Fulfilling the four conditions, especially where capacity and resources are constrained, may be challenging and may require making difficult trade-offs, since a fair process as described in this paper may require resources which could be used elsewhere.
The authors recognize these constraints and recommend that when making these difficult decisions, in addition to considering the resources involved, decision makers, within their local contexts and realities, should also consider the justifications for implementing a fair process. First, acting fairly is the right thing to do. Second, it improves the legitimacy of the decisions. Third, some of the specific features of fairness, such as transparency and explicit reason-giving, may narrow the range of disagreement. Fourth, the fair process described here, which features stakeholder involvement, reason-giving, transparency, and responsiveness, helps to improve the quality of the decisions. An additional benefit of using an explicit framework, such as the one described here, is that it provides a common language for social policy learning that is accessible to all.
Should they choose to implement a fair process, decision makers need to consider what would be feasible considering their local realities. For some, this may mean starting off with implementing just one of the elements of a fair process, and adding the other elements as they progress; while others may develop innovative and less costly ways to implement all or some of the elements of a fair process.
### Limitations
The findings of this study may not be generalizable. However, generalizability was not our aim. This study provides an evidence base for improving priority setting in a hospital. These experiences may benefit other practitioners in similar contexts.
## Conclusion
We have provided a description of priority setting in a hospital in a low-income country and evaluated it against the leading framework, 'accountability for reasonableness'. The primary outcome is evidence-based recommendations to improve priority setting in this hospital and in other similar contexts.
## Competing interests
The author(s) declare that they have no competing interests.
## Authors' contributions
LK and DK conceptualized the study. LK collected and analyzed the data. LK and DK conceptualized and wrote the paper. All authors read and approved the final manuscript.
## Pre-publication history
The pre-publication history for this paper can be accessed here:
http://www.biomedcentral.com/1472-6963/6/127/prepub
## Acknowledgements
This study was funded by an Interdisciplinary Capacity Enhancement grant from the Canadian Institutes of Health Research to the Canadian Priority Setting Research Network and the Lupina foundation. DKM is funded by a New Investigator Award from the Canadian Institutes of Health Research. I would like to thank the hospital managers for having allowed me to carry out the study at their institution. I also thank the practitioners for their willingness to participate.
## References
• Martin DK, Walton N, Singer PA. Priority Setting in Surgery: Improve the process and share the learning. World Journal of Surgery. 2003;27:962–966. doi: 10.1007/s00268-003-7100-y. [PubMed]
• Ham C. Priority setting in the NHS: reports from six districts. British Medical Journal. 1993;307:435–8. [PubMed]
• Gibson JL, Martin DK, Singer PA. Setting priorities in health care organizations: criteria, processes and parameters of success. BMC Health Services Research. 2004;4:25. http://www.biomedcentral.com/1472-6963/4/25. [PubMed]
• Martin DK, Hollenberg D, MacRae S, Madden S, Singer PA. Priority setting in a hospital drug formulary: a qualitative case study and evaluation. Health Policy. 2003;66:295–303. doi: 10.1016/S0168-8510(03)00063-0. [PubMed]
• Daniels N, Sabin J. The ethics of accountability in managed care reform. Health Affairs. 1998;17:50–64. doi: 10.1377/hlthaff.17.5.50. [PubMed]
• Daniels N, Flores W, Pannarunothai S, Ndumbe PN, Bryant JH, Ngulube TJ, Wang Y. An evidence-based approach to benchmarking the fairness of health-sector reform in developing countries. Bulletin of the World Health Organization. 2005;83:534–40. [PubMed]
• Kapiriri L, Norheim OF, Heggenhougen K. Using the burden of disease information for health planning in developing countries: experiences from Uganda. Social Science and Medicine. 2003;56:2433–2441. doi: 10.1016/S0277-9536(02)00246-0. [PubMed]
• Kapiriri L, Norheim OF, Heggenhougen K. Public participation in health planning and priority setting at the district level in Uganda. Health policy and planning. 2003;18:205–213. doi: 10.1093/heapol/czg025. [PubMed]
• Gilson L, Doherty J, McIntyre D, Mwikisa C, Thomas S. The SAZA study: implementing health financing reform in South Africa and Zambia. Health Policy and Planning. 2003;18:31–46. doi: 10.1093/heapol/18.1.31. [PubMed]
• Hanson K, Atuyambe L, Kamwanga J, Mcpake B, Mungule O, Sengooba F. Towards improving hospital performance in Uganda and Zambia: reflections and opportunities for autonomy. Health Policy. 2002;61:73–94. doi: 10.1016/S0168-8510(01)00212-3. [PubMed]
• Bryant JH. Health priority dilemmas in developing countries. In: Coulter A, Ham C, editor. The global challenge of health care rationing. Buckingham: Open University Press; 2000. pp. 63–74.
• Steen HS, Jareg P, Olsen IT. Providing a core set of health interventions for the poor. Towards developing a framework for reviewing and planning – a systemic approach. Background document Oslo: Centre for health and social development. 2001.
• World health report. World Health Organization, Geneva; 2006.
• Ministry of health. Kampala: Government of Uganda; 2000. The national health policy.
• Financing Health Services in Uganda 1998/1999–2000/2001. National Health Accounts. 2004.
• Martin DK, Singer PA. A Strategy to Improve Priority Setting in Health Care Institutions. Health Care Analysis. 2003;11:59–68. doi: 10.1023/A:1025338013629. [PubMed]
• Ham C, Roberts G, (eds) Reasonable Rationing: International Experience of Priority Setting in Health Care. (Maidenhead, UK: Open University Press); 2003.
• Bell JAH, Martin DK, Hyland S, DePellegrin T, Bernstein M. SARS and Hospital Priority Setting: A Qualitative Case Study and Evaluation. BioMed Central Health Services Research. 2004;4:36. doi: 10.1186/1472-6963-4-36. [PubMed]
• Martin DK, Shulman K, Santiago-Sorrell P, Singer PA. "Priority Setting and Hospital Strategic Planning: A Qualitative Case Study". Journal of Health Services Research & Policy. 2003;8:197–201. doi: 10.1258/135581903322403254. [PubMed]
• Martin DK, Bernstein M, Singer PA. Neurosurgery Patients' Access to ICU Beds: Priority Setting in the ICU – A Qualitative Case Study and Evaluation. Journal of Neurology, Neurosurgery & Psychiatry. 2003;74:1299–1303. doi: 10.1136/jnnp.74.9.1299. [PubMed]
• Madden S, Martin DK, Downey S, Singer PA. Hospital Priority Setting with an Appeals Process: A qualitative case study and evaluation. Health Policy. 2005;73:10–20. [PubMed]
• Daniels N, Sabin JE. Setting Limits Fairly: Can we learn to share medical resources? (Oxford, UK: Oxford University Press); 2002.
• Yin RK. Case study research: design and methods. Thousand Oaks, CA: Sage Publications; 1994.
• Mulago Hospital Complex Budget Estimates for recurrent and capital development for financial year 2002/2005
• Kvale S. Interviews: An Introduction to Qualitative Research Interviewing. Thousand Oaks: Sage Publications; 1999.
• Altheide DL, Johnson JM. Criteria for assessing interpretive validity in qualitative research. In: Denzin NK, Lincoln YS, editor. Handbook for qualitative research. Thousand Oaks: Sage Publications; 1994. pp. 485–99.
• Cohen J. "Pluralism and Proceduralism,". Chicago-Kent Law Review. 1994;69:589–618.
• Rawls J. Political Liberalism. (New York: Columbia University Press); 1993.
• The New Vision, September 10, 2006
• Martin DK, Abelson J, Singer PA. "Participation in health care priority setting through the eyes of the participants". Journal of Health Services Research & Policy. 2002;7:222–9. doi: 10.1258/135581902320432750. [PubMed]
• Walton NA, Martin DK, Peter EH, Pringle DM, Singer PA. Priority setting and cardiac surgery: A qualitative case study. Health Policy. 2006 [PubMed]
• Kapiriri L, Norheim OF. Criteria for priority setting in health care in Uganda: exploration of stakeholders' values. Bulletin of the World Health Organization. 2004;82:172–179. [PubMed]
• Evans DB, Lim SS, Adam T, Edejer TT, WHO Choosing Interventions that are Cost Effective (CHOICE) Millennium Development Goals Team. BMJ. 2005;331:1457–61. doi: 10.1136/bmj.38658.675243.94. Epub 2005 Nov 10. [PubMed]
• Mullen P. Public involvement in health care priority setting: are the methods appropriate and valid? In: Coulter A, Ham C, editor. The Global Challenge of Health Care rationing. Philadelphia: Open University Press; 2000. pp. 163–174.
• Edgar W. Rationing health care in New Zealand – how the public has a say. In: Coulter A, Ham C, editor. The Global Challenge of Health Care Rationing. Philadelphia: Open University Press; 2000. pp. 175–191.
• http://www.idrc.ca/en/ev-43653-201-1-DO_TOPIC.html accessed on July 10th 2006.
• Kapiriri L, Robberstad B, Norheim OF. The relationship between prevention of mother to child transmission of HIV and stakeholder decision making in Uganda: implications for health policy. Health Policy. 2003;66:199–211. doi: 10.1016/S0168-8510(03)00062-9. [PubMed]
• Makundi E, Kapiriri L, Norheim OF. Combining evidence and values by the balance sheet method: the effect of deliberation about priority setting in a low-income country
• Reeleder D, Goel V, Singer PA, Martin DK. Leadership and Priority Setting. Health Policy. 2006 [PubMed]
## Topics, P

http://www.phy.olemiss.edu/~luca/Topics/p.html
p-Adic Number / Structure > s.a. differential equations; knot invariants; Non-Archimedean Structures.
* Idea: For each prime number p, the p-adic number system is an extension of the rational numbers different from the real number system.
* Motivation, use: Initially motivated by an attempt to use power-series methods in number theory; Now p-adic analysis essentially provides an alternative form of calculus.
$ Def: A uniformity on $$\mathbb Z$$ defined by giving, as fundamental set of entourages, $$W_n:= \{(x, y)~|~x = y \bmod p^n\} \subset \mathbb Z \times \mathbb Z$$, for all n (p is a prime).
@ General references: Gouvêa 97.
@ In cosmology and gravitation: Dragovich AIP(06)ht [cosmology]; > s.a. quantum cosmology; quantum spacetime.
@ Quantum theory: Khrennikov NCB(98)-a0906, Dubischar et al NCB(99)-a0906 [and correlations between quantum particles]; Dragovich NPPS(01) [quantum mechanics and quantum field theory]; Abdesselam a1104-conf [massless quantum field theory]; Hu & Zong a1502 [p-adic quantum mechanics, symplectic group and Heisenberg group]; Palmer a1804 [FTQP, finite theory of qubit physics]; > s.a. modified uncertainty relations; path integrals.
@ Other physics: Dragovich et al pUAA-a0904 [rev]; Rodríguez-Vega & Zúñiga-Galindo PJM-a0907 [p-adic fields, pseudo-differential equations and Sobolev spaces]; Dragovich a1205-proc [p-adic matter in the universe]; Abdesselam et al a1302; Zelenov TMP(14) [p-adic dynamical systems]; Dragovich et al pNUAA(17)-a1705 [rev]; > s.a. classical mechanics [generalizations].
> Online resources: see MathWorld page; Wikipedia page.
Pachner Moves, Pachner Theorem > s.a. types of manifolds [PL, combinatorial].
@ In 4D: Korepanov a0911 [algebraic relations with anticommuting variables and topological field theory]; Banburski et al PRD(15)-a1412 [in a Riemannian spin-foam model]; Kashaev a1504.
> And physics: see regge calculus.
Packings > s.a. sphere.
@ References: Jaoshvili et al PRL(10) + Frenkel Phy(10) [random packings of tetrahedral dice].
Padé Approximant / Approximation
* Idea: The "best" approximation of a function by a rational function of given order; It often gives better approximation of the function than truncating its Taylor series, and it may still work where the Taylor series does not converge.
@ References: Wei et al JCAP(14)-a1312 [cosmological applications].
> Online resources: see MathWorld page; Wikipedia page.
Painlevé Equations / Analysis / Test
* Idea: A criterion of integrability for partial differential equations, which involves the following steps: (1) Show that the general solution can be represented as a (formal Laurent) series in powers of some function Φ that vanishes on an arbitrary non-characteristic surface; (2) Verify the possibility of truncating the series at some finite power of Φ.
* Consequences: If satisfied, the equation is integrable, and we can get Bäcklund transformations and a (weak) Lax pair; If not satisfied, we cannot conclude the opposite.
@ General references: Weiss et al JMP(83); Weiss JMP(83); Ramani et al PRP(89); Steeb & Euler 88; Lakshmanan & Sahadevan PRP(93); Guzzetti JPA(06)-a1010, IMRN(12)-a1010 [Painlevé VI equation].
@ Integrable equations without Painlevé property: Ramani et al JPA(00)-a0709; Tamizhmani et al Sigma(07)-a0706.
@ And general relativity: see García-Díaz et al JMP(93); > s.a. chaos in gravitation.
@ Discrete versions: Grammaticos et al PRL(91); Ramani et al PRL(91); Grammaticos & Ramani PS(14) [rev]; Kajiwara et al JPA(17)-a1509 [geometrical aspects].
@ Related topics: Sakovich Sigma(06)n.SI/04-conf [quadratic H that fails the integrability test]; Aminov et al a1306 [multidimensional versions of the Painlevé VI equation]; Bermudez et al JPA(16)-a1512 [solutions to the Painlevé V equation using supersymmetric quantum mechanics].
> Online resources: see The Painlevé Project site.
Painlevé-Gullstrand Coordinates / Metric > see spherically symmetric geometries.
@ References: Jaén & Molina GRG(16)-a1611 [natural extension].
> Generalized to rotating spacetimes: see kerr metric; kerr-newman metric.
Pair Creation / Production > s.a. particle effects [Schwinger effect]; quantum field theory effects in curved spacetime.
@ References: Petrat & Tumulka JPA(14) [multi-time formulation].
Pais-Uhlenbeck Model > s.a. quantum oscillators.
* Idea: A field theory with a higher-derivative field equation.
* The ghost issue: Applying the Ostrogradski approach to the Pais-Uhlenbeck oscillator yields a Hamiltonian which is unbounded from below, which leads to a ghost problem in quantum theory; It was believed for many years that the model possesses ghost states attributable to the field equation having more than two derivatives, and therefore that it is a physically unacceptable quantum theory; In reality, the Pais-Uhlenbeck model does not possess ghost states, when quantized according to the rules of PT quantum mechanics, and is a perfectly acceptable quantum theory.
@ General references: Pais & Uhlenbeck PR(50); Kaparulin & Lyakhovich a1506-proc [energy and stability]; Kaparulin et al JPA(16)-a1510 [interactions]; Avendaño-Camacho et al a1703 [stability, perturbation-theory approach].
@ Ghost-free formulations: Bender & Mannheim JPA(08)-a0807; Nucci & Leach PS(10)-a0810, JMP(09); Banerjee a1308.
@ Hamiltonian formulation: Mostafazadeh PLA(10)-a1008; Andrzejewski NPB(14)-a1410 [and symmetries]; Masterov NPB(16)-a1505 [without ghost problem]; Sarkar et al a1507 [resolving the issue of the branched Hamiltonian]; Masterov a1603 [(2n+1)-order generalization].
@ Quantum theory: Mannheim & Davidson PRA(05)ht/04 [Dirac quantization]; Di Criscienzo & Zerbini JMP(09)-a0907 [euclidean path integral and propagator]; Mostafazadeh PRD(11)-a1107 [consistent quantization]; Bagarello IJTP(11), Pramanik & Ghosh MPLA(13)-a1205 [coherent states]; Cumsille et al IJMPA(16)-a1503 [polymer quantization]; Berra-Montiel et al AP(15)-a1505 [deformation quantization]; Fernández a1605 [and its PT-variant].
@ Applications: Ketov et al a1110-ch [as a toy-model for quantizing f(R) gravity theories].
Palatini Action / Formulation of Gravity Theory
* Idea: A formulation in which the metric and connection are assumed to be independent fields, as in metric-affine theories; Conceptually this amounts to considering the geodesic structure and the causal structure of the spacetime as independent.
> Theoretical aspects: see first-order actions for general relativity; higher-dimensional and higher-order gravity; kaluza-klein theories; non-local gravity.
> Phenomenology: see cosmology of higher-order theories; dilaton.
PAMELA (Payload for Antimatter/Matter Exploration and Light-nuclei Astrophysics) > s.a. cosmic rays.
* Idea: A space mission onboard an Earth-orbiting spacecraft that studies cosmic rays.
@ References: Adriani et al PRL(09), PRL(13) [results on positron excess].
> Online resources: see PAMELA website; Wikipedia page.
Pancharatnam Phase > see geometric phase.
Paneitz Equation > see partial differential equations.
Paneitz Operator
* Idea: A 4th-order differential operator which occurs in the theory of conformal anomalies; According to a conjecture, it gives 8π when acting upon the invariant volume of the past light cone.
@ References: Park & Woodard GRG(10)-a0910 [and volume of the past light cone].
Papapetrou Field > see gravitomagnetism.
Papapetrou Solution > s.a. kerr solutions [Papapetrou gauge].
@ References: Khugaev & Ahmedov IJMPD(04) [generalization].
Papapetrou Theorem
* Idea: A theorem about the equivalence of two sets of circularity conditions for (pseudo)stationary, asymptotically flat empty spacetimes; For stationary axisymmetric sources, $$g_{ab}$$ shares these symmetries.
Papapetrou-Majumdar Metrics [> black-hole solutions]
* Idea: A family of electrovac solutions of Einstein's equation which are static because of balance between gravitational and electromagnetic forces, for special charge/mass ratios.
@ General references: Papapetrou PRIA(47); Majumdar PR(47); Hartle & Hawking CMP(72) [interpretation]; Heusler CQG(97)gq/96 [uniqueness].
@ Related topics: Gürses PRD(98)gq [dust generalization]; Varela GRG(03)gq/02 [charged dust sources].
Parabola > see conical sections.
Paraboloidal Coordinates > see coordinates.
Paracompact Topological Space
Paradoxes > s.a. Fermi Paradox; Trouton-Noble Paradox.
> In mathematics: see logic; Parrondo's Paradox; probability; Zeno's Paradox.
> In gravitation and cosmology: see black-hole information paradox; causality violations; expansion; Olbers' Paradox.
> In quantum theory: see EPR paradox; Klein Paradox; quantum correlations; quantum effects; quantum foundations; wave-function collapse.
> In special relativity: see arrow of time [causal paradoxes]; clocks; Ehrenfest, Lock and Key, Submarine, Twin Paradox; special relativity; kinematics.
> In statistical physics: see Gibbs Paradox; probability in physics; quantum statistical mechanics; Recurrence Paradox; statistical mechanics.
@ General references: Klein 96; Chang 12 [in scientific inference].
@ In thermodynamics: Cucić a0812, a0912 [and statistical physics]; Yoder & Adkins AJP(11)aug [ellipsoid paradox]; Sheehan et al FP(14) [diatomic gas in a cavity].
Parafermions > see generalized particle statistics.
Parallax > s.a. cosmological observations [cosmic parallax].
* Stellar parallax: The annual apparent displacement of the stars that occurs because of Earth's orbit around the Sun.
@ References: Timberlake TPT(13)-a1208 [history, and aberration]; Räsänen JCAP(14)-a1312 [cosmic parallax, covariant treatment].
Parallel Transport > s.a. Fermi Transport; connection; foliation [web]; Path.
* Idea: Defined on a manifold that has a connection; A tensor T is parallel transported along a curve with tangent vector X if $$\nabla_{\!X}T = 0$$.
@ General references: Anandan & Stodolsky PLA(00)qp/99 [classical and quantum physics]; Wagh & Rakhecha JPA(99) [gauge-independent form]; Iliev IJGMP(05)m.DG [and connections], IJGMP(08) [axiomatic approach]; Iurato a1608 [history, Levi-Civita].
@ Specific spaces and metrics: Bini et al IJMPD(04)gq [circular orbits, stationary axisymmetric spacetime]; Chatterjee et al RVMP(10)-a0906 [over path spaces].
@ Generalizations: Soncini & Zucchini JGP(15)-a1410 [higher parallel transport in higher gauge theory].
Parallel Universes > see multiverse.
Parallelizable Manifold > see types of manifolds.
Parallelotope > a special type of Polytope.
Paramagnetism > see magnetism.
Parametric Excitation / Resonance > see resonances.
Parametrix > see approaches to canonical quantum gravity.
Parametrized Post-Friedmannian Formalism > see under Post-Friedmannian.
Parametrized Post-Newtonian Formalism > see under PPN Formalism.
Parametrized Theories
Paraphotons
* Idea: Low-mass extra U(1) gauge bosons with gauge-kinetic mixing with the ordinary photon.
@ References: Jaeckel & Ringwald PLB(08)-a0707 [search, cavity experiment].
Parastatistics > see particle statistics.
Parisi-Sourlas Mechanism
@ References: Magpantay IJMPA(00)ht/99 [in Yang-Mills theory].
Parity
Parrondo's Paradox
* Idea: The proposition that two losing strategies can, by alternating randomly, produce a winner.
@ References: Martin & von Baeyer AJP(04)may.
Parseval's Integral > see bessel functions.
Parseval's Relation / Theorem > see fourier analysis.
Part > see Subsystem.
Partially Massless Fields > see spin-2 fields; types of field theories; types of yang-mills theories.
Partially Massless Gravity Theory > see massive gravity.
Partially Ordered Set > see poset.
Particle Descriptions and Types > see effects, models, statistics, types; classical and quantum models; spinning particles.
Particle Horizon > see horizons.
Particle Physics > s.a. experimental particle physics.
Particle Physics Phenomenology > see lattice field theory; QCD, QED, and string phenomenology; Zweig Rule.
Particle Statistics > s.a. generalized particle statistics.
Partition, Partition of Unity, Partition Relation > see partition.
Partition Function > see states in statistical mechanics.
Parton Models > see hadrons.
Pascal > see programming languages.
Paschen-Back Effect > see Zeeman Effect.
Past > see spacetime subsets; photons and Trajectory in Quantum Mechanics [past of a quantum particle].
Pataplectic Hamiltonian Formulation > see hamiltonian dynamics.
Path > s.a. loops; Parallel Transport; Trajectory [in classical and quantum mechanics].
* For a field: The path in a region Ω of spacetime is a cross-section of the bundle of internal degrees of freedom over Ω.
@ Path group: Mensky G&C(02)gq [gravity and paths in Minkowski spacetime], gq/02-conf [in gauge theory and general relativity]; > s.a. types of groups.
@ Path space: Cho & Hong a0706 [Morse theory]; Biswas & Chatterjee IJGMP(11) [geometric structures]; Chatterjee et al JGP(13) [bundles and connections over path spaces]; Chatterjee IJGMP(15)-a1401 [double category of geodesics on path space]; Gerstenhaber a1403 [path algebras and de Broglie waves]; > s.a. measure [Wiener measure].
@ Path-dependent functions: Reyes JMP(07)ht/06 [operators].
Patterns > s.a. composite quantum systems.
@ Pattern theory: Grenander 76-81.
Pauli Equation > s.a. Scale Relativity.
@ References: Mancini et al JPA(01)qp/00 [for probability distributions]; Zhalij JMP(02)mp [separation of variables].
Pauli Exclusion Principle > see spin-statistics.
Pauli Matrices > see SU(2).
Pauli Theorem > see time in quantum theory.
Pauli-Fierz Lagrangian / Theory > s.a. spin-2 field theories; path-integral formulation of quantum field theory [spin-1/2].
* Idea: A theory of massive charged spin-2 fields $$h_{\mu\nu}$$,

$ {\cal L} = |g|^{1/2}\,[\hbox{$R$ up to quadratic terms} + m^2\,(h_{\mu\nu}h^{\mu\nu} - h^2)]\;; $

The theory arises also as an effective 4D theory in brane models; It does not reproduce linearized general relativity in the m → 0 limit, and has a ghost problem.
* van Dam-Veltman discontinuity: A discontinuity in the Pauli-Fierz formulation; The deflection angle in the background of a spherically symmetric gravitational field converges to 3/4 of the value predicted by the massless theory (linearized general relativity) as m → 0.
@ General references: Fierz & Pauli PRS(39); Groot Nibbelink & Peloso CQG(05)ht/04 [covariant]; Obukhov & Pereira PRD(03) [teleparallel origin]; Georgescu et al CMP(04) [massless, spectral theory]; Leclerc gq/06 [gauge and reduction]; Osipov & Rubakov CQG(08)-a0805 [superluminal graviton propagation]; Hasler & Herbst RVMP(08) [Hamiltonians]; González et al JHEP(08) [duality]; Loss et al LMP(09) [degeneracy of eigenvalues of Hamiltonian]; de Rham & Gabadadze PLB(10)-a1006 [non-linear completion without ghosts]; Park CQG(11)-a1009 [effect of quantum interactions]; Deser CJP(15)-a1407 [action, and manifestly positive energy].
@ Variations: Boulanger & Gualtieri CQG(01)ht/00 [PT non-invariant deformation]; de Rham & Gabadadze PRD(10)-a1007 [with generalized mass and interaction terms]; Park JHEP(11)-a1011 [non-Pauli-Fierz theory, unitarization]; Deffayet & Randjbar-Daemi PRD(11)-a1103 [non-linear, from torsion]; Alberte IJMPD(12)-a1202 [on an arbitrary curved background]; > s.a. massive gravity [including non-Pauli-Fierz theory].
> Online resources: see Wikipedia page on Markus Fierz.
Pauli-Jordan Function > s.a. green functions in quantum field theory.
* Idea: A type of Green function for a quantum field.
* For a scalar field: The two-point function $$G(x, x'):= -{\rm i}\,\langle0|\,[\phi(x), \phi(x')]\,|0\rangle$$.
* Properties: It satisfies the homogeneous field equation.
Pauli-Villars (Covariant) Regularization > see regularization.
PCAC
$ Meaning: Partial Conservation of Axial Current.
Peano's Axioms > see mathematics.
Peano Curve > see fractals.
Peccei-Quinn Mechanism / Symmetry > s.a. axion; neutron.
* Idea: A field theory mechanism by which a discrete symmetry arises from the spontaneous breaking of a continuous symmetry.
@ References: Mercuri PRL(09)-a0902 [gravitational, and Barbero-Immirzi parameter]; Takahashi & Yamada JCAP(15)-a1507 [breaking, in the early universe].
Peeling Property of Spacetime
* Idea: A property of the Weyl tensor in asymptotically flat spacetimes.
@ References: Geroch in(77); in Wald 84, p285; Bressange & Hogan PRD(99) [lightlike signals in Bondi-Sachs]; Klainerman & Nicolò CQG(03) [and initial data set falloff]; Pravdová et al CQG(05)gq [even higher dimensions]; Friedrich a1709 [and isolated systems, asymptotic flatness and simplicity].
Peierls Argument > see ising models [spontaneous magnetization].
Peierls Brackets > s.a. canonical general relativity; types of symplectic structures.
* Idea: A bracket defined on the covariant phase space of a field theory, corresponding to the Poisson bracket on the canonical phase space.
@ General references: Peierls PRS(52); DeWitt in(64), in(99); Esposito et al ht/02 [intro]; Bimonte et al IJMPA(03)ht [field theory], ht/03 [dissipative systems]; DeWitt & DeWitt-Morette AP(04) [and path integrals]; Esposito & Stornaiolo IJGMP(07)ht/06 [for type-I gauge theories, and Moyal bracket].
@ Generalizations: Marolf AP(94)ht/93; Sharapov IJMPA(14)-a1408 [in non-Lagrangian field theory].
Peirce Logic > see clifford algebra; dirac field theory.
Peltier Effect > see electricity [thermoelectricity].
@ References: Heremans & Boona Phy(14) [spin Peltier effect].
Pendulum > s.a. kinematics of special relativity, oscillator.
* Non-linear or physical pendulum: The Hamiltonian and equation of motion are given by
$ H = {\textstyle{1\over2}}\,p^2 - \omega^2 \cos x\,, \qquad {{\rm d}^2x\over{\rm d}t^2} + \omega^2 \sin x = 0\;. $
* Linearization: Gives the simple harmonic oscillator.
@ General references: Matthews 00 [history, education, r pw(01)feb]; Baker & Blackburn 05 [r PT(06)jul]; Gitterman 08 [noisy]; Baker 11; Brizard CNSNS-a1108 [action-angle coordinates]; Dahmen a1409/EPJH [historical, Denis Diderot's paper on pendulums and air resistance].
@ Beyond the small-angle approximation: Lima & Arun AJP(06)oct; Turkyilmazoglu EJP(10); Bel et al EJP(12) [periodic solutions by the homotopy analysis method].
@ Foucault's pendulum: Hart et al AJP(87)jan; Khein & Nelson AJP(93)feb [Hannay angle]; Pardy ap/06 [astronomical analogs]; von Bergmann & von Bergmann AJP(07)oct [and geometry]; news THE(10)jun [pendulum is irreparably damaged]; Jordan & Maps AJP(10)nov [in pictures].
@ Other types: Butikov AJP(01)jul [inverted, stabilization]; Rafat et al AJP(09)mar [double, with square plates]; Bassan et al PLA(13) [torsion pendulum, Lagrangian model and small misalignments].
@ Quantum: Cushman & Śniatycki a1603 [spherical pendulum, geometric quantization].
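The equation of motion above is easy to explore numerically; here is a minimal sketch (Python; the frequency ω = 1 and the 120° release angle are assumed values chosen for illustration) comparing the small-angle period 2π/ω with the exact period of the non-linear pendulum, computed both from the complete elliptic integral and by direct integration:

```python
# Period of the non-linear pendulum  d^2x/dt^2 + omega^2 sin x = 0
# vs the small-angle result 2*pi/omega.  (Sketch; parameter values assumed.)
import numpy as np
from scipy.special import ellipk
from scipy.integrate import solve_ivp

omega = 1.0                # natural frequency (assumed)
x0 = np.radians(120.0)     # release angle, starting from rest (assumed)

# Exact period via the complete elliptic integral, T = (4/omega) K(sin^2(x0/2)):
T_elliptic = (4.0 / omega) * ellipk(np.sin(x0 / 2)**2)

# Numerical check: integrate the equation of motion, and measure the spacing
# between successive downward zero crossings of x(t), which is one period.
def rhs(t, y):             # y = [x, dx/dt]
    return [y[1], -omega**2 * np.sin(y[0])]

sol = solve_ivp(rhs, (0.0, 30.0), [x0, 0.0], dense_output=True,
                rtol=1e-10, atol=1e-12)
t = np.linspace(0.0, 30.0, 200001)
x = sol.sol(t)[0]
down = t[1:][(x[:-1] > 0) & (x[1:] <= 0)]   # downward zero crossings
T_numerical = down[1] - down[0]

print(f"small-angle T = {2*np.pi/omega:.6f}")
print(f"elliptic    T = {T_elliptic:.6f}")
print(f"numerical   T = {T_numerical:.6f}")
```

For this amplitude the true period is about 37% longer than the small-angle value, illustrating why the linearization fails beyond small angles.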
Penning Trap > s.a. lorentz-symmetry violation phenomenology; proton [mass measurement].
* Idea: A trap for electrons and other charged particles, made with a special configuration of static electric and magnetic fields.
@ References: Brown & Gabrielse RMP(86); Blaum et al CP(10) [and experiments in fundamental physics].
Penrose Diagram > s.a. asymptotic flatness.
* Idea: A diagram of spacetime, as compactified by a suitable conformal transformation.
@ General references: Penrose in(64); Jadczyk RPMP(12)-a1107 [geometry of Penrose's 'light cone at infinity']; Schindler & Aguirre a1802 [algorithm].
@ Specific types of spacetimes: Brown & Lindesay CQG(09)-a0811 [accreting black holes]; Lindesay & Sheldon CQG(10) [transient black holes].
Penrose Dodecahedron
* Idea: A set of 40 states of a spin-3/2 particle used by Zimba and Penrose to give a proof of Bell's non-locality theorem.
@ References: Zimba & Penrose SHPSA(93); Massad & Aravind AJP(99)jul.
Penrose Inequality / Conjecture
* Idea: For a spherically symmetric metric, the mass satisfies the inequality below on any apparent horizon; More generally, the total mass of a spacetime which contains black holes with event horizons of total area A satisfies

$ GM/c^2 \ge (A/16\pi)^{1/2}\;. $
@ General references: Penrose NYAS(73); Ludvigsen & Vickers JPA(83) [partial proof]; Malec & Ó Murchadha PRD(94) [and refs]; Frauendiener PRL(01)gq [towards a proof]; Malec et al PRL(02)gq [general horizons]; Malec & Ó Murchadha CQG(04)gq [re use of Jang equation]; Karkowski & Malec APPB(05)gq/04 [numerical evidence]; Ben Dov PRD(04) [(counter)example]; Tippett PRD(09)-a0901 [violated for prolate black holes]; Mars CQG(09)-a0906 [rev]; Bengtsson & Jakobsson GRG(16)-a1608 [toy version with proof].
@ Charged black holes: Disconzi & Khuri CQG(12)-a1207 [charged black holes]; Khuri GRG(13)-a1308; Lopes de Lima et al a1401 [in higher dimensions]; Khuri et al CQG(15)-a1410 [extensions].
@ Riemannian: Huisken & Ilmanen (97) [proof, single black hole]; Bray JDG(01) [proof]; Bray & Chruściel in(04)gq/03; Ohashi et al PRD(10)-a0906; Khuri et al CM-a1308 [with charge, for multiple black holes].
@ Other generalizations: Gibbons in(84); Karkowski et al CQG(94) [gravitational waves]; Herzlich CMP(97) [asymptotically flat, R ≥ 0]; Khuri CMP(09) [general initial data sets]; Carrasco & Mars CQG(10) [generalized-apparent-horizons version, counterexample]; Brendle & Wang CMP(14)-a1303 [2D spacelike surfaces in Schwarzschild spacetime]; Alexakis a1506 [perturbations of the Schwarzschild exterior]; Roesch a1609, Bray & Roesch a1708 [null Penrose conjecture]; Husain & Singh PRD(17)-a1709 [in AdS space].
Penrose Limit
* Idea: A procedure whereby the immediate neighborhood of an arbitrary null geodesic is "blown up" to yield a pp-wave as a limit; Given a metric written in coordinates adapted to the null geodesic (can always be done), the procedure consists in replacing $$(u, v, y^i)$$ by $$(u, \lambda^2 v, \lambda y^i)$$ in the line element, and then taking the limit as λ → 0 of $${\rm d}s^2\!/\lambda^2$$; One is then left with a metric of the form $${\rm d}s^2 = 2\,{\rm d}u\,{\rm d}v + C_{ij}\,{\rm d}y^i\,{\rm d}y^j$$; Ricci-flat metrics and Einstein metrics both give Ricci-flat metrics as results.
@ References: Floratos & Kehagias JHEP(02)ht [orbifolds and orientifolds]; Siopsis PLB(02)ht, MPLA(04)ht/02 [AdS, and holography]; Hubeny et al JHEP(02)ht [non-local theories]; Kunze PRD(05)gq/04 [curvature and matter]; Philip JGP(06) [of homogeneous spaces].
Penrose Mechanism / Process > s.a. black-hole phenomenology.
* Idea: A method for extracting energy from a rotating black hole; Send a mass into a trajectory inside the ergosphere, against the black hole's rotation; Separate the mass into two parts and let one fall inward; The outgoing one may have more energy than the initial one, obtained by slowing the black hole down; Results in an increase of the black hole's irreducible mass $$m_{\rm irr}$$.
* Variations: The collisional Penrose process, or super-Penrose process, consists of particle collisions in the ergoregion.
@ General references: Penrose RNC(69), & Floyd NPS(71); Christodoulou & Ruffini PRD(71); Wald AJ(74); Wagh & Dadhich PRP(89); Fayos & Llanta GRG(91) [limitations]; Williams phy/04; Heller a0908; Schnittman PRL(14)-a1410 [upper limit to energy extraction]; Bravetti et al PRD(16)-a1511 [thermodynamic optimization].
@ Collisional Penrose process: Schnittman PRL(14)-a1410; Berti et al PRL(15)-a1410; Zaslavskii MPLA(15)-a1411; Zaslavskii IJMPD-a1510; Leiderschneider & Piran PRD(16)-a1510 [maximal efficiency]; Patil et al PRD(16)-a1510 [efficiency]; Zaslavskii GRG(16)-a1511, PRD(16)-a1511; Ogasawara et al PRD(16)-a1511 [heavy particle production]; Schnittman GRG(18).
@ Other variations: Lasota et al PRD(14)-a1310 [generalized].
@ Related topics: Williams ap/02/PRD [Compton scattering and e+e production]; Cen a1102-wd [astrophysical scenario].
Penrose Tiling > see tiling.
Percolation > s.a. ising models; in lattice field theory; Transport; voronoi tilings.
* Idea: The theory was initiated by Broadbent and Hammersley PPCS(57) as a mathematical framework for the study of random physical processes, such as flow through a disordered porous medium with randomly blocked channels in a gravitational field; It has proved to be a remarkably rich theory, with applications beyond natural phenomena to topics such as network modelling and the contact process for epidemic spreading.
* Phase transition: It turns out that the system undergoes a continuous phase transition with a non-trivial critical behavior, at which it becomes macroscopically permeable.
@ General references: Stauffer & Aharony 94 [intro]; Bollobás & Riordan 06; Duminil-Copin a1712-proc [rev, historical].
@ Theory: Cardy mp/01-ln [conformal field theory methods]; Smirnov & Werner MRL-m.PR/01 [triangular 2D lattice]; Bollobás & Riordan RSA(06)m.PR/04; Janssen & Täuber AP(05) [field theory approach, rev]; Gliozzi et al NPB(05) [random, as gauge theory]; Ziff et al JPA(11) [factorization of the three-point density correlation function]; Curien & Kortchemski PTRF-a1307 [on random triangulations].
@ Critical: Grassberger JPA(99); Cardy JPA(02)mp; Ridout NPB(09)-a0808 [and Watts' crossing probability].
@ Directed: Grassberger JSP(95); Janssen et al JPA(99) [equation of state]; Grimmett & Hiemer m.PR/01; Takeuchi et al PRL(07), PRE(09) + Hinrichsen Phy(09) [experimental realization]; Chen PhyA(11) [square lattice, asymptotic behavior].
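The phase transition mentioned above is easy to see in a simulation; here is a minimal Monte Carlo sketch (Python; the lattice size, trial count and use of scipy.ndimage.label are illustrative choices, not taken from the references) estimating the probability that open sites form a top-to-bottom spanning cluster, which rises sharply near the square-lattice critical value p_c ≈ 0.5927:

```python
# Site percolation on an L x L square lattice: estimate the probability
# of a top-to-bottom spanning cluster for several occupation probabilities p.
# (Sketch; lattice size and trial count are assumed values.)
import numpy as np
from scipy.ndimage import label

def spans(p, L, rng):
    grid = rng.random((L, L)) < p                 # open sites with probability p
    labels, _ = label(grid)                       # 4-connected clusters
    top, bottom = labels[0, :], labels[-1, :]     # cluster labels on the two edges
    return np.intersect1d(top[top > 0], bottom[bottom > 0]).size > 0

rng = np.random.default_rng(0)
L, trials = 64, 200
for p in (0.45, 0.55, 0.5927, 0.65, 0.75):        # p_c ~ 0.5927 (square lattice)
    frac = sum(spans(p, L, rng) for _ in range(trials)) / trials
    print(f"p = {p:.4f}: spanning fraction = {frac:.2f}")
```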
Perfect Fluid > s.a. fluid; gas.
Perfect Group > see group types.
Perfect Number > see number theory.
Perfect Space > see types of topologies.
Periastron / Perihelion Precession > see Precession; black-hole binaries; orbits in newtonian gravity; test-body orbits; tests of general relativity.
Periodic Orbits > see classical systems [Bertrand's theorem; non-linear systems].
Perl > see programming languages.
Permanent of a Matrix > see matrix.
Permeability > see magnetism.
Permittivity > see electricity in matter.
Permutations > see finite groups; particle statistics [identical particles].
@ References: Huggett BJPS(99) [as a symmetry in quantum mechanics]; Olshanski a1104-ch [random permutations]; Cori et al EJC(12) [formulas for the number of factorizations of permutations]; Baker PhSc(13) [in quantum field theory, and theories with no particle interpretation].
Permutons > see phase transitions [in combinatorial systems].
Perpetual Motion Machine / Perpetuum Mobile > s.a. thermodynamics [violations of second law].
@ References: Chernodub a1203 [permanently rotating devices]; Jenkins AJP(13)-a1301 [early 18th century demonstrations by Orffyreus, con man].
> Related topics: see Maxwell's Demon; de sitter space [example].
> Online resources: see Continuous Frictioned Motion Machine page.
Perplex Numbers > see types of numbers.
Perron-Frobenius Operator > see under Frobenius-Perron.
Persistent Homology > see types of homology.
Perspectivism > see philosophy of science.
Perturbation Methods / Theory > s.a. fluids; quantum field theory techniques.
* In classical mechanics – Example: Delicate stuff – If initially stationary, Venus and Earth would collide in less than 370 yrs; If isolated in orbit around each other, never; So, what is the effect of Venus on Earth's trajectory?
* In quantum mechanics – Approaches: The usual time-dependent perturbation theory for solving the Schrödinger equation does not preserve unitarity; The Magnus expansion (also known as exponential perturbation theory) does provide unitary approximate solutions.
@ Texts: Giacaglia 72; Kevorkian & Cole 81; Gallavotti 83; Bender & Orszag 99; Holmes 13.
@ For differential equations: Odibat & Momani PLA(07) [homotopy perturbation method].
@ Hamiltonian systems: Lewis et al PLA(96) [time-dependent, invariants]; Laskar & Robutel ap/00 [symplectic integrators].
@ Related topics: Marmi m.DS/00-ln [small denominators, intro]; Amore mp/04-proc [anharmonic oscillator, classical and quantum], et al EJP(05)mp/04 [removal of secular terms]; Pound PRD(10)-a1003 [singular]; > s.a. classical systems; oscillator; series [convergence acceleration and divergent series].
@ In quantum mechanics: Sen IJMPA(99)cm/98 [singular potentials]; Fernández 01, JPA(06)qp/04; Franson & Donegan PRA(02)qp/01 [t-dependent]; Teufel 03 [adiabatic perturbation theory]; Ciftci et al PLA(05)mp [iterative]; Weinstein ht/05, NPPS(06)ht/05 [adaptive]; Albeverio et al RPMP(06) [singular, rigged Hilbert space approach]; Harlow a0905 [bound on the error]; Fernández a1004 [confined systems]; Blanes et al EJP(10) [Magnus expansion or exponential perturbation theory, pedagogical]; Hayata PTP(10)-a1010 [without weak-coupling assumption]; Faupin et al CMP(11) [for embedded eigenvalues, second-order]; Kerley a1306 [time-independent]; Rigolin & Ortiz PRA(14)-a1403 [degenerate adiabatic perturbation theory].
> Gravity-related areas: see black-hole perturbations; cosmological perturbations; metric perturbations in general relativity.
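As a concrete quantum-mechanical example of the ideas above, here is a minimal sketch (Python; the matrix size, coupling and random perturbation are assumed choices) of second-order Rayleigh-Schrödinger perturbation theory for a finite-dimensional Hamiltonian, checked against exact diagonalization:

```python
# Second-order perturbation theory for H = H0 + lam*V with nondegenerate H0,
# compared with exact diagonalization.  (Sketch; parameters assumed.)
import numpy as np

rng = np.random.default_rng(2)
N, lam = 6, 0.05
E0 = np.arange(1.0, N + 1)                 # unperturbed levels 1, 2, ..., N
H0 = np.diag(E0)
V = rng.standard_normal((N, N))
V = (V + V.T) / 2                          # Hermitian perturbation

n = 0                                      # correct the ground state
E1 = V[n, n]                               # first-order shift
E2 = sum(V[n, k]**2 / (E0[n] - E0[k])      # second-order shift
         for k in range(N) if k != n)

exact = np.linalg.eigvalsh(H0 + lam * V)[0]
print(f"perturbative: {E0[n] + lam*E1 + lam**2*E2:.8f}")
print(f"exact:        {exact:.8f}")
```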
Peter-Weyl Theorem > see quantum mechanics representations [and Segal-Bargmann transform].
Petrov, Petrov-Pirani Classification
Pfaff Derivative of a Function
$Def: ∂k f:= ek(f), with ek a basis for Tx X, such that df |X = ek(f) θk |x, with θk the dual basis. * Idea: Just a generalization of the regular partial derivatives to the case in which ek is not necessarily the coordinate basis ∂/∂xk. Pfaffian of a Matrix * Idea: Given an antisymmetric 2m × 2m matrix, its Pfaffian is a polynomial in its entries, whose square gives the determinant of the matrix. Phantom Divide * Idea: The point in cosmological history at which w (the ratio of pressure to energy density for the effective fluid matter used to describe cosmological models) crossed the value −1, or the value −1 itself in the range of possible values for w. @ References: Zhang a0909-ch [approaches]. Phantom Field > s.a. born-infeld theory; Quintom; wormholes. * Idea: An exotic scalar field with a negative kinetic term (as a fluid, it has an equation of state with w < −1), that violates most of the classical energy conditions; 2005, Considered by some as a real possibility for dark energy, although it has serious problems like instability and lack of a well-posed initial-value formulation. @ General references: Sami & Toporensky MPLA(04) [and fate of universe]; Majerotto et al ap/04/JCAP [and SN Ia data]; Santos & Alcaniz PLB(05)ap [Segre classification]; Giacomini & Lara GRG(06) [+ gravity + arbitrary potential, dynamics]; Pereira & Lima PLB(08)-a0806 [thermodynamics]. @ Black holes, isolated objects: Svetlichny ap/05 [possible production by black holes]; Berezin et al CQG(05)gq [shell around Schwarzschild]; Bronnikov & Fabris PRL(06) [regular asymptotically flat, de Sitter and AdS]; Rahaman et al NCB(06)gq; Gao et al PRD(08)-a0802 [mass increase]; Martins et al GRG(09)-a1006 [3D, phantom fluid]; Gyulchev & Stefanov PRD(13) [lensing]; > s.a. gravitational thermodynamics; models of topology change. @ Cosmology: Dąbrowski et al PRD(03) [+ standard matter]; Chimento & Lazkoz MPLA(04) [big rip]; Curbelo et al CQG(06)ap/05 [avoidance of big rip]; Faraoni CQG(05)gq [general potential]; Capozziello et al PLB(06) [dark energy and dark matter]; Bouhmadi-López et al PLB(08)gq/06 [future singularity]; Dąbrowski gq/07-MGXI [dark energy]; Sanyal IJMPA(07) [inflation rather than big rip]; Hrycyna & Szydłowski PLB(07) [conformally coupled, acceleration]; Shatskiy JETP(07)-a0711; Chaves & Singleton SIGMA(08)-a0801 [and dark matter]; Chen et al JCAP(09)-a0812 [phase-space analysis]; Myung PLB(09) [thermodynamics]; Regoli PhD-a1104; Astashenok et al PLB(12)-a1201 [without big rip singularity]; Novosyadlyj et al PRD(12), Ludwick PRD(15)-a1507 [as dark energy]; Ludwick MPLA(17)-a1708 [rev]; > s.a. FLRW models; gravitational thermodynamics. @ Loop quantum cosmology: Samart & Gumjudpai PRD(07)-a0704; Gumjudpai TJP-a0706-proc; Fu et al PRD(08)-a0808; Wu & Zhang JCAP(08)-a0805; > s.a. FLRW quantum cosmology. Phases of Matter * Idea: The phases that have been known for a long time are solid, liquid, gas and plasma, but experiments with matter cooled to within a few degrees of 0 K have turned up a number of exotic phases, such as superfluids, superconductors and topological phases; In these new types of phases one can see quantum mechanical effects at work in materials, unencumbered by the random motions of atoms. * Topological phases: Thouless, Kosterlitz and Haldane won the 2016 Nobel Prize for their work on these phases; A variety of such phases are known. 
@ General references: issue JPCM(98)#49 [matter under extreme conditions]; Pinheiro phy/07 [plasma, genesis of the word]; Kadanoff a1002; Baas IJGS-a1012 + news ns(11)jan [topology and generalization of Efimov states]; > s.a. magnetism [plasma physics or magnetohydrodynamics]. @ Topological phases: Read PT(12)jul; > s.a. matter [mathematical models]. > Type of phases: see condensed matter [gases, liquids]; crystals; fluid; gas; Plasma; bose-einstein condensate. Phase of a Quantum State > s.a. arrow of time [phase squeezing]; geometric phase; pilot-wave interpretation [and quantum phase]; quantum states. @ References: Barnett & Pegg JMO(89) [optical phase operator]; Lynch PRP(95); Koprinkov PLA(00)qp/06; Kastrup qp/01 [and modulus]; Lahti & Pellonpää PS(02) [formalisms]; Pellonpää JMP(02) [observables]; Heinonen et al JMP(03) [covariant phase difference]; de Gosson JPA(04) [general definition]; Gour et al PRA(04) [self-adjoint extensions]; Saxena a0803 [in terms of inverses of creation and annihilation operators]; Hall & Pegg PRA(12)-a1205. Phase Curve > see phase space. Phase Diagram * Idea: A plot showing the boundaries between thermodynamically distinct phases in an equilibrium system. > Gravity: see dynamical triangulations; phenomenology of gravity; quantum-gravity renormalization. > Other field theories: see Gross-Neveu Model; QCD, QCD phenomenology; Wess-Zumino Model. > Other physics: see Critical Points; matter [dense matter]; Potts Model; Water. > Online resources: see Wikipedia page. Phase Space Phase Velocity > see velocity. Phoenix Universe > see cosmological models. Phonon > s.a. specific heat [for a solid]; sound ["phonon tunneling"]. * Idea: A quantum of a sound wave, a type of quasiparticle. * Applications: Theoretical applications include models for fundamental quantum field theory effects (such as the acoustic Casimir effect) and black-hole analogs; Practical ones include "phonon optics" (mirrors, filters, lenses, etc) used to look inside solids for point defects. @ General references: Baym AP(61), re AP(00) [Green function, quantum field theory methods]; Kokkedee 63; Hu & Nori PRL(96) + pn(96)mar [squeezed]. @ Specific types of systems: Quilichini & Janssen RMP(97) [quasicrystals]; Gorishnyy et al pw(05)dec [phononic crystals]; Lukkarinen LNP(06)-a1509 [in weakly anharmonic particle chains, kinetic theory]. @ Related topics: Schwab et al Nat(00)apr [quantized thermal conductivity]; Johnson & Gutierrez AJP(02)mar [wave function visualization]; news tcd(15)mar [controlling phonons with magnetic fields]; Iachello et al PRB(15)-a1506 [algebraic theory, energy dispersion relation and density of states]. > Online resources: see Wikipedia page. Photoelectric Effect > s.a. photon phenomenology. * Idea: The effect by which light (in particular, UV) incident on a metal causes electrons to be emitted by the metal surface; The quantitative explanation of observations related to this effect was one of the key arguments in favor of the idea that light is made of discrete photons. @ General references: Einstein AdP(05); Zenk RVMP(08) [variant of standard approach with wider applicability]. @ Without quanta: Wentzel ZP(27); Franken in(69); Milonni AJP(97)jan. @ Other topics: Bach et al ATMP(01)mp/02 [mathematical]. > Online resources: see Wikipedia page. Photon > s.a. photon phenomenology. Photon Sphere / Surface > see spacetime subsets. Physical Constants > see under Constants. Physical Laws > see under Laws. Physical Process > see Process [including astrophysical, mathematical, ... 
processes]. Physicalism > see philosophy of physics. Physically Reasonable Model * Idea: A model for a physical system that is considered as having values for the properties under study that reflect those that can occur in a real system. * Rem: A stronger expression would be "physically realistic model". Physically Significant Property * Idea: A property of a model for a physical system is physically significant if, whenever the model has the property, the real system is expected to have it as well. * Rem: Hawking has stated that "the only properties of spacetime that are physically significant are those that are stable in some appropriate topology". Pi, π Picard-Lefschetz Theory > see quantum field theory techniques. Pierre Auger Observatory * Idea: A network of detectors in the pampa of Western Argentina for the study of high-energy cosmic rays. @ References: Anchordoqui et al PRD(03)hp; Anchordoqui ap/04-proc; Kampert NPPS(06)ap/05; Van Elewyck ap/06-ln, MPLA(08); Nitz a0706-conf [north site]; Van Elewyck a0709-proc; Parizot et al a0709-conf; de Mello APPS-a0712-conf, Matthiae a0802-conf [status and results]; Abraham et PA a0906-conf [status and plans]; Etchegoyen et al a1004-conf; Roulet a1101-conf; Smida et al a1109-proc, Kampert a1207-proc [results]; Pierre Auger Collaboration NIMA(15)-a1502 [design and performance]; > s.a. ultra-high-energy cosmic rays. > Online resources: see Pierre Auger website; Wikipedia page. Pigeonhole Principle (A.k.a. Dirichlet box principle.) * Theorem: If more than n pigeons are roosting in n pigeonholes, at least one hole contains more than one pigeon. * Applications: There are at least two people in Los Angeles with the same net worth, to the nearest dollar; In mathematics research, it is used to prove the existence of things which are difficult to construct, for example in Ramsey theory. * In quantum physics: There are instances when three quantum particles are put in two boxes, yet no two particles are in the same box. @ General references: Olivastro ThSc(90)sep. @ In quantum physics: Aharonov et al a1407 + sn(14)jul [it doesn't always hold]; Yu & Oh a1408 [and the quantum Cheshire cat]; Svensson a1412; Sun et al a1806 [it is not violated]. Pilot-Wave Interpretation of Quantum Mechanics [including non-equilibrium theory] > s.a. phenomenology [systems and effects]. Pin Groups / Structures and Pinors > A generalization of spin. * Idea: Double covers of the full Lorentz group, that can be used to describe the transformation behavior of fermions under parity and time reversal; Pin(1, 3) is to O(1, 3) what Spin(1, 3) is to SO(1, 3). @ References: Dabrowski & Percacci JMP(88) [2D]; DeWitt-Morette & DeWitt PRD(90); in Gibbons IJMPD(94); Cahen et al JGP(95); Alty & Chamblin JMP(96) [on Kleinian manifolds]; Trautman AIP(98)ht, APPB(95)ht/98; Berg et al RVMP(01)mp/00 [long]; Bonora et al BUMI-a0907 [and spinors and orientability]; Janssens a1709 [and general relativity]. Pinch Technique > see green functions for differential operators and quantum field theories. Pioneer Anomaly > see anomalous acceleration. Pions, π > see hadrons. PL Manifold / Space (Piecewise Linear) > see manifold types. Plancherel Theorem > see Symmetric Space. Planck Constant and Units > s.a. constants; Wikipedia page. 
* Value: 1998, h = 6.62606891(58) × 10^{−34} J·s (or × 10^{−27} erg·s); $$\hbar$$ = 1.05457266(63) × 10^{−34} J·s (or × 10^{−27} erg·s); The best values are obtained from measurements of the flux quantum $$\phi_0 = h/2e$$ using the Josephson effect, and the quantum of conductance $$G_0 = 2e^2\!/h$$ from the quantum Hall effect; 2016, h = 6.62606983 × 10^{−34} J·s, achieved with NIST's new watt balance.
* Length: $$l_{\rm P} = (G\hbar/c^3)^{1/2}$$ = 1.6 × 10^{−33} cm.
* Time: $$t_{\rm P} = l_{\rm P}/c$$ = 5.4 × 10^{−44} s.
* Energy: $$E_{\rm P} = l_{\rm P}\,c^4\!/G$$ = 2 × 10^{16} erg = 1.3 × 10^{19} GeV.
* Mass and density: $$M_{\rm P}$$ = 2.2 × 10^{−5} g, and $$\rho_{\rm P}$$ = 5.1 × 10^{96} kg/m³ (a numerical check of these values appears after this block of entries).
@ General references: Planck SBAW(1899); Fischbach et al PRL(91) [quantum mechanics with different $$\hbar$$]; Cooperstock & Faraoni MPLA(03)ht, IJMPD(03)gq [including e and s]; Wilczek PT(05)oct [absolute units].
@ Measurements: Williams et al PRL(98) + pn(98)sep + pw(98)sep; Steiner RPP(13); news pt(16)jul [precise determination in preparation for a new, refined SI in 2018]; news pt(16)sep.
@ Related topics: Zeilinger AJP(90)feb [Planck stroll]; Casher & Nussinov ht/97 [$$p_{\rm P}$$ is unattainable]; Sivaram a0707 [Planck mass]; Ramanathan a1402 [Planck's constant as diffusion constant]; Calmet PTRS(15)-a1504 [effective energy-scale dependence, motivated by quantum gravity].
Planck Cube
* Idea: A cube with axes labeled by $$\hbar$$, G and $$c^{-1}$$, whose vertices correspond to various types of physical theories; Can be considered as illustrating the concept of deformation.
Planck Distribution / Formula / Law for Black Body > see thermal radiation.
Planck Mission / Satellite > see cosmic microwave background.
Planck Stars > see astronomical objects.
Plane Wave Solutions > see gravitational wave solutions; types of waves.
Planets > see extrasolar planets; solar planets [including "Planet X" and "Planet 9"].
Planetary Nebulae > see interstellar matter.
Plasma Physics > see phenomenology of magnetism.
Plasticity > s.a. Elasticity.
* Idea: The phenomenon by which many materials maintain their deformed shape after forces are applied to them; It is often irreversible; In some materials the plastic deformation occurs when the applied forces exceed a certain threshold, below which the materials are elastic.
* Microscopically: Plasticity is a result of the propensity of solids to "flow", usually because of the motion of dislocations within them; It relies therefore on the presence of many dislocations that can easily move through the crystal, and on the bonds that hold the crystal together not being too localized (which would make it brittle).
* Examples: Materials with delocalized bonds are metals (in which they are due to conduction electrons) and quantum crystals (in which they are due to the atoms or molecules in the lattice, which are light, making their quantum properties important).
@ References: Castaing Phy(13) [giant, anisotropic plastic deformation that is also reversible in the quantum solid Helium-4].
> Online resources: see Wikipedia page.
Plateau Problem > see extrinsic geometry [minimal surface].
Platonic Solids > see euclidean geometry.
Plausibility Measures
* Idea: Structures for reasoning in the face of uncertainty that generalize probabilities, unifying them with weaker structures like possibility measures and comparative probability relations.
@ References: Fritz & Leifer a1505/QPL [on test spaces].
Plebański Action for Gravity > s.a. first-order actions; BF theories; unified theories.
@ References: Bennett et al IJMPA(13)-a1206 [several theories of four-dimensional gravity in the Plebański formulation].
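The Planck-unit values quoted above are easy to check; a minimal sketch (Python, with standard SI values of the constants inserted by hand as assumptions):

```python
# Planck units from hbar, G, c.  (Sketch; SI constant values assumed.)
import math

hbar = 1.054571817e-34   # J s
G    = 6.67430e-11       # m^3 kg^-1 s^-2
c    = 2.99792458e8      # m/s

l_P = math.sqrt(hbar * G / c**3)     # Planck length, ~1.6e-35 m
t_P = l_P / c                        # Planck time,   ~5.4e-44 s
m_P = math.sqrt(hbar * c / G)        # Planck mass,   ~2.2e-8 kg = 2.2e-5 g
E_P = m_P * c**2                     # Planck energy, ~2e16 erg ~ 1.2e19 GeV
rho_P = m_P / l_P**3                 # Planck density, ~5e96 kg/m^3

print(f"l_P   = {l_P:.3e} m")
print(f"t_P   = {t_P:.3e} s")
print(f"m_P   = {m_P*1e3:.2e} g")
print(f"E_P   = {E_P:.3e} J  (= {E_P/1.602176634e-10:.3e} GeV)")
print(f"rho_P = {rho_P:.3e} kg/m^3")
```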
Plebański-Demiański Solutions > see types of geodesics.
Plurality of Worlds > see extrasolar astronomy; history of cosmology.
PMNS (Pontecorvo-Maki-Nakagawa-Sakata) Matrix
* Idea: The lepton flavor mixing matrix in the Standard Model of particle physics.
> Online resources: see Wikipedia page.
PN Formalism > see under Post-Newtonian Expansion.
Podolsky Theory > see modified theories of electrodynamics.
Pohlmeyer Invariants > see bosonic strings and superstrings.
Pohlmeyer's Theorem
* Idea: A result proving that any critical fixed point for a field theory (in integer dimension) with vanishing anomalous dimension must be the Gaussian one.
@ References: Rosten JPA(10)-a1005 [extension to non-integer dimension].
Poincaré Conjecture > see conjectures.
Poincaré Duality > see cohomology.
Poincaré Group
Poincaré Lemma > see differential forms.
Poincaré Map / Section / Surface
* Idea: A 2D scatter plot representing the position in phase space of a system at discrete values of independent variables; Useful indicator of chaos when $$N_{\rm dof} - N_{\rm com} \le 2$$, otherwise regular behavior can be misinterpreted as chaos.
@ Examples: in Murray & Dermott 99 [solar system].
@ Generalization: Gaeta JNMP(03)mp/02 [Poincaré-Nekhoroshev].
Poincaré Recurrence > see Recurrence; Unitarity.
Poincaré-Hopf Theorem
* Idea: A relationship between the Euler characteristic of a manifold M and the indices of a vector field on M over its zeroes; A special case is the "hairy ball theorem", which states that there is no smooth vector field on a sphere having no sources or sinks.
@ References: Cima et al Top(98) [non-compact manifolds]; Szczęsny et al IJGMP(09)-a0810 [new elementary proof].
> Online resources: see Wikipedia page.
Point > see spacetime.
Point-Present Theories > see time.
Point Process > see statistical geometry.
Point Transformation > see symplectic structure.
Point-Splitting Regularization > see regularization.
Pointed Topological Spaces > see types of topological spaces.
Poisson Algebra / Bracket / Structure
Poisson Distribution > s.a. probability.
$ Def: The distribution on $$\mathbb N$$ given by $$P(n) = e^{-a}\,a^n/n!$$.
* Properties: It has mean a, and standard deviation $$a^{1/2}$$.
@ General references: de Groot 75, ch5.
@ Applications: Elizalde & Gaztañaga PLA(88) [of galaxies].
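A minimal numerical check of the definition and properties above (Python; the parameter value a = 4.2 and the sample size are arbitrary illustrative choices):

```python
# Check that P(n) = e^{-a} a^n / n! has mean a and standard deviation sqrt(a).
# (Sketch; parameter and sample size assumed.)
import numpy as np
from math import exp, factorial

a = 4.2
rng = np.random.default_rng(0)
samples = rng.poisson(a, size=1_000_000)
print(f"sample mean = {samples.mean():.4f}   (expected {a})")
print(f"sample std  = {samples.std():.4f}   (expected {np.sqrt(a):.4f})")

# Same check directly from the probability mass function:
n = np.arange(60)
P = np.array([exp(-a) * a**k / factorial(k) for k in range(60)])
print(f"pmf mean = {(n*P).sum():.6f}, pmf variance = {((n-a)**2 * P).sum():.6f}")
```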
Poisson Equation > s.a. partial differential equations.
* Idea: The elliptic partial differential equation $$\nabla^2 u = -f$$, where $$\nabla^2$$ is the Laplacian operator for a Riemannian metric, often flat.
@ General references: Ma et al a1208/JCP [efficient numerical solution for arbitrary 2D shapes].
@ Generalizations: Sebastian & Gorenflo a1307 [fractional].
> Online resources: see MathWorld page.
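For illustration, a minimal finite-difference sketch (Python; the grid size, iteration count and choice of source are assumptions made for the example) solving the Poisson equation above on the unit square by Jacobi iteration, with a source chosen so the exact solution is known:

```python
# Jacobi iteration for  ∇²u = -f  on the unit square, with u = 0 on the boundary.
# Source chosen so the exact solution is u = sin(pi x) sin(pi y).  (Sketch.)
import numpy as np

N = 65                                    # grid points per side (assumed)
h = 1.0 / (N - 1)
x = np.linspace(0.0, 1.0, N)
X, Y = np.meshgrid(x, x, indexing="ij")
f = 2.0 * np.pi**2 * np.sin(np.pi * X) * np.sin(np.pi * Y)

u = np.zeros((N, N))
for _ in range(20000):
    u_new = u.copy()
    # Discrete Laplacian: u_ij = (sum of the 4 neighbors + h^2 f_ij) / 4
    u_new[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1] +
                                u[1:-1, 2:] + u[1:-1, :-2] +
                                h**2 * f[1:-1, 1:-1])
    diff = np.max(np.abs(u_new - u))
    u = u_new
    if diff < 1e-10:
        break

exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
print(f"max deviation from exact solution: {np.max(np.abs(u - exact)):.2e}")
```

The remaining deviation is the O(h²) discretization error of the 5-point stencil, not a failure of the iteration.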
Poisson Formula
* Idea: The name given to a set of summation formulas, the original one being
$\sum_{k=-\infty}^{\infty}\exp\{{\rm i}kx\} = 2\pi\sum_{m=-\infty}^{\infty}\delta(x-2\pi m)\;.$
@ References: news PhysOrg(16)mar [new formulas].
Poisson Integral > see integration.
Poisson Process > see statistical geometry.
Poisson Ratio > see Strain Tensor.
Poisson σ-Model > see sigma model.
Poisson-Boltzmann Equation > see partial differential equations.
Poisson-Lie Group
* Applications: Useful for quantum deformations of a group.
@ References: Drinfeld SMD(83); Lu & Weinstein JDG(90).
Poisson-Vlasov Equations > see under Vlasov-Poisson Equations.
Polar Decomposition Theorem > see examples of lie groups [SL(2, $$\mathbb C$$)].
Polariton > see Quasiparticles.
Polarization in Electricity and Field Theory > see electricity; quantum field theory states; vacuum.
Polarization of Waves > see polarization.
Polarization in Symplectic Geometry
* Idea: A polarization is an n-dimensional completely degenerate subspace of a symplectic vector space, or integrable distribution on a 2n-dimensional symplectic manifold (it thus forms Lagrangian submanifolds).
* Example: Given a symplectic vector space (V, Ω) and a map P: V → V such that $$P^2 = \mathbb 1$$ and $$P\,\Omega = -\Omega\,P$$, we can construct a polarization defined by the eigenvectors of $$P_+:= \frac12(\mathbb 1 + P)$$ (so $$P_+\Omega\,P_+ = 0$$), with eigenvalue 1.
Polaron
* Idea: A quasiparticle used in condensed matter physics to understand interactions between electrons and atoms in a solid.
@ References: Emin 13.
Polish Space > see types of distances.
Polygamma Function
$ Def: The polygamma function of order m is the (m + 1)th derivative of the logarithm of the gamma function (checked numerically in the sketch after this block of entries); > s.a. Wikipedia page.
Polygon, Polyhedron > see euclidean geometry; For quantum polyhedra, see quantum geometry.
Polygroup Theory > see group theory.
Polyhomogeneous Spacetimes > see types of spacetimes.
Polymer > s.a. condensed matter [soft matter]; molecular physics.
@ Statistical mechanics: Brereton JPA(01); Ioffe & Velenik BJPStat(10)-a0908 [stretched by an external force]; Sabbagh & Eu PhyA(10) [van der Waals equation of state, self-diffusion coefficient]; De Roeck & Kupiainen CMP(11)-a1005 [polymer expansion]; Rodrigues & Oliveira JPA(14) [Monte Carlo simulations].
@ Related topics: Jitomirskaya et al CMP(03)mp/04 [random, and delocalization]; Imbrie JPA(04) [branched directed, dimensional reduction]; > s.a. solitons [in polyacetylene].
Polymer Quantization > s.a. representations of quantum mechanics.
* Idea: The name given to one of four related non-regular representations of the Heisenberg algebra, in which the spectrum of the configuration or the momentum variable is not continuous, and the corresponding infinitesimal generator is not defined; This approach to quantization is related to and inspired by, but distinct from, that used in loop quantum gravity.
@ General references: Fredenhagen & Reszewski CQG(06)gq; Corichi et al CQG(07)gq/06, PRD(07)-a0704; Chiou CQG(07)gq/06 [and the Galileo group]; Hossain et al CQG(10)-a1003 [and the uncertainty principle]; Campiglia a1111 [and geometric quantization]; Date & Kajuri CQG(13)-a1211 [and symmetries]; Chacón-Acosta et al Sigma(12) [statistical thermodynamics]; Barbero et al PRD(14)-a1403 [separable Hilbert space]; Gorji et al CQG(15)-a1506 [versus the Snyder non-commutative space]; Morales-Técotl et al PRD(15)-a1507 [and the saddle point approximation of partition functions]; Morales-Técotl et al PRD(17)-a1608 [particles, path-integral propagator]; Amelino-Camelia et al PLB(17)-a1707 [and deformed symmetries, non-commutative geometry]; Berra-Montiel & Molgado a1805 [as deformation quantization].
@ Simple systems: Husain et al PRD(07)-a0707 [Coulomb potential]; Kunstatter et al PRA(09)-a0811 [1/r² potential]; Kunstatter & Louko JPA(12)-a1201 [on the half line]; Majumder & Sen PLB(12)-a1207 [and GUP]; Flores-González et al AP(13)-a1302 [particle propagators]; Barbero et al CQG(13) [band structure]; Gorji et al PRD(14)-a1408 [ideal gas, partition function]; Martín-Ruiz et al PRD(15)-a1506 [bouncing particle]; Berra-Montiel & Molgado a1610 [and zeros of the Riemann zeta function]; > s.a. gas.
@ Phenomenology: Martín-Ruiz PRD(14)-a1406 [beam of particles, and diffraction in time]; Chacón & Hernández IJMPD(15)-a1408 [semiclassical Hamiltonian and compact stars]; Martín-Ruiz et al a1408, Demir & Sargın PLA(14)-a1409 [tunneling, Zeno effect]; Kajuri CQG(16)-a1508 [radiation in inertial frames]; Ali & Seahra PRD(17)-a1709 [natural inflation]; Kajuri & Sardar PLB(18)-a1711 [Lorentz violation, at low energies]; Khodadi et al sRep-a1801 [optomechanical setup]; > s.a. phenomenology of cosmological perturbations; unruh effect.
> Related topics: see Bohr Compactification; entropy in quantum theory; fock space; holography; renormalization; tunneling.
> Gravity / cosmology: see black-hole quantization; loop quantum gravity; minisuperspace; models in canonical quantum gravity; 2D quantum gravity.
> Other field theories: see bose-einstein condensates; Pais-Uhlenbeck Model [with higher-order time derivatives]; quantum field theories [scalar].
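Referring back to the polygamma definition above, a minimal numerical check (Python; the evaluation point, order and finite-difference step are assumed values) against scipy's built-in implementation:

```python
# Polygamma of order m = (m+1)-th derivative of ln Gamma(x): verify the
# definition with a central finite-difference stencil.  (Sketch; values assumed.)
import numpy as np
from math import comb
from scipy.special import polygamma, gammaln

x, m, h = 2.5, 1, 1e-3
n = m + 1                                  # order of the derivative to take

# Central difference: f^(n)(x) ~ h^-n sum_k (-1)^k C(n,k) f(x + (n/2 - k) h)
approx = sum((-1)**k * comb(n, k) * gammaln(x + (n/2 - k) * h)
             for k in range(n + 1)) / h**n

print(f"finite difference: {approx:.8f}")
print(f"scipy polygamma:   {float(polygamma(m, x)):.8f}")
```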
Polynomials > see functions.
Polyomino > s.a. voronoi tilings.
* Idea: A finite and connected union of tiles.
Polytope > s.a. Complex / simplex.
* Idea: An n-dimensional generalization of a polyhedron; The word was coined by Alicia Boole (daughter of George Boole).
$ Def: A polytope in an affine space is the convex hull of a finite set of points.
* Result: (Balinski's theorem) The graph of a d-polytope is d-connected.
* Simple polytope: One in which each vertex is on the boundary of d facets.
* Polytope of a collection of simplices: The polytope |K| of the collection K in $$\mathbb R^d$$ is the union of all simplices σ ∈ K, adequately structured as a topological space [?]; If K is a simplicial complex, then its polytope is a polyhedron.
* Delaunay polytope: A polytope P such that the set of its vertices is SL, with S being an empty sphere of a given lattice L.
* Parallelotope: A polytope whose translation copies fill space without gaps and intersections by interior points; Voronoi conjectured that each parallelotope is an affine image of the Dirichlet domain of a lattice, i.e., a Voronoi polytope.
@ Books: Grünbaum 67, 03; Thomas 06 [geometric combinatorics].
@ General references: Kalai JCTA(88) [and graphs]; Walton in(04)mp [and Lie characters]; Deza & Grishukhin EJC(04) [parallelotopes]; Enciso a1408 [volumes of polytopes in any dimension without triangulations].
@ Regular polytopes: Cantwell JCTA(07) [all regular polytopes are Ramsey]; Boya & Rivera RPMP(13)-a1210.
@ Delaunay polytopes: Dutour EJC(04); Erdahl et al m.NT/04-(proc); Sikiric & Grishukhin EJC(07) [computing the rank].
@ In 3D spaces of constant curvature: Abrosimov & Mednykh a1302 [volume formulas].
@ Other special types: Neiman GD(14)-a1212 [null-faced 4-polytopes in Minkowski spacetime].
> Related topics: see Schlegel Diagram; statistical geometry [from random point set].
Pomeransky-Senkov Black Hole > see causality conditions.
Pomeron
@ General references: Levin hp/98-conf; cern(99); Brower et al JHEP(07)ht/06 [and gauge/string duality]; Swain a1110-fs [and the nature of particles].
@ And QCD: Donnachie et al 02; Nachtmann hp/03-conf.
Pontrjagin / Pontryagin Classes, Numbers
Ponzano-Regge Model > s.a. spin-foam models / 3D gravity; SU(2).
* Idea: 3D spin coupling theory, giving a non-perturbative definition of the path integral for (Euclidean) 3D gravity.
@ General references: Ponzano & Regge in(68); Lewis PLB(83) [renormalizability]; Iwasaki gq/94, JMP(95)gq [in terms of surfaces]; O'Loughlin ATMP(02)gq/00 [boundary actions]; Barrett & Naish-Guzman CQG(09)-a0803; Wieland PRD(14)-a1402 [action from a 1D spinor action].
@ Variations: Carfora et al PLB(93) [4D, and 12j symbols]; Carbone et al CMP(00); Freidel NPPS(00)gq/01 [Lorentzian]; Livine & Oeckl ATMP(03)ht/03 [supersymmetric]; Li CMP(14)-a1110 [κ-deformation]; Vargas a1307 [on a manifold with torsion].
@ Related topics: Barrett & Foxon CQG(94)gq/93 [semiclassical limit]; Petryk & Schleich PRD(03)gq/01 [geometric quantities]; Arcioni et al NPB(01)ht [and holography]; Freidel & Louapre CQG(04)ht [gauge fixing], gq/04 [and Chern-Simons theory]; Freidel & Livine CQG(06)ht/05 [effective field theory for particles]; Hackett & Speziale CQG(07)gq/06 [geometry and grasping rules]; Barrett & Naish-Guzman gq/06-MGXI [and Reidemeister torsion]; Livine & Ryan CQG(09)-a0808 [B-observables]; Caravelli & Modesto a0905 [spectral dimension of spacetime].
Popper's Thought Experiment
* Idea: A thought experiment proposed by Karl Popper designed to check for possible violations of the uncertainty principle.
@ General references: Qureshi IJQI(04)qp/03, AJP(05)jun-qp/04; Richardson & Dowling IJQI(12)-a1102 [no violation of the uncertainty principle, fundamental flaw]; Qureshi Quanta(12)-a1206 [modern perspective]; Cardoso a1504 [non-linear quantum theory and uncertainty principle violation].
@ With photons: Kim & Shih FP(99) [entangled photon pairs]; Peng et al EPL(15) + news pw(15)jan [photon number fluctuation correlations in a thermal state]; Reintjes & Bashkansky a1501.
Porosity of a Measure > see measure.
Pöschl-Teller Potential > s.a. types of coherent states.
@ Modified: Aldaya & Guerrero qp/04 [group quantization].
> Online resources: see MathWorld page on Pöschl-Teller differential equations.
Poset > s.a. set of posets and types of posets.
Position
* In quantum mechanics: Teller (1979) argued that a particle cannot have a sharp position; Others disagree; > s.a. localization in quantum mechanics.
@ In quantum mechanics: Chew SP(63); Halvorson JPL(01)qp/00 [sharp]; Kosiński & Maślanka a1806 [operator, for massless spinning particles].
@ Tests of local position invariance: Peil et al PRA(13) [using continuously-running atomic clocks]; Shao & Wex CQG(13) [bounds].
Positioning Systems > s.a. coordinates; minkowski spacetime [secure positioning].
@ Relativistic positioning systems: Coll et al a0906-rp [status]; Tartaglia a1212-conf [principles and strategies]; Coll a1302-conf [rev]; Puchades & Sáez ApSS(14)-a1404 [errors due to uncertainties in the satellite world lines].
@ GPS: Parkinson & Spilker ed-96; in Hartle 03; Puchades & Sáez ApSS(12)-a1112.
Positive Action Conjecture > see action for general relativity.
Positive-Energy Theorem
Positive Frequency Function > see functions.
Positive Map > see Maps.
Positivism > see philosophy of science.
Positron > see electron.
Possibilism > see time.
Possibility > see many-worlds interpretation.
Post-Friedmannian Formalism > see cosmological models.
Post-Newtonian (PN) Expansion > see gravitational phenomenology; gravitomagnetism; matter dynamics in gravitation.
Potential for a Field
* Idea: Originally, a potential was a scalar function whose gradient gives a force on a test particle (per unit charge); It was extended to a vector field whose curl gives a (magnetic) field, and then to the general mathematical notion of a function (or a higher-rank tensor field) which gives, by differentiation, a field of interest, possibly a dynamical tensor field.
> Vector potential: see aharonov-bohm effect; connection; electromagnetism.
Potential in Physics > s.a. scattering; thermodynamics [thermodynamic potentials].
* Retarded potential: It has to be used for systems with large velocities (corrections are of order v2/c2), or pairs of systems with large separations compared to the internal motions (even if slow).
@ General references: Kellogg 29; Grant & Rosner AJP(94)apr [orbits in power law V].
@ Retarded potential: Spruch & Kelsey PRA(78) [elementary derivation]; > s.a. arrow of time.
> In classical mechanics: see Bertrand's Theorem; classical systems [including central potentials]; Coulomb Potential.
> In classical field theory: see electromagnetism; newtonian gravitation.
> In quantum mechanics: see schrödinger equation; special potentials; pilot-wave interpretation [quantum potential].
> In quantum field theory: see effective field theories [effective potential]; quantum field theory.
Potts Model > s.a. lattice field theory; Yang-Baxter Equation.
* Idea: A 2D generalization of the Ising model of interacting spins on a lattice; The chiral Potts model is a challenging one: it is "exactly solvable" in the sense that it satisfies the Yang-Baxter relation, but actually obtaining the solution is not easy; Its free energy was calculated in 1988, the order parameter was conjectured in full generality in 1989 and derived in 2005.
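As a concrete illustration of the model just described (a minimal sketch added here, not taken from the cited literature), the energy of a q-state Potts configuration on a periodic square lattice, H = −J Σ δ(s_i, s_j) over nearest-neighbor pairs, can be evaluated in a few lines:

```python
import numpy as np

def potts_energy(spins: np.ndarray, J: float = 1.0) -> float:
    """H = -J * (number of equal nearest-neighbor pairs), periodic boundaries."""
    right = spins == np.roll(spins, -1, axis=1)  # horizontal bonds
    down = spins == np.roll(spins, -1, axis=0)   # vertical bonds
    return -J * float(right.sum() + down.sum())

q, L = 3, 8
rng = np.random.default_rng(0)
spins = rng.integers(q, size=(L, L))  # a random q-state configuration
print(potts_energy(spins))
```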
@ General references: Baxter 82; Wu RMP(82); Sokal MPRF(01)cm/00-in [unsolved problems]; Baxter JPCS(06)cm/05 [rev]; Beaudin et al DM(10) [introduction from a graph theory perspective].
@ Phase transitions: Baxter JSP(05)cm, PRL(05)cm [chiral, order parameter]; Georgii et al JSM(05)mp [continuum, order-disorder transition]; Ahmed & Gehring JPA(05) [anisotropic, phase diagram]; Jacobsen & Saleur NPB(06) [antiferromagnetic transition]; Fernandes et al PhyA(06) [alternative order parameter]; Gobron & Merola JSP(07) [first-order]; Johansson PLA(08) [2D with open boundary conditions, Monte Carlo]; Aluffi & Marcolli JGP(13)-a1102 [motivic approach].
@ Coupled to gravity: Ambjørn et al NPB(09)-a0806, Cerda Hernández a1603 [causal dynamical triangulations].
@ Related topics and variations: Richard & Jacobsen NPB(07) [on a torus]; Barré & Gonçalves PhyA(07) [on a random graph, canonical and microcanonical ensembles]; Ganikhodjaev PLA(08) [next-nearest-neighbor interactions, on the Bethe lattice]; De Masi et al JSP(09) [continuum version, phases]; Contucci et al CMP(13)-a1106 [on a random graph]; Dasu & Marcolli JGP(15)-a1412 [in an external magnetic field, sheaf-theoretic interpretation]; > s.a. Confinement [model for]; renormalization.
Pound-Rebka Experiment > see tests of general relativity with light [gravitational redshift].
POVM > Positive Operator-Valued Measure, see measure theory.
Powder > see metamaterials.
Power of a Graph > see graph theory.
Power Spectrum of Perturbations in Field Theory
* Idea: Usually defined as the Fourier transform of the two-point correlation function of the field in a quantum state.
Power-Law Distributions > s.a. critical phenomena; states in statistical mechanics.
@ References: Simkin & Roychowdhury PRP(11) [mechanism for producing them].
Poynting Vector > s.a. energy-momentum tensor.
* Idea: The vector S = E × B/μ0, giving the direction of propagation of energy-momentum in an electromagnetic field, and the power flux across a unit normal surface.
* As a 4-vector: Without sources (Poincaré pointed out a difficulty with sources), the vector $$P^a = (U, \mathbf P)$$, where
$$U:= {1\over8\pi}\int (E^2+B^2)\,{\rm d}v = \int T^{00}\,{\rm d}v\ ,\qquad \mathbf P:= {1\over4\pi c}\int \mathbf E\times\mathbf B\,{\rm d}v = \int T^{0i}\,{\rm d}v\ .$$
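A tiny numerical illustration of the defining expression S = E × B/μ0 (added here, not part of the original entry; SI units, with μ0 written out explicitly):

```python
import numpy as np

MU0 = 4e-7 * np.pi                 # vacuum permeability [T m / A]
E = np.array([0.0, 100.0, 0.0])    # electric field [V/m], along y
B = np.array([0.0, 0.0, 1e-6])     # magnetic field [T], along z
S = np.cross(E, B) / MU0           # Poynting vector [W/m^2]
print(S)                           # energy flows along +x
```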
@ General references: in Jackson; in Rohrlich; McDonald AJP(96)jan [meaning].
@ Gravitational: de Menezes gq/98; Manko et al CQG(06) [axistationary electrovac spacetimes].
Poynting-Robertson Effect
* Idea: An effect that produces changes in the orbital plane of a particle; Has been applied to meteoroids.
@ References: Klacka ap/00, ap/01, ap/02, ap/02; in Harwit 06; Klacka a0807 [paradox in astrophysical application]; Klacka et al a0904 [explanations]; Bini & Geralico CQG(10) [extended to spinning particles in Schwarzschild spacetime]; Bini et al CQG(11); De Falco et al PRD-a1804 [relativistic, Lagrangian formulation].
pp-Waves > see gravitational wave solutions.
PPN (Parametrized Post-Newtonian) Formalism > s.a. gravitation / higher-order gravity; modified newtonian gravity.
* Rem: It is not the same as PN (Post-Newtonian) expansion of general-relativistic results around the weak-field / slow-motion limit.
Pre-Recueil > see Recueil.
Pre-Acceleration > see self-force [Lorentz-Dirac equation].
Precanonical Quantization > see approaches to quantum field theory; approaches to quantum gravity; quantization of gauge theories.
Precession > s.a. gravitating bodies; Gyroscope; Runge-Lenz Vector; test bodies; Thomas Precession.
* In general relativity: There are several types, perihelion (Einstein), geodetic (de Sitter), orbital plane (Lense-Thirring, gravitomagnetic), and spin-spin (Pugh-Schiff); > s.a. tests of general relativity with orbits.
@ General references: Magli phy/04 [in ancient astronomy]; Jonsson CQG(06)-a0708 [spin precession, covariant formalism]; Casotto & Bardella MNRAS(13)-a1210-conf [equations of motion of a secularly precessing elliptical orbit]; Lo et al AJP(13)sep, D'Eliseo AJP(15)apr [unified frameworks for perihelion advance, different causes].
@ In general relativity: Holstein AJP(01)dec; Sigismondi ap/05-MGX; Harper PhSc(07)dec; He & Zhao IJTP(09) [analytical solution]; Boyle et al PRD(11) [compact binaries, geometric approach]; D'Eliseo ApSS(12)-a1206 [precession of orbits, quick method]; Mashhoon & Obukhov PRD(13)-a1307 [in gravitational fields]; Hu et al AHEP(14)-a1312 [general spherically symmetric spacetimes]; > s.a. gravitational self-force [spin precession].
@ In modified gravity theories: Behera & Naik ap/03 [vector gravity]; Schmidt PRD(08) [modified Newtonian potential]; Fokas et al a1509 [relativistic gravitational law]; Friedman & Steiner EPL(16)-a1603 [in relativistic Newtonian dynamics].
@ Specific cases: Stewart AJP(05)aug [Mercury, due to other planets]; Iorio AJ(09)-a0811 [Saturn, anomalous]; Moniruzzaman & Faruque PS(13) [periastron precession due to gravitational spin-orbit coupling].
> In various theories: see Cogravity; gravity theories; newtonian gravity [perturbations and curved spaces].
> In various spacetimes: see reissner-nordström solutions; schwarzschild-de sitter spacetime [with a cosmological constant]; test bodies.
Precision > s.a. Accuracy.
* Idea: The size of the error bar in a series of measurements.
Precompactness > see compactness.
Prediction and Predictability > s.a. causality; paradigms in physics; time.
* Idea: Predictability is an epistemic property of a model for a physical system, related to what we are able to compute and predict with it; Prediction may refer to a theory predicting either effects, phenomena, values of quantities, or more specifically the evolution of a system and results of future measurements.
* Question: Does a physical law have to be predictive?
* Remark: Usually, for several practical and theoretical reasons, predictions in physics are statistical.
@ General references: Brush Sci(89)dec [light bending]; Hole IJTP(94) [and determinism]; Holt & Holt BJPS(93) [in classical mechanics]; Caves & Schack Compl(97)cd [types]; Coles 06 [I]; Manchak FP(08) [in general relativity]; Werndl BJPS(09) [and chaos]; Srednicki & Hartle PRD(10)-a0906 [in a very large universe]; Stuart et al PRL(12) + news physorg(12)jul, physorg(12)jul [experimental bound on the maximum predictive power]; Cecconi et al AJP(12)nov [intrinsic limitations]; Hosni & Vulpiani P&T(17)-a1705 [forecasting and big data].
@ Of effects: Hitchcock & Sober BJPS(04) [vs accommodation, and overfitting].
> Related topics: see chaos; Determinism; electron [magnetic moment]; Explanation; wave phenomena [superluminal].
Prefixes > see units.
Pregeometry > see Matroid [mathematics]; quantum spacetime [physics].
Preons > see composite models.
Preorder > s.a. poset; Quasiorder [non-reflexive generalization].
$ Def: A reflexive and transitive binary relation; The concept generalizes that of (reflexive) partial orders and equivalence relations.
* Remark: One can always define an Alexandrov topology on a preorder by using the upper sets as open sets.
@ References: Cameron et al DM(10) [random preorders and alignments]; Minguzzi AGT(12)-a1108 [representation by continuous utilities].
> Online resources: see Wikipedia page.
Prequantization > s.a. geometric quantization.
@ References: Schreiber a1601-in [higher prequantum geometry].
Presentation of a Group
$ Def: A pair (S, D) of a set of generators S and a set of relations between the generators D = {Γi}; Each relation Γi is of the form wi = 1, where wi is a word; The group elements are equivalence classes of words.
* Example: One generator, S = {a}; If D = Ø, the group is $$\mathbb Z$$, the infinite cyclic group generated by a, but if D = {aa = 1}, we get the group of order 2.
* Remark: Two presentations of the same group may look quite different, and it may be difficult or impossible to tell whether two groups are isomorphic by looking at their presentations; > s.a. group theory [isomorphism problem]; Word [word problem].
Presentation of a Topological Space
* Idea: An appropriate set of vertices, edges, faces, etc.
* Result: A finitely presented space has a finitely presented fundamental group (> s.a. Calculating Theorem).
Presentism > s.a. special relativity; time.
* Idea: The view that only the present is real (as opposed to possibilism, eternalism or the block-universe view, and their variants).
@ References: Wüthrich a1207 [fate in modern physics]; Romero & Pérez EJPS(14)-a1403 [and black holes].
Pressure > s.a. energy-momentum tensor; fluid; gravitating matter; Momentum; radiation; thermodynamics; turbulence.
@ General references: Durand AJP(04)aug [quantum, Bose and Fermi statistics]; Frontali PhysEd(13) [history of the concept].
@ Coupling to gravity: Ehlers et al PRD(05)gq; Narimani et al JCAP(14)-a1406 [and observational cosmology].
Presymplectic Structure > see symplectic geometry.
Prevalence [> s.a. measure theory.]
* Idea: The analogue of the finite-dimensional notions of 'Lebesgue almost every' and 'Lebesgue measure zero' in the infinite-dimensional setting.
@ References: Ott & Yorke BAMS(05).
Price's Law > see perturbations of schwarzschild spacetime.
Primakoff Effect > s.a. axions.
* Idea: The production of an axion from the interaction of a photon with a classical electromagnetic field [Henry Primakoff 1951].
Prime Graphs > see types of graphs.
Prime Numbers > see number theory.
Primordial Black Holes > see types of black holes.
Primordial Gravitational Waves > see gravitational-wave background.
Primordial Magnetic Fields > see magnetic fields in cosmology.
Primordial Perturbations > see phenomenology of cosmological perturbations.
Principal Fiber Bundle
Principal Ideal, Principal Ideal Domain, Principal Ideal Ring > see rings.
Principal Part / Value > see distribution.
Principal Principle > s.a. quantum measurements.
* Idea: A principle relating objective probabilities and subjective chance.
@ References: Meacham BJPS(10) [misconceptions].
Principle of Equivalence > see under Equivalence Principle.
Principle of the Excluded Middle > see Law of the Excluded Middle.
Principle of Mediocrity > see civilizations.
Principles in Mathematics, Physics, and Related Areas > s.a. Physical Laws.
> In gravitation and cosmology: see anthropic principle; Copernican Principle; cosmological principle; equivalence principle; mach's principle; Principle of Mediocrity; Relativity Principle.
> In quantum theory: see Correspondence Principle; (Pauli) Exclusion Principle; Landauer's Erasure Principle; Maximal Variety; (heisenberg's) uncertainty principle.
> In other physics: see Action-Reaction Principle; Boltzmann Principle; Causal Entropic Principle; Fermat's Principle; Hamilton's Principle; huygens' principle; Maupertuis Principle; Maximum Entropy Principle; Non-Demolition Principle; Superposition Principle; Symmetric Criticality; variational principles.
> In mathematics: see (Cauchy's) Argument Principle; Enumeration Principle; Pigeonhole Principle; Principal Principle; Well-Ordering Principle.
> In logic: see Common Cause Principle; Excluded Middle; Leibniz Principle; Principle of Sufficient Reason.
Prisoner's Dilemma > see games.
Probability Current > s.a. path integrals.
* In quantum mechanics: It can be constructed from the wave function by j:= (ℏ/m) Im(ψ* ∇ψ); The integral lines for this current are analogous to trajectories.
@ References: Schumacher et al a1607 [generalization to finite-dimensional Hilbert spaces, open quantum systems].
Problems > see Coloring; matrix; orbits in newtonian gravity [Kepler], of gravitating objects; Three-Body Problem; Two-Body Problem.
* 2012.03: Lightning strikes produce free neutrons, and we're not sure how [@ news at(12)mar].
Proca Theory > s.a. modified electromagnetism / field theories [spin-1, 3/2]; lagrangian systems [Proca Lagrangian].
* Idea: A "massive gauge theory", a gauge theory with a non-gauge-invariant mass term m2 A2 added to the Lagrangian,
L = − $$1\over4$$Fab Fab + $$1\over2$$m2 Aa Aa + Aa j a .
@ General references: Proca CRAS(36); in Wentzel 49; Goldhaber & Nieto RMP(71) [and photon mass limits]; in Gsponer & Hurni in(98)phy/05 [history]; Dvoeglazov CzJP(00)ht/97; Fabbri AFLB(11)-a0908 [most general consistent theory].
@ Einstein-Proca: Dereli et al CQG(96) [torsion and non-metricity]; Vollick gq/06; > s.a. black-hole hair; black-hole perturbations; einstein-cartan theory.
@ Other variations, generalizations: Kruglov IJMPA(06) [sqrt version, including spin-1/2]; Escalante et al a1402 [5D, canonical analysis]; Heisenberg JCAP(14)-a1402; Allys et al JCAP(16)-a1511; De Felice et al a1602, JCAP(16)-a1603 [fifth-force screening and cosmology]; Heisenberg et al PLB(16)-a1605 [with higher-order derivative interactions]; Allys et al a1609 [SU(2) Proca theory, or non-Abelian vector galileon]; Heisenberg a1705-proc [rev].
@ Quantization: Aldaya et al IJMPA(97)ht/96; van Hees ht/03 [renormalizability]; Helesfai CQG(07)gq/06 [in lqg]; Zamani & Mostafazadeh JMP(09)-a0805; Castineiras et al PRD(11)-a1108 [in a Rindler wedge].
@ Quantum theory, in curved spacetime: Furlani JMP(99) [on a globally hyperbolic Lorentzian manifold, canonical]; Toms a1509 [with non-minimal terms, Faddeev-Jackiw approach to quantization]; Schambach a1709-MSc; Schambach & Sanders a1709 [and the zero mass limit].
@ Phenomenology: Brito et al PLB(16)-a1508 [self-gravitating BECs of Proca particles]; De Felice et al a1703 [observational constraints].
@ Related topics: Comay NCB(98); Kim et al MPLA(98)ht [symmetries]; Vytheeswaran IJMPA(98) [as gauge theory]; Zecca GRG(06) [in FLRW spacetime].
Process > s.a. Ontology [process ontology].
* Quantum process: The operation performed by a quantum processor that transforms a quantum system's state into a different one.
@ Physical process: Spaans gq/05 [background independence]; Needham BJPS(13) [processes as autonomous entities, thermodynamic perspective].
@ Quantum process: Poyatos et al PRL(97) [characterization]; D'Ariano & Lo Presti PRL(01) [and quantum tomography]; Bendersky et al a1407, Parke a1409 [implications of computer science principles]; Lee & Hoban PRS(16)-a1510 [tradeoff between quantum computation and communication complexity]; Yadin et al PRX(16) [operations which do not use coherence]; > s.a. creation operator; quantum effects.
> Specific physical processes: see diffusion; Drell-Yan Process; Joule-Thomson Process; Penrose Process; Transport.
> Processes theory in physics: see approaches to quantum field theory [process algebra approach]; causal structures.
> Types of mathematical processes: see markov processes; random processes; statistical geometry [point processes]; stochastic processes.
> Specific mathematical processes: see Airy Process; Lévy Process; Wiener Process.
> Astrophysical processes: see Accretion Process.
Products
* Special infinite products:
$$\prod_{k=2}^{\infty}\Bigl(1 - {1\over k^2}\Bigr) = {1\over2}$$ [prove by splitting into (1 − 1/k) (1 + 1/k) and using factorials] .
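A short check of the hint (an added derivation, not part of the original entry): for finite N,
$$\prod_{k=2}^{N}\Bigl(1-{1\over k^2}\Bigr) = \prod_{k=2}^{N}{k-1\over k}\cdot{k+1\over k} = {(N-1)!\over N!}\cdot{(N+1)!\,/\,2\over N!} = {N+1\over 2N}\ \longrightarrow\ {1\over2}\quad{\rm as}\ N\to\infty\ .$$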
@ References: Roy 11 [series and products from the XV to the XXI century]; Albert & Kiessling JSP(17)-a1610 [infinite trigonometric products and random walks on the real line].
@ Generalized products: van de Wetering a1803 [sequential product]; > s.a. Star Product; vectors [scalar, vector product].
Programming > see computation; computer languages.
Progressing Waves > see types of waves.
Projectable Vector Field
$ Def: A differentiable vector field v is projectable by the map f if f '(v) is differentiable.
Projectile Motion > s.a. kinematics of special relativity.
@ General references: Klevgard a1501 [and XX century changes in physics]; Walley a1804 [history, from Aristotle to Galileo and Newton].
@ With air resistance: Mohazzabi & Shea AJP(96)oct [with variation of atmospheric pressure]; Price & Romano AJP(98)feb [optimal launch angles]; Warburton & Wang AJP(04)nov; Linthorne pw(06)jun [and soccer]; Goff & Carré AJP(09)nov [soccer balls].
Projection Mapping > see bundles.
Projection Postulate in Quantum Theory > see axioms for quantum theory; wave function collapse.
Projective Geometry, Structure, Limit, System > see projective.
Projective Relativity and Field Theory
* Projective relativity: Initially proposed by Fantappiè and subsequently developed by Arcidiacono.
@ General references: in Schmutzer ed-83 [projective relativity]; Schmutzer AN(05)ap [projective unified field theory and 2-body system].
@ And cosmology: Licata & Chiatti IJTP(09)-a0808; Benedetto IJTP(09) [and varying speed of light].
Projector, or Projection Operator
$ Def: An operator P on an inner product space which is self-adjoint and idempotent.
* Projective methods: Used for systems of linear and non-linear algebraic equations and convex optimization.
@ References: Galántai 03; Halliwell PLA(13)-a1207 [localized in a region of phase space].
Proof Theory
Prop > see examples of categories.
Propagator > s.a. green function [for differential operators]; feynman propagator and green function [in quantum field theory].
* In quantum mechanics: Can be calculated directly using the path-integral technique, or as inverse Laplace transform of the Green function.
@ In quantum mechanics: Nardone AJP(93)mar [calculation]; Fulling & Güntürk AJP(03)jan [1D particle in a box]; Kosut et al qp/06 [distance between propagators]; Moshinsky et al Sigma(07)-a0711 [from Green function]; Zanelli et al RPMP(08) [integral representations].
Propensity > see probability in physics.
Proper Discontinuous Action of a Group > see group action.
Proper Time > s.a. special-relativistic kinematics.
* Idea: The proper time at a point along a timelike line in spacetime is the length of the line from a reference initial point.
@ References: Wesson a1011 [adjustments from the possible existence of higher dimensions].
Property > s.a. Generic Property; Physically Significant Property; Stability.
$ In mathematics: A property P defined for elements x of a set X is an attribute that those elements may have or not have, i.e., a map P : X → {0,1}.
$ In physics: A property P is often an attribute that a physical system s or theoretical model may have to varying degrees, i.e., a map P : S → $$\mathbb R$$ (sometimes $$\mathbb C$$); Important examples are the values of observables, or the truth values of propositions about the system.
* Rem: For the purpose of discussing different types of properties, it is often convenient to specify a topological space structure on X and distinguish cases in which P behaves differently when considering its values for elements in a neighborhood of a given x.
* Terminology: An element x in X (or a subset A of X) are said to have the property if P(x) = 1 (resp., P(A) = {1}).
@ References: Hofmann et al a1605-proc [of a quantum system, and observable effects].
> Related topics: see measurement in quantum theory.
Propositional Logic > see logic.
Prout's Law > see atomic physics.
Proximity-Force Approximation > s.a. casimir-effect examples.
* Idea: An approximation method for the electrostatic interaction between two perfectly conducting surfaces, used when the distance between them is much smaller than the characteristic lengths associated to their shapes; The electrostatic force is evaluated by first dividing each surface into a set of small flat patches, and then adding up the forces due to opposite pairs of patches, each approximated as a pair of parallel planes; It has been successfully applied to contexts such as nuclear physics and Casimir-effect calculations.
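* Sphere-plane form (standard result, stated here for concreteness; not in the original entry): For a sphere of radius R whose closest distance to a plane is d, with d ≪ R, the procedure gives $$F(d) \approx 2\pi R\, E_{\rm pp}(d)$$, where E_pp(d) is the interaction energy per unit area of two parallel planes at separation d.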
@ References: Fosco et al AP(12)-a1201 [improved approximation].
Proximity Graphs > see graph types.
Proximity Structure
Pseudoclassical Dynamical Systems
* Idea: Models that have classically anticommuting variables.
@ References: Allen et al a1509 [quantization].
Pseudodifferential Operator > see operator theory.
Pseudogroup > s.a. differentiable maps [local pseudogroup of transformations].
@ In physics: Woon ht/98 [intro and applications].
Pseudomanifold > see types of manifolds.
Pseudometric Space > see distance.
Pseudorandomness > see random processes.
Pseudosphere > s.a. sphere.
* Idea and history: A 2D surface with constant and negative Gaussian curvature; Discussed in 1868 by Eugenio Beltrami in terms of a disk on the plane, which is isomorphic to the two-sheet hyperboloid in $$\mathbb R$$3.
@ References: Bertotti et al gq/05-proc [review, geometry and physics].
Pseudostationary Spacetime > see types of spacetimes.
Pseudosymmetric Spacetime > see 3D geometry.
Pseudotensor > see stress-energy pseudotensor.
Pseudovector (a.k.a. axial vector) > see vector.
ψ-Epistemic Quantum Theory > s.a. interpretations of quantum theory [statistical interpretation]; quantum foundations; types of interpretations [type-II].
* Idea: The view that quantum states are not descriptions of quantum systems but rather reflect the assigning agents' epistemic relations to the systems; Theories that try to reproduce the predictions of quantum mechanics, while viewing quantum states as ordinary probability distributions over underlying objects called "ontic states".
@ General references: Friedrich SHPMP(11)-a1101; Aaronson et al PRA(13)-a1303 [conditions, no-go results, and the role of symmetry]; Patra et al PRA(13) [experiment]; Ballentine a1402 ["functionally ψ-epistemic" theories]; Wharton Info(14)-a1403 [quantum states as ordinary information]; Miller & Farr a1405 [quantum states apply only to ensembles, there are no ontic states]; Rovelli FP(16)-a1508, refutation Zeh a1508 [argument against the realist interpretation]; Boge a1603 [Einsteinian view, new developments].
@ And distinguishability of quantum states: Barrett et al PRL(14)-a1310, Leifer PRL(14)-a1401, Branciard PRL(14)-a1407, news nat(15)may [no-go results].
@ Gravity-related theories: Evans et al a1606 [quantum cosmology].
@ Other theories and applications: Kak a1607-conf [quantum communication]; Sen a1803 [retrocausal hidden-variable model].
> Related topics: see Epistemology; hidden-variable theories; quantum probabilities; realism [epistemological realism]; sub-quantum theories.
ψ-Ontic Quantum Theory ("wave functions are real") > s.a. interpretations of quantum theory [including PBR theorem]; types of interpretations [type-I].
* Idea: The view that quantum states are ontic, i.e., states of reality.
* Schrödinger's original interpretation: The wave function is the actual density of stuff, and can be identified with a particle's charge density, for example.
* Problems with wave function = particle: Wave packets spread, particles don't; What about systems with N > 1 particles?
* Other possibilities: The wave function may be real but not to be identified with a physical object.
* The PBR theorem: (Pusey-Barrett-Rudolph) Models in which quantum states just represent information about underlying physical states contradict quantum mechanics.
* Experiments: 2015, Results obtained for photon systems indicate that no knowledge interpretation of quantum theory can fully explain the distinguishability of non-orthogonal quantum states; The results are not yet conclusive, because most of the photons were not detected, and other groups are working on experiments with ions; The Barrett-Cavalcanti-Lal-Maroney (BCLM) argument can be turned into an effective experimental test.
@ Schrödinger's interpretation: Barut AdP(88), FP(88), FPL(88) [revival].
@ General references: Liu BJPS(94); Jabs PE(96)qp; Lewis BJPS(04) [less problematic interpretation]; Colbeck & Renner PRL(12)-a1111 [and completeness of quantum theory]; Hardy IJMPB(13)-a1205; Mansfield a1306 [ontic and epistemic interpretations]; Shenoy & Srikanth a1311 [the wave function is real but non-physical]; Leifer Quanta(14)-a1409 [rev]; Cabello et al PRA(16) [thermodynamic constraints]; Durham a1807 [for field theories, based on the Wheeler-DeWitt equation].
@ The PBR theorem: Pusey et al nPhys(12)may-a1111 + news nat(12)may [the theorem]; Nigg et al NJP(16)-a1211 [experimental test using trapped ions]; Patra et al PRL(13)-a1211 [argument based on a continuity assumption]; Colbeck & Renner NJP(17)-a1312 [condition under which Ψ is uniquely determined by a complete description of the system's physical state]; Moseley a1401 [simpler proof]; Mansfield PRA(16)-a1412 [using a weaker, physically motivated notion of independence]; Mansfield EPTCS(14)-a1412; Ducuara et al JPA(17)-a1608 [under noisy channels]; Charrakh a1706 [criticism of the argument].
@ Other support of ψ-ontology: Allen QS:MF(15)-a1501 [quantum superpositions cannot be epistemic]; Gao SHPMP(15)-a1508 [in terms of protective measurements]; Bhaumik Quanta-a1511; > s.a. Tidal Force; Tractor Beam [pulling force from a quantum-mechanical matter wave].
@ Experiments: Ringbauer et al nPhys(15)feb-a1412 + news NYT(15)feb [with single photons]; Knee NJP(17)-a1609 [towards optimal experimental tests].
> Related topics: see Beable; Ontology; realism [including ontic structural realism].
PSSC (Physical Sciences Study Committee) > see physics teaching.
PT Symmetry > s.a. modified quantum mechanics [including field theory], statistical mechanical systems [PT-symmetric]; Unitarity.
@ General references: Mostafazadeh PS(10)-a1008 [rev].
@ Breaking: Bender & Darg JMP(07) [in classical mechanics]; Ambichl et al PRX(13) [in scattering systems].
Pullback Bundle > see fiber bundle.
Pullback of a Function / Form under a Mapping > see differentiable maps.
Pulsars
Pure Sequence > see exact sequence.
Purity > s.a. mixed states; polarization.
* Idea: The quantity ζ = tr ρ², a measure of how pure a quantum state is; Its value is one for pure states and 1/d for maximally mixed states of dimension d.
* Applications: It can be used for example to quantify entropy increase in decoherence.
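A minimal numerical check of these statements (an added sketch, not part of the original entry):

```python
import numpy as np

def purity(rho: np.ndarray) -> float:
    """Purity zeta = tr(rho^2) of a density matrix."""
    return float(np.real(np.trace(rho @ rho)))

d = 4
pure = np.zeros((d, d), dtype=complex)
pure[0, 0] = 1.0               # a pure state |0><0|
mixed = np.eye(d) / d          # the maximally mixed state
print(purity(pure))            # -> 1.0
print(purity(mixed))           # -> 0.25 = 1/d
```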
Push-Forward > see tangent structures.
Puzzles > see logic.
Pyrgon
* Idea: One of the 4D particles corresponding to the non-zero modes of the harmonic expansions in mass eigenstates of the 5D fields in Kaluza-Klein theory.
Pythagorean Theorem
@ References: Ungar FP(98), Brill & Jacobson GRG(06)gq/04-fs [Lorentzian version]; Crease pw(06)jan [history and significance]. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.867509663105011, "perplexity": 28132.89064928844}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039742020.26/warc/CC-MAIN-20181114125234-20181114151234-00130.warc.gz"} |
https://www.shaalaa.com/question-bank-solutions/the-speed-boat-still-water-15-km-hr-it-can-go-30-km-upstream-return-downstream-original-point-4-hours-30-minutes-find-speed-stream-quadratic-equations_27486 |
# The Speed of a Boat in Still Water is 15 Km/Hr. It Can Go 30 Km Upstream and Return Downstream to the Original Point in 4 Hours 30 Minutes. Find the Speed of the Stream. - Mathematics
#### Question
The speed of a boat in still water is 15 km/hr. It can go 30 km upstream and return downstream to the original point in 4 hours 30 minutes. Find the speed of the stream.
#### Solution
Let the speed of the stream be x km/hr.
∴ Speed of the boat downstream = (15 + x) km/hr
Speed of the boat upstream = (15 – x) km/hr
Time taken to go 30 km upstream = 30/(15 - x) hr, and time taken to return downstream = 30/(15 + x) hr
From the given information
30/(15 + x) + 30/(15 - x) = 4 + 30/60
30/(15 + x) + 30/(15 - x) = 9/2
(450 - 30x + 450 + 30x)/((15 + x)(15 - x)) = 9/2
900/(225 - x^2) = 9/2
100/(225 - x^2) = 1/2
225 - x^2 = 200
x^2 = 25
x = ± 5
But x cannot be negative, so x = 5.
Thus, the speed of the stream is 5 km/hr.
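A quick symbolic check of this solution (an added sketch, not part of the original solution; it uses the sympy library):

```python
from sympy import symbols, Eq, solve, Rational

x = symbols('x', positive=True)  # stream speed in km/hr; must be positive
eq = Eq(30/(15 + x) + 30/(15 - x), Rational(9, 2))  # total time = 4 hr 30 min
print(solve(eq, x))              # -> [5]
```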
Is there an error in this question or solution? | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.833080530166626, "perplexity": 1208.504076218333}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738944.95/warc/CC-MAIN-20200812200445-20200812230445-00358.warc.gz"} |
http://mathhelpforum.com/advanced-algebra/78867-torsion-coefficients.html | # Math Help - Torsion coefficients
1. ## Torsion coefficients
How would I go about finding the torsion coefficients of Z10 x Z36 x Z14 x Z21?
I think the first stage is to write 10, 36, 14 and 21 as products of primes but I'm not sure where to go from there.
2. Originally Posted by d_p_osters
How would I go about finding the torsion coefficients of Z10 x Z36 x Z14 x Z21?
I think the first stage is to write 10, 36, 14 and 21 as products of primes but I'm not sure where to go from there.
I am not exactly sure what you are asking, but it seems to me an exercise in the classification theorem for abelian groups. You need to bring this to standard form. Note $\mathbb{Z}_{10} \simeq \mathbb{Z}_2\times \mathbb{Z}_5$, and so on. Now replace each factor with the equivalent isomorphic form to bring this expression into a direct product of groups of the form $\mathbb{Z}_{p^k}$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9193902015686035, "perplexity": 164.51141407744882}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257827791.21/warc/CC-MAIN-20160723071027-00126-ip-10-185-27-174.ec2.internal.warc.gz"}
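A worked continuation of the reply above (added illustration, not part of the original thread): $\mathbb{Z}_{10} \simeq \mathbb{Z}_2\times\mathbb{Z}_5$, $\mathbb{Z}_{36} \simeq \mathbb{Z}_4\times\mathbb{Z}_9$, $\mathbb{Z}_{14} \simeq \mathbb{Z}_2\times\mathbb{Z}_7$, $\mathbb{Z}_{21} \simeq \mathbb{Z}_3\times\mathbb{Z}_7$. Grouping the prime powers (for 2: 4, 2, 2; for 3: 9, 3; for 5: 5; for 7: 7, 7) and multiplying the largest from each prime, then the next largest, and so on, gives the invariant factors $1260 = 4\cdot9\cdot5\cdot7$, $42 = 2\cdot3\cdot7$ and $2$. Hence $\mathbb{Z}_{10}\times\mathbb{Z}_{36}\times\mathbb{Z}_{14}\times\mathbb{Z}_{21} \simeq \mathbb{Z}_2\times\mathbb{Z}_{42}\times\mathbb{Z}_{1260}$, and the torsion coefficients are $2 \mid 42 \mid 1260$ (check: $2\cdot42\cdot1260 = 105840 = 10\cdot36\cdot14\cdot21$).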
https://www.gradesaver.com/textbooks/math/calculus/calculus-3rd-edition/chapter-14-calculus-of-vector-valued-functions-14-1-vector-valued-functions-exercises-page-710/19 | Calculus (3rd Edition)
Circle in the xz-plane of radius $1$ centered at $(0,0,4)$
We put $$x=\sin t,\quad y=0, \quad z=4+\cos t,$$ hence we get $$x^2+(z-4)^2= \sin^2t+\cos^2t=1,$$ which is a circle in the xz-plane of radius $1$ centered at $(0,0,4)$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9992293119430542, "perplexity": 336.85991209057204}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669795.59/warc/CC-MAIN-20191118131311-20191118155311-00127.warc.gz"}
http://math.stackexchange.com/users/4447/ondrej-sotolar?tab=summary | Ondrej Sotolar
### Questions (12)
4 Specific ten digit number 3 Trigonometric equality 2 Questions about finite sequences of natural numbers $(a_1, \dots, a_n)$ with distinct partial sums 2 Combinatorics question concerning two square-board game pieces 2 Generating function for an Arithmetic mean
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3157765865325928, "perplexity": 13495.489701464368}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246654687.23/warc/CC-MAIN-20150417045734-00163-ip-10-235-10-82.ec2.internal.warc.gz"}
https://wusa.ca/about/your-money/funding/student-life-endowment-funding-application-form/ | You can upload up to 4 files.
Please ensure that the estimate is a formal one. Include Work Requests and cost estimates from Plant Operations. If you are putting in a request for hard goods (e.g. microwaves, furniture, equipment) you MUST include at least TWO invoices. Without this documentation, the committee will not consider your request. If your file is too large, then e-mail it to [email protected]
Files must be less than 64 MB.
Allowed file types: gif jpg jpeg png txt pdf doc docx ppt pptx.
Please attach a timeline of activities for your project including any prep work. The timeline should include a proposed date of completion. Please note that without the documentation, your application will not be considered. Files must be less than 64 MB. Allowed file types: pdf doc docx ppt pptx.
Please make sure of the following before submitting the form or your application will not be considered: | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8254453539848328, "perplexity": 3789.8889768582508}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945372.38/warc/CC-MAIN-20230325191930-20230325221930-00461.warc.gz"} |
https://www.physicsforums.com/threads/effective-field-theory-and-wilsons-renormalization-group.587206/ | # Effective field theory and Wilson's renormalization group
• #1
## Main Question or Discussion Point
I have just read my first course on Quantum Field Theory (QFT) and have followed the book by Srednicki. I have peeked a bit in the books by Peskin & Schroeder and Ryder also but mostly Srednicki as this was the main course book. Now, I have to do a project in a topic not covered in the course and I have chosen Effective field theory (EFT), following the approach by Wilson. I have read the chapter(s) in Srednicki related to this topic a few times and understand (I think) the gist of the Renormalization Group (RG) and what it is about, but I can't say I understand the chapter on EFT (chapter 29 in Srednicki). I don't really understand what the EFT approach means and I was hoping that some of you could help me clear this up.
As I understand it, when we use the MS-bar renormalization scheme, the parameters in the lagrangian no longer represent the physical parameters (for example, the m term is not the physical mass) and we can find equations that tell us how the lagrangian parameters vary with the fake parameter μ (any final answer can't depend on μ). This can also be done with the RG approach in a more formal way (as I understand it, the result is the same - we get a group of equations that tell us how the lagrangian parameters vary).
However, the next chapter on EFT:s I struggle to understand. I get that we have a cut-off $\Lambda$ for the momentum and that we can try to see what the theory tells us at momenta well below the cut-off but then a new cut-off $\Lambda_0$ is introduced and I must say I don't understand the difference between the two.
Something I would also like to get some help with is how Wilson's approach with EFT:s relates to renormalization. Why does the EFT approach remove the necessity for a theory to be renormalizable?
Any help and clarifications are highly appreciated!
• #2
blechman
Check out this review
http://arxiv.org/pdf/nucl-th/9506035.pdf
Actually, my colleague and I are working on a textbook on "Effective Field Theory", but it won't be done for a while. Stay tuned.....
• #3
You should also read Wilson's original papers on the topic. They're pretty clear and really show why renormalization is a completely sensible and physical thing to do, unlike the "sweeping infinities under the rug" perspective which many people don't like (for some reason I never understood).
• #4
I think the difference in Wilson's approach is that you are not taking the cutoff to infinity, and hence your results will be finite, whereas in standard renormalization you are taking the cutoff to infinity. You can begin from a renormalizable theory (i.e. one in which the cutoff can go to infinity) and integrate out ultraviolet modes, allowing you to henceforth integrate to a finite cutoff, but this comes at the expense of having to calculate an infinite number of new interactions which would ordinarily be unrenormalizable. Luckily, however, you don't need to know the values of the coefficients for the new unrenormalizable interactions, because by integrating out more momenta, the coefficients take on values determined by the renormalizable terms at the momenta you just integrated out. What has me confused is that when your cutoff becomes extremely low from integrating momenta out, these unrenormalizable terms become very important, as they go as 1/cutoff. So at very low energies shouldn't the unrenormalizable terms dominate over the renormalizable ones?
As far as I can understand, Srednicki uses BPH/counter-term renormalization. So the bare parameters in the theory are actually physical parameters: you perturb about the physical system but this comes at the expense of requiring counter-terms.
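[Aside added for orientation; this standard power-counting estimate is not from any post in the thread. Writing the Wilsonian effective Lagrangian below a cutoff $\Lambda$ schematically as
$$\mathcal{L}_{\rm eff} = \mathcal{L}_{d\le 4} + \sum_i \frac{c_i}{\Lambda^{d_i-4}}\,\mathcal{O}_i\,,$$
a dimension-$d_i > 4$ operator contributes to low-energy amplitudes as roughly $c_i\,(E/\Lambda)^{d_i-4}$, because the inverse powers of $\Lambda$ must be compensated by powers of the external energy $E$. For $E \ll \Lambda$ the nonrenormalizable terms are therefore suppressed rather than dominant, which is also why the effective theory remains predictive without being renormalizable.]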
• #5
Hi
see the following lecture by Rothstein TASI Lectures on Effective Field Theories
• #6
references on Effective field theory
Hello,
If you are interested in Effective Field Theory, then look at the titles below.
References on Effective Field theory
1- Georgi, Effective Field theory www.people.fas.harvard.edu/~hgeorgi/review.pdf
2-A. Pich, http://arxiv.org/pdf/hep-ph/9806303
3- Video lectures by Cliff Burgess at the website http://pirsa.org/C09020
4- arxiv.org/pdf/hep-th/0701053 by Burgess
5- Five lectures on effective field theory http://arxiv.org/abs/nucl-th/0510023 by Kaplan
6-http://arxiv.org/abs/nucl-th/9506035 also by Kaplan
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7493553757667542, "perplexity": 482.9162199420807}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251669967.70/warc/CC-MAIN-20200125041318-20200125070318-00018.warc.gz"}
https://dash.harvard.edu/handle/1/10445609 | # Structural specializations of $$\alpha_4 \beta_7$$, an integrin that mediates rolling adhesion
Title: Structural specializations of $$\alpha_4 \beta_7$$, an integrin that mediates rolling adhesion
Author: Chen, JianFeng; Yu, Yamei; Zhu, Jianghai; Mi, Li-Zhi; Walz, Thomas; Sun, Hao; Springer, Timothy A.
Note: Order does not necessarily reflect citation order of authors.
Citation: Yu, Yamei, Jianghai Zhu, Li-Zhi Mi, Thomas Walz, Hao Sun, JianFeng Chen, and Timothy A. Springer. 2012. Structural specializations of $$\alpha_4 \beta_7$$, an integrin that mediates rolling adhesion. The Journal of Cell Biology 196(1): 131-146.
Full Text & Related Files: 3255974.pdf (3.483Mb; PDF)
Abstract: The lymphocyte homing receptor integrin $$\alpha_4 \beta_7$$ is unusual for its ability to mediate both rolling and firm adhesion. $$\alpha_4 \beta_1$$ and $$\alpha_4 \beta_7$$ are targeted by therapeutics approved for multiple sclerosis and Crohn's disease. Here, we show by electron microscopy and crystallography how two therapeutic Fabs, a small molecule (RO0505376), and mucosal adhesion molecule-1 (MAdCAM-1) bind $$\alpha_4 \beta_7$$. A long binding groove at the $$\alpha_4 \beta_7$$ interface for immunoglobulin superfamily domains differs in shape from integrin pockets that bind Arg-Gly-Asp motifs. RO0505376 mimics an Ile/Leu-Asp motif in $$\alpha_4$$ ligands, and orients differently from Arg-Gly-Asp mimics. A novel auxiliary residue at the metal ion–dependent adhesion site in $$\alpha_4 \beta_7$$ is essential for binding to MAdCAM-1 in $$Mg^{2+}$$ yet swings away when RO0505376 binds. A novel intermediate conformation of the $$\alpha_4 \beta_7$$ headpiece binds MAdCAM-1 and supports rolling adhesion. Lack of induction of the open headpiece conformation by ligand binding enables rolling adhesion to persist until integrin activation is signaled.
Published Version: doi:10.1083/jcb.201110023
Other Sources: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3255974/pdf/
Terms of Use: This article is made available under the terms and conditions applicable to Other Posted Material, as set forth at http://nrs.harvard.edu/urn-3:HUL.InstRepos:dash.current.terms-of-use#LAA
Citable link to this page: http://nrs.harvard.edu/urn-3:HUL.InstRepos:10445609 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43372267484664917, "perplexity": 28074.406342831116}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648000.93/warc/CC-MAIN-20180322190333-20180322210333-00657.warc.gz"}
https://www.jask.or.kr/articles/xml/7YPO/ | Research Article
The Journal of the Acoustical Society of Korea. May 2021. 254-260
# MAIN
• I. Introduction
• II. Equivalent circuit of the loudspeaker with acoustic terminals
• III. Verification through experiment
• 3.1 Extraction of $C_{af}$ and $Z_{ar}$
• 3.2 Determination of acoustic impedance and sound absorption coefficient
• IV. Conclusions
I. Introduction
To reduce noise in a room, sound absorptive materials are a prerequisite and are therefore widely used in buildings. When developing sound absorptive materials, sound absorption performance should be monitored at every development step for better results. Also, in designing a new acoustic environment for a room, measuring the sound absorption of the candidate materials helps guarantee a correct estimate of the reverberation time.
Methods of sound absorption coefficient measurement are internationally standardized in ISO354 (reverberation room method)[1] or ISO10534-2 (transfer function method in the impedance tube).[2] The reverberation room method is based on the condition of a 100 % diffuse field and therefore gives practically realistic values of sound absorption coefficients. It has a drawback, however, in that it sometimes gives sound absorption coefficients greater than 1.0 due to the edge effect of the specimen installation, which is not theoretically acceptable. Also, an internal volume of 200 m³, precise temperature and humidity control, 5 microphones and the relevant measuring systems mean a big investment.
The transfer function method was proposed in the 1980s[3,4] and has been internationally standardized, but its problems are that only the sound absorption coefficient under the condition of normal incidence can be obtained, the hassle of calibrating two or three microphones, and uncertainty in the low frequency ranges.
In addition to these two methods, other attempts were also proposed. Farina proposed the sound intensity method rather than the transfer function method, and emphasized that the method was more advanced.[5]
In particular, new attempts are still limited, as neither the reverberation chamber method nor the transfer function method provides a reliable solution for the measurement of the sound absorption coefficient below 100 Hz.[6]
In this paper, a new method of measuring the acoustic properties of materials, which is especially consistent at low frequencies, is proposed: it does not employ microphones or a long tube to measure sound pressure at all, and instead utilizes the equivalent circuit of the loudspeaker to extract the acoustic impedance of the specimen, which in turn converts to the sound absorption coefficient with ease.
II. Equivalent circuit of the loudspeaker with acoustic terminals
If a specimen is installed near the loudspeaker diaphragm as shown in Fig. 1, then when the volume velocity of the diaphragm is $U$, a volume velocity $U_f$ different from $U$ is applied to the specimen surface due to the elasticity of the air layer formed between the specimen and the diaphragm. If the acoustic impedance at the back of the diaphragm is $Z_{ar}$ and the acoustic impedance at the surface of the specimen installed in front of the diaphragm is $Z_{af}$, the equivalent circuit of the loudspeaker at low frequencies, where the diaphragm vibrates as a piston, can be expressed as shown in Fig. 2,
### Equivalent circuit of the loudspeaker with a specimen near the front of the diaphragm.
where,
$R_e$ : series DC resistance
$L_e$ : series coil inductance
$L_{ep}$ : parallel inductance associated with coil loss $R_{ep}$
$R_{ep}$ : parallel resistance associated with inductance $L_{ep}$
$M_{ms}, C_{ms}, R_{ms}$ : mass, compliance, and loss of the moving diaphragm of the loudspeaker, respectively
$Z_{ar}$ : acoustic impedance at the rear of the loudspeaker
$Z_{af}$ : acoustic impedance of the test specimen
$C_{af}$ : compliance of the cavity between the loudspeaker diaphragm and the test specimen
$Bl$ : force factor of the loudspeaker
$S_d$ : effective diaphragm area of the loudspeaker
$Z_a, Z_m, Z_e$ : acoustic, mechanical, and electrical impedance of the loudspeaker seen from each terminal, respectively.
Acoustic impedance, which is defined as pressure over volume velocity, is written as
##### (1)
$Z_a = \dfrac{p}{U} = Z_{ar} + \dfrac{1}{j\omega C_{af} + 1/Z_{af}}.$
For simple manipulation of the equations, the motional impedance $Z_{mot}$ can be defined as
##### (2)
$Z_{mot} = j\omega M_{ms} + \dfrac{1}{j\omega C_{ms}} + R_{ms}.$
Mechanical impedance of the loudspeaker reduces to
##### (3)
$Z_m = \dfrac{f}{u} = Z_{mot} + S_d^2 Z_a.$
Electrical impedance of the loudspeaker incorporating electrical, mechanical and acoustical elements can be written as
##### (4)
$Z_e = R_e + j\omega L_e + \dfrac{j\omega L_{ep} R_{ep}}{j\omega L_{ep} + R_{ep}} + \dfrac{(Bl)^2}{Z_m}.$
Also, the electrical impedance excluding the resistances and inductances of the voice coil can be written as
##### (5)
$Z_g = \dfrac{(Bl)^2}{Z_m} = Z_e - \left( R_e + j\omega L_e + \dfrac{j\omega L_{ep} R_{ep}}{j\omega L_{ep} + R_{ep}} \right).$
Eq. (5) implies that $Z_m$ can be indirectly calculated by measuring the electrical impedance $Z_e$ of the loudspeaker and extracting the Thiele/Small parameters.
If the specimen is replaced with a rigid surface, the front acoustic impedance $Z_{af}$ becomes infinite and the acoustic impedance $Z_a$ simplifies to
##### (6)
$Z_a = Z_{ar} + \dfrac{1}{j\omega C_{af}}.$
In the condition that the front cavity of the loudspeaker is small and the frequency is low, $1/(j\omega C_{af})$ becomes dominant, and $C_{af}$ is connected in series with the $C_{ms}$ of the loudspeaker. This lowers the total compliance, which increases the resonance frequency of the loudspeaker. $C_{af}$ can therefore be readily calculated by
##### (7)
$C_{af} = \dfrac{S_d^2}{M_{ms}\,(\omega_0'^2 - \omega_0^2)},$
where $\omega_0$ is the loudspeaker resonance frequency in free air and $\omega_0'$ is that when the front cavity is small as in Fig. 1.
Also, volume of the front cavity of the loudspeaker can be calculated by the simple relation of
##### (8)
$V = \rho_0 c^2 C_{af}.$
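As a numerical companion to Eqs. (7) and (8), here is a minimal Python sketch; all parameter values are assumed for illustration only (in particular `Mms` is a placeholder, not a measured value from this experiment):

```python
import numpy as np

f0, f0b = 104.0, 435.0            # resonance: free air / blocked front [Hz]
Mms = 3.0e-3                      # assumed moving mass [kg]
Sd = np.pi * (0.070 / 2) ** 2     # effective diaphragm area [m^2]
rho0, c = 1.21, 343.0             # air density [kg/m^3], speed of sound [m/s]

w0, w0b = 2 * np.pi * f0, 2 * np.pi * f0b
Caf = Sd**2 / (Mms * (w0b**2 - w0**2))   # Eq. (7)
V = rho0 * c**2 * Caf                    # Eq. (8)
print(f"Caf = {Caf:.3e} m^5/N, V = {V * 1e6:.1f} cm^3")
```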
From Eqs. (3) and (6), $Zm$ is expressed as
##### (9)
$Z_m = Z_{mot} + S_d^2 \left( Z_{ar} + \dfrac{1}{j\omega C_{af}} \right) = Z_{motc} + S_d^2 Z_{ar},$
where
##### (10)
$Z_{motc} = Z_{mot} + \dfrac{S_d^2}{j\omega C_{af}}.$
Eq. (9) is then reduced to
##### (11)
$Z_{ar} = \dfrac{Z_m - Z_{motc}}{S_d^2}.$
Boulandet[7] used an enclosure to measure the acoustic impedance $Z_{ar}$ of the loudspeaker, but in this study it can be measured easily by changing the loudspeaker compliance using only the space formed between the front of the loudspeaker and a hard surface. Several manipulations of the above equations yield the acoustic impedance of the specimen $Z_{af}$ as
##### (12)
$Z_{af} = \left[ \left( \dfrac{(Bl)^2/Z_g - Z_{mot}}{S_d^2} - Z_{ar} \right)^{-1} - j\omega C_{af} \right]^{-1},$ with $Z_g$ as in Eq. (5).
Absorption coefficient $α$ is then calculated using the relation of
##### (13)
$\alpha = 1 - \left| \dfrac{Z_{af} - Z_0}{Z_{af} + Z_0} \right|^2,$
where $Z_0$ is the acoustic impedance of a plane wave traveling in air.
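The chain of Eqs. (5) and (11)-(13) amounts to a short post-processing routine. The sketch below is one possible Python implementation under those equations; the function name and argument list are our own choices, and the measured electrical impedance `Ze` (at angular frequency `w`) and the extracted Thiele/Small parameters must be supplied by the user:

```python
import numpy as np

def absorption_coefficient(w, Ze, Re, Le, Lep, Rep, Bl, Mms, Cms, Rms,
                           Sd, Caf, Zar, rho0=1.21, c=343.0):
    """Sound absorption coefficient from loudspeaker impedance, Eqs. (5)-(13)."""
    jw = 1j * w
    coil = Re + jw * Le + (jw * Lep * Rep) / (jw * Lep + Rep)
    Zg = Ze - coil                               # Eq. (5); Zm = (Bl)^2 / Zg
    Zm = Bl**2 / Zg
    Zmot = jw * Mms + 1.0 / (jw * Cms) + Rms     # Eq. (2)
    Za = (Zm - Zmot) / Sd**2                     # Eq. (3) solved for Za
    Zaf = 1.0 / (1.0 / (Za - Zar) - jw * Caf)    # Eq. (1) inverted; cf. Eq. (12)
    Z0 = rho0 * c / Sd                           # plane-wave acoustic impedance
    return 1.0 - np.abs((Zaf - Z0) / (Zaf + Z0)) ** 2   # Eq. (13)
```

All quantities can be NumPy arrays over frequency, so the whole absorption curve is obtained in one call.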
III. Verification through experiment
3.1 Extraction of $C_{af}$ and $Z_{ar}$
To verify the effectiveness of this study, the loudspeaker impedance has been measured. The loudspeaker used in the experiment was a small full-range loudspeaker with an effective diameter of 70 mm, and the Praxis system of Liberty Instruments was used as the measurement and analysis system. Using the impedance measurement data along with the added-mass method, the Thiele/Small parameters, which are the values of the elements of the equivalent circuit in Fig. 2, have been extracted. The loudspeaker impedance graph measured in free air and that with the front of the loudspeaker blocked by a rigid panel are shown in Fig. 3. The loudspeaker resonated at 104 Hz in free air and at 435 Hz when the loudspeaker surface was blocked, and $C_{af}$ has been calculated using Eq. (7).
### (Color available online) (a) Magnitude of electrical impedance of the loudspeaker in free air, (b) phase of it, and (c) magnitude when the front side is blocked with a rigid surface as in Fig. 1.
Since $Z_m$ and $Z_{mot}$ used to obtain $Z_{ar}$ have $\omega$ in the denominator, the lower the frequency, the larger their values. This means that the derivation of precise Thiele/Small parameters is very important to obtain an accurate $Z_{ar}$. Even with very small Thiele/Small parameter errors, $Z_{ar}$ reacts strongly and can take impractical values; Boulandet's study[7] also exemplifies this phenomenon.
Theoretical real and imaginary values of the acoustic radiation impedance of the piston radiator and the pulsating sphere similar to that of a loudspeaker decrease as the frequency decreases as shown in Eqs. (14) and (15),[8]
##### (14)
$R_a \propto \dfrac{\rho_0 c}{S_d} (ka)^2.$
##### (15)
$X_a \propto \dfrac{\rho_0 c}{S_d} (ka).$
For the case where the air cavity at the rear of the specimen is 50 mm, the acoustic radiation impedance $Z_{ar}$ has been calculated using Eq. (11) and is shown in Fig. 4. Above the mid-frequency range, the impedance increases rapidly, showing a graph similar to the theoretical slope of the radiation impedance of the piston radiator or pulsating sphere as in Eqs. (14) and (15), but at low frequencies below 200 Hz the acoustic radiation impedance increases as the frequency decreases. This is a phenomenon that appears in other papers,[7,9] and a more in-depth analysis of the cause is required.
### Measured acoustic radiation impedance $Z_{ar}$ at the rear side of the loudspeaker.
3.2 Determination of acoustic impedance and sound absorption coefficient
The loudspeaker impedance has been measured by flush-mounting the specimen into a cylindrical pipe with an inner diameter of 48 mm and installing the loudspeaker in close contact with the surface of the specimen as shown in Fig. 5.
### (Color available online) Experimental apparatus with (a) the specimen installed, (b) the loudspeaker installed, respectively.
Fig. 6 shows a graph of loudspeaker electrical impedance curves while changing the air cavity on the back of the specimen. Q is largest when there is no rear air cavity, and it decreases as the rear air cavity deepens. As the condition of the specimen changes, the electrical impedance of the loudspeaker changes accordingly, which implies that the impedance of the loudspeaker acts as an important clue in measuring the acoustic impedance and the sound absorption coefficient. One thing to note is that above about 1 kHz the impedance graph is almost identical regardless of the air cavity, which means that the loudspeaker no longer has sufficient sensitivity as a sensor.
### (Color available online) Electrical impedance (magnitude) of the loudspeaker installed at the surface of the specimen with (a) 0 mm, (b) 50 mm, (c) 100 mm, and (d) 200 mm air cavity at the back of the specimen.
Fig. 7(a) is the measured graph of the acoustic impedance at the surface of the specimen. The acoustic impedance of the plane wave has been overlaid in Fig. 7(b) for comparison of the values. Note that the larger the separation between (a) and (b), the greater the reflection. Above about 1 kHz, the graph fluctuates very strongly because the sensitivity of the loudspeaker as a sensor is low and the lumped-element assumption is difficult to apply due to the breakup vibration of the loudspeaker.
### (Color available online) (a) Measured acoustic impedance of the specimen, (b) acoustic impedance of plane wave ($\rho_0 c/S_d$).
Fig. 8 shows the sound absorption coefficient measured by the proposed method. It is surprising that even for low frequencies below 100 Hz, a very smooth curve has been obtained, which highlights that the sound absorption coefficient at the very low frequency ranges, which has been difficult to obtain using the conventional techniques, can be obtained with this method.
### Sound absorption coefficient obtained by the proposed method with the same specimen of Fig. 1.
For comparison, the sound absorption coefficient has also been measured by the transfer function method using the same specimen. The diameter of the impedance tube was 48 mm, maintaining plane-wave propagation up to 3 kHz, and the spacing between the two microphones was 50 mm. Fig. 9 shows the sound absorption coefficients obtained by the FFT of the impulse response between the loudspeaker and the microphones with two different window lengths (70 msec, 9 msec). Above 400 Hz, the two windows resulted in similar graphs, but in the low frequency band below 400 Hz, very dissimilar graphs were obtained, indicating that it is difficult to trust the data in those low frequency ranges. The uncertainty of the transfer function method using two microphones is very large in the low frequency band. In addition, the short 9 msec window, which uses only the first reflection from the specimen, could not resolve frequencies below 110 Hz.
### (Color available online) Sound absorption coefficient of the specimen with 50 mm air cavity measured by the transfer function method (a) when analyzed with 70 msec time window, and (b) with 9 msec time window.
In summary, it can be confirmed that the method of measuring the sound absorption coefficient by measuring the loudspeaker impedance can be an excellent method for measuring the sound absorption coefficient in the low frequency ranges.
IV. Conclusions
Measurement of the acoustic impedance or sound absorption coefficient of a material is very important to guarantee the acoustic performance of a building. The ISO 354 international standard requires a large investment, and its sound absorption coefficient can be unrealistic due to the influence of the specimen edges. ISO 10534-2 has the weaknesses that it is valid only for normal incidence and that it is difficult to trust at low frequencies with a tube of practical length.
In this study, a new method has been proposed which can measure the acoustic impedance or the sound absorption coefficient only by measuring the loudspeaker impedance without the need for any microphones. The advantages of this method are as follows:
1) No need for a microphone or long tube.
2) It measures the surface acoustic condition of the specimen instead of measuring the traveling normal incidence with reflection.
3) Measurement at low frequencies that were previously impossible or unreliable is possible.
This method, however, has the following limitations:
1) For reliable results, it is necessary to extract very accurate T/S parameters of the loudspeaker.
2) It is valid only for frequencies below the breakup vibration of the loudspeaker.
As for the applications, since this measurement method is simple and portable, it can be easily applied to in-situ measurements.
## References
1
ISO 354, Acoustics -Measurement of Sound Absorp tion in a Reverberation Room, 2003.
2
ISO 10534-2, Acoustics - Determination of Sound Absorption Coefficient and Impedance in Impedance Tubes - Part 2, 1998.
3
J. Y. Chung and D. A. Blaser, "Transfer function method of measuring in-duct acoustic properties. Part I: Theory," J. Acoust. Soc. Am. 68, 907-913 (1980). 10.1121/1.384778
4
J. Y. Chung and D. A. Blaser, "Transfer function method of measuring in-duct acoustic properties. Part II: Experiment," J. Acoust. Soc. Am. 68, 914-921 (1980). 10.1121/1.384779
5
A. Farina and A. Torelli, "Measurement of the sound absorption coefficient of materials with a new sound intensity technique," Proc. AES 102th Convention, paper no. 4409 (1997).
6
D. Shearer, Measuring absorption below 100Hz with a P-U sensor, (Master's. Thesis, South Bank University, 2016).
7
R. Boulandet, "Sensorless measurement of the acoustic impedance of a loudspeaker," Proc. 23rd ICA. 7353- 7360 (2019).
8
L. E. Kinsler, A. R. Frey, A. B. Coppens, and J. V. Sanders, Fundamentals of Acoustics, 4th ed (Wiley, New York, 1999), pp. 184-187.
9
X. Wang and Y. Xiang, "Probes design and experimental measurement of acoustic radiation resistance," J. Acoust. Vib. 22, 252-259 (2017). 10.20855/ijav.2017.22.2471
http://mathhelpforum.com/advanced-math-topics/161580-fourier-series-addition-cosine-sine-different-frequencies2.html | # Thread: Fourier series - addition of cosine and sine of different frequencies2
1. ## Fourier series - addition of cosine and sine of different frequencies2
I'm trying to find the fourier series of,
f(t) = cos(4t) + sin(6t)
I know the period is pi and wo = 2.
The equation I have been using is
an = (2 / pi) * integral(0 to pi) [ cos(4t)*cos( wo*n*t)]
then I add this to the corresponding "an" of sin(6t). I keep getting zeros for all constants ao, an, and bn. My question is, what do I use for "wo?" Do I use 2 for all constants? Or do I use the particular wo for each term, e.g., 4 for cos(4t) and 6 for sin(6t). Are my limits of integration correct? I thought you went to 0 to T where T is the period.
Thanks
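For reference, a direct orthogonality check with T = pi and wo = 2 (all integrals over one full period):

an = (2/pi) * integral(0 to pi) [ (cos(4t) + sin(6t)) * cos(2nt) ] dt = 1 if n = 2, else 0
bn = (2/pi) * integral(0 to pi) [ (cos(4t) + sin(6t)) * sin(2nt) ] dt = 1 if n = 3, else 0

so the Fourier series is just cos(4t) + sin(6t) itself. The key points: use the common wo = 2 in every integral (not 4 or 6), and integrate from 0 to T = pi; the cross terms vanish because sin(2kt) and cos(2kt) integrate to zero over [0, pi] for nonzero integers k.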
https://www.arxiv-vanity.com/papers/1205.6926/ |
# Indirect Coulomb Energy for Two-Dimensional Atoms
Rafael D. Benguria and Matěj Tušek, Departamento de Física, P. Universidad Católica de Chile
###### Abstract.
In this manuscript we provide a family of lower bounds on the indirect Coulomb energy for atomic and molecular systems in two dimensions in terms of a functional of the single particle density with gradient correction terms.
## 1. Introduction
Since the advent of quantum mechanics, the impossibility of solving exactly problems involving many particles has been clear. These problems are of interest in such areas as atomic and molecular physics, condensed matter physics, and nuclear physics. It was, therefore, necessary from the early beginnings to estimate various energy terms of a system of electrons as functionals of the single particle density ρ, rather than as functionals of their wave function ψ. The first estimates of this type were obtained by Thomas and Fermi in 1927 (see [14] for a review), and by now they have given rise to a whole discipline under the name of Density Functional Theory (see, e.g., [1] and references therein). In Quantum Mechanics of many particle systems the main object of interest is the wavefunction ψ (an element of the N-fold antisymmetric tensor product of the one-particle space). More explicitly, for a system of N fermions, ψ = ψ(x₁, …, x_N) is antisymmetric under exchange of any two of its arguments, in view of Pauli's Exclusion Principle, and it is normalized to one. Here, x_i denote the coordinates of the i-th particle. From the wavefunction one can define the one–particle density (single particle density) as
ρ_ψ(x) = N ∫_{ℝ^{3(N−1)}} |ψ(x, x₂, …, x_N)|² dx₂ ⋯ dx_N,   (1)
and from here it follows that ∫_{ℝ³} ρ_ψ(x) dx = N, the number of particles, and ρ_ψ(x) is the density of particles at x. Notice that since ψ is antisymmetric, |ψ|² is symmetric, and it is immaterial which variable is set equal to x in (1).
In Atomic and Molecular Physics, given that the expectation value of the Coulomb attraction of the electrons by the nuclei can be expressed in closed form in terms of ρ_ψ, the interest focuses on estimating the expectation value of the kinetic energy of the system of electrons and the expectation value of the Coulomb repulsion between the electrons. Here, we will be concerned with the latter. The most natural approximation to the expectation value of the Coulomb repulsion between the electrons is given by
D(ρ_ψ, ρ_ψ) = (1/2) ∫ ρ_ψ(x) (1/|x−y|) ρ_ψ(y) dx dy,   (2)
which is usually called the direct term. The remainder, i.e., the difference between the expectation value of the electronic repulsion and D(ρ_ψ, ρ_ψ), say E, is called the indirect term. In 1930, Dirac [6] gave the first approximation to the indirect Coulomb energy in terms of the single particle density. Using an argument with plane waves, he approximated E by
E ≈ −c_D ∫ ρ_ψ^{4/3} dx,   (3)
where c_D = (3/4)(3/π)^{1/3} ≈ 0.74 (see, e.g., [20], p. 299). Here we use units in which the absolute value of the charge of the electron is one. The first rigorous lower bound for E was obtained by E.H. Lieb in 1979 [13], using the Hardy–Littlewood Maximal Function [27]. There he found that E ≥ −8.52 ∫ρ_ψ^{4/3} dx. The constant was substantially improved by E.H. Lieb and S. Oxford in 1981 [16], who proved the bound
E ≥ −C ∫ ρ_ψ^{4/3} dx,   (4)
with C = 1.68. In their proof, Lieb and Oxford used Onsager's electrostatic inequality [22], and a localization argument. The best value for C is unknown, but Lieb and Oxford [16] proved that it is larger than or equal to 1.234. The Lieb–Oxford bound was later refined by Chan and Handy, in 1999 [5]. Since the work of Lieb and Oxford [16], there has been a special interest in quantum chemistry in constructing corrections to the Lieb–Oxford term involving the gradient of the single particle density. This interest arises with the expectation that states with a relatively small kinetic energy have a smaller indirect part (see, e.g., [11, 24, 28] and references therein). Recently, Benguria, Bley, and Loss obtained an alternative to (4), which has a lower constant (close to 1.45) at the expense of adding a gradient term (see Theorem 1.1 in [2]), which we state below in a slightly modified way,
###### Theorem 1.1 (Benguria, Bley, Loss [2]).
For any normalized wave function ψ and any ε > 0 we have the estimate
E(ψ) ≥ −1.4508 (1+ε) ∫_{ℝ³} ρ_ψ^{4/3} dx − (3/(2ε)) (√ρ_ψ, |p|√ρ_ψ)   (5)
where
(√ρ, |p|√ρ) := ∫_{ℝ³} |ˆ√ρ(k)|² |2πk| dk = (1/(2π²)) ∫_{ℝ³}∫_{ℝ³} |√ρ(x) − √ρ(y)|² / |x−y|⁴ dx dy.   (6)
Here, ˆf denotes the Fourier transform
ˆf(k) = ∫_{ℝ³} e^{−2πi k·x} f(x) dx.
###### Remarks.
i) For many physical states the contribution of the last two terms in (5) is small compared with the contribution of the first term. See, e.g., the Appendix in [2];
ii) For the second equality in (6) see, e.g., [15], Section 7.12, equation (4), p. 184;
iii) It was already noticed by Lieb and Oxford (see the remark after equation (26), p. 261 on [16]), that somehow for uniform densities the Lieb–Oxford constant should be instead of ;
iv) In the same vein, J. P. Perdew [23], by employing results for a uniform electron gas in its low density limit, showed that in the Lieb–Oxford bound one ought to have (see also [11]).
After the work of Lieb and Oxford [16] many people have considered bounds on the indirect Coulomb energy in lower dimensions (in particular see, e.g., [10] for the one-dimensional case; [18], [21], [25], and [26] for the two-dimensional case, which is important for the study of quantum dots). Recently, Benguria, Gallegos, and Tušek [4] gave an alternative to the Lieb–Solovej–Yngvason bound [18], with a constant much closer to the numerical values proposed in [26] (see also the references therein) to the expense of adding a gradient term:
###### Theorem 1.2 (Estimate on the indirect Coulomb energy for two dimensional atoms [4]).
Let ψ ∈ L²(ℝ^{2N}) be normalized to one and symmetric (or antisymmetric) in all its variables. Define
ρ_ψ(x) = N ∫_{ℝ^{2(N−1)}} |ψ|²(x, x₂, …, x_N) dx₂ ⋯ dx_N.
If and , then, for all ,
E(ψ)≡⟨ψ,N∑i
with
β = (4/3)^{3/2} √(5π − 1) ≃ 5.9045.   (8)
###### Remarks.
i) The constant β in (7) is substantially lower than the constant found in [18] (see equation (5.24) of lemma 5.3 in [18]).
ii) Moreover, the constant β is close to the numerical values of [25] (and references therein), but is not sharp.
In the literature there are, so far, three approaches to prove lower bounds on the exchange energy, namely:
i) The approach introduced by E.H. Lieb in 1979 [13], which uses as the main tool the Hardy–Littlewood Maximal Function [27]. This method was used in the first bound of Lieb [13]. Later it was used in [18] to obtain a lower bound on the exchange energy of two–dimensional Coulomb systems. It has the advantage that it may be applied in a wide class of problems, but it does not yield sharp constants.
ii) The use of Onsager's electrostatic inequality [22] together with localization techniques, introduced by Lieb and Oxford [16]. This method yields very sharp constants. It was used recently in [2] to get a new type of bounds including gradient terms (for three dimensional Coulomb systems). In some sense the constant recently obtained in [2] is the best possible (see the comments after Theorem 1.1). The only disadvantage of this approach is that it depends on the use of Onsager's electrostatic inequality (which in turn relies on the fact that the Coulomb potential is the fundamental solution of the Laplacian). Because of this, it cannot be used in the case of two–dimensional atoms, because 1/|x| is not the fundamental solution of the two–dimensional Laplacian.
iii) The use of the stability of matter of an auxiliary many particle system. This idea was used by Lieb and Thirring [19] to obtain lower bounds on the kinetic energy of a systems of electrons in terms of the single particle density. In connection with the problem of getting lower bounds on the exchange energy it was used for the first time in [4], to get a lower bound on the exchange energy of two–dimensional Coulomb systems including gradient terms. This method provides very good, although not sharp, constants.
As we mentioned above, during the last twenty years there has been a special interest in quantum chemistry in constructing corrections to the Lieb–Oxford term involving the gradients of the single particle density. This interest arises with the expectation that states with a relatively small kinetic energy have a smaller indirect part (see, e.g., [11, 24, 28] and references therein). While the form of the leading term (i.e., the dependence as an integral of ρ^{4/3} in three dimensions or as an integral of ρ^{3/2} in two dimensions) is dictated by Dirac's argument (using plane waves), there is no such a clear argument, nor a common agreement concerning the structure of the gradient corrections. The reason we introduced the particular gradient term, in our earlier work [4], was basically due to the fact that we already knew the stability of matter arguments for the auxiliary system. However, there is a whole one parameter family of such gradient terms that can be dealt with in the same manner. In this manuscript we obtain lower bounds including as gradient terms this one–parameter family. One interesting feature of our bounds is that the constant in front of the leading term remains the same (i.e., its value is independent of the parameter that labels the different possible gradient terms), while the constant in front of the gradient term is parameter dependent.
Our main result is the following theorem.
###### Theorem 1.3 (Estimate on the indirect Coulomb energy for two dimensional atoms).
Let γ > 1, and let α and δ be given by (12). Assume ρ_ψ is such that both integrals below are finite. Let C(γ) = 2^{1−γ/2} for 1 < γ ≤ 2, while C(γ) = 1 for γ > 2. Then, for all ε > 0 we have,
E(ψ)≡⟨ψ,N∑i
Here,
b̃² = (4/3)^{3/2} √(5π − 1) (1+ε) = β(1+ε)   (10)
where β is the same constant that appears in (8). Also,
ã² = (2^γ C(γ)/(3−γ)) ( (1/(βε)) ((γ−1)/(3−γ)) C(γ/(γ−1)) )^{γ−1}.   (11)
In particular, we have (with a fixed ε)
ã²|_{γ→1⁺} = √2.
###### Remarks.
i) Our previous Theorem 1.2 is a particular case of Theorem 1.3, for the value , .
ii) Notice that b̃² is independent of γ, and it is therefore the same as in [4].
iii) The constant ã² in front of the gradient term depends on the power γ and, of course, on ε. However, as γ → 1⁺, this constant converges to √2 independently of the value of ε.
In the rest of the manuscript we give a sketch of the proof of this theorem, which follows closely the proof of the particular result 1.2 in [4].
## 2. Auxiliary lemmas
First we need a standard convexity result.
###### Lemma 2.1.
Let x, y ∈ ℝ and p ≥ 1. Then
|x|^p + |y|^p ≤ C(p) |x + iy|^p,
where C(p) = 2^{1−p/2} for 1 ≤ p ≤ 2, and C(p) = 1 for p > 2. The constant C(p) is sharp.
###### Proof.
If p ≥ 2, the assertion follows, e.g., from the fact that the ℓ^p-norm is decreasing in p. On the other hand, for 1 ≤ p ≤ 2, the assertion follows from the concavity of the mapping t ↦ t^{p/2} for t ≥ 0. ∎
The next lemma is a generalization of the analogous result introduced in [3] and used in the proof of Theorem 1.2 above (see [4]). This lemma is later needed to prove a Coulomb Uncertainty Principle.
###### Lemma 2.2.
Let D_R stand for the disk of radius R centered at the origin. Moreover, let u and f ≥ 0 be smooth functions on D_R such that u(R) = 0. Then the following uncertainty principle holds
|∫_{D_R} [2u(|x|) + |x|u′(|x|)] f(x)^{1/α} dx| ≤ (1/α) ( C(γ) ∫_{D_R} |∇f(x)|^γ dx )^{1/γ} ( C(δ) ∫_{D_R} |x|^δ |u(|x|)|^δ |f(x)|^{3/(2α)} dx )^{1/δ},
where
1/α = 2γ/(3−γ),   1/γ + 1/δ = 1.   (12)
###### Proof.
Set g_j(x) := x_j u(|x|), j = 1, 2. Then we have
∫_{D_R} [2u(|x|) + |x|u′(|x|)] f(x)^{1/α} dx = Σ_{j=1}^2 ∫_{D_R} [∂_j g_j(x)] f(x)^{1/α} dx = Σ_j ∫_{D_R} f(x) ∂_j [g_j(x) f(x)^{1/α−1}] dx − (1/α − 1) Σ_j ∫_{D_R} f(x)^{1/α−1} g_j(x) ∂_j f(x) dx = −(1/α) ∫_{D_R} ⟨∇f(x), x⟩ u(|x|) f(x)^{1/α−1} dx.
In the last equality we integrated by parts and made use of the fact that u, and hence g_j, vanishes on the boundary ∂D_R. Next, the Hölder inequality implies
|∫_{D_R} [2u(|x|) + |x|u′(|x|)] f(x)^{1/α} dx| ≤ (1/α) ( ∫_{D_R} Σ_{j=1}^2 |∂_j f(x)|^γ dx )^{1/γ} ( ∫_{D_R} Σ_{j=1}^2 |x_j|^δ |u(|x|)|^δ |f(x)|^{(1/α−1)δ} dx )^{1/δ}.
The rest follows from Lemma 2.1. ∎
## 3. A stability result for an auxiliary two-dimensional molecular system
Here we follow the method introduced in [4]. That is, in order to prove our Lieb–Oxford type bound (with gradient corrections) in two dimensions we use a stability of matter type result on an auxiliary molecular system. This molecular system is an extension of the one studied in [4], which was adapted from the similar result in three dimensions discussed in [3] (this last one corresponds to the zero mass limit of the model introduced in [7, 8, 9]). We begin with a typical Coulomb Uncertainty Principle which uses the kinetic energy of the electrons in a ball to bound the Coulomb singularities.
###### Theorem 3.1.
For every smooth non-negative function ρ on the closed disk D_R, and for any a, b > 0 we have
abα |∫_{D_R} (1/|x| − 2/R) ρ(x) dx| ≤ (a^γ C(γ)/γ) ∫_{D_R} |∇ρ(x)^α|^γ dx + (b^δ C(δ)/δ) ∫_{D_R} ρ^{3/2} dx,
where α, γ, and δ are as in (12).
###### Proof.
In Lemma 2.2 we set u(|x|) = 1/|x| − 1/R and f = ρ^α. The assertion of the theorem then follows from the Young inequality with coefficients a and b. ∎
And now we introduce the auxiliary molecular system through the “energy functional”
ξ(ρ) = ã² ∫_{ℝ²} |∇ρ^α|^γ dx + b̃² ∫_{ℝ²} ρ^{3/2} dx − ∫_{ℝ²} V(x)ρ(x) dx + D(ρ, ρ) + U,   (13)
where
V(x) = Σ_{i=1}^K z/|x − R_i|,   D(ρ, ρ) = (1/2) ∫_{ℝ²×ℝ²} ρ(x) (1/|x−y|) ρ(y) dx dy,   U = Σ_{1≤i<j≤K} z²/|R_i − R_j|,
with z > 0 and distinct nuclear positions R_i ∈ ℝ². As above we assume γ > 1, with α and δ given by (12). The choice of α (in terms of γ) is made in such a way that the first two terms in (13) scale as one over a length. Indeed, let us denote
K(ρ) ≡ ã² ∫_{ℝ²} |∇ρ^α|^γ dx + b̃² ∫_{ℝ²} ρ^{3/2} dx.
Given any trial function ρ and setting ρ_λ(x) = λ²ρ(λx) (thus preserving the norm), it is simple to see that with our choice of α we have K(ρ_λ) = λ K(ρ).
If we now introduce constants a, b₁, b₂ > 0 so that
ã² = a^γ C(γ)/(2αγ),   b̃² = b₂^δ C(δ)/(2αδ) + b₁²   (14)
(again with α, γ, δ given by (12)), we may use the proof of [4, Lemma 2.5] step by step. In particular,
ξ(ρ) ≥ b₁² ∫_{ℝ²} ρ^{3/2} dx − ∫_{ℝ²} Vρ dx + a b₂ Σ_{j=1}^K ∫_{B_j} ( 1/(2|x − R_j|) − 1/D_j ) ρ(x) dx + D(ρ, ρ) + U,
where
D_j = (1/2) min{ |R_k − R_j| : k ≠ j },
and B_j is the disk with center R_j and radius D_j.
Thus as in [4, Lemma 2.5] we have that, for
z ≤ a b₂/2,   (15)
it holds
ξ(ρ) ≥ Σ_{j=1}^K (1/D_j) [ z²/8 − (4/(27 b₁⁴)) ( 2z³(π − 1) + π a³ b₂³ ) ].   (16)
Consequently we arrive at the following theorem.
###### Theorem 3.2.
For all non-negative functions ρ for which all the terms in (13) are finite, we have that
ξ(ρ)≥0, (17)
provided that
z ≤ max_{σ∈(0,1)} h(σ)   (18)
h(σ) = min{ (a/2) ( b̃² ((3−γ)/(γ−1)) C(γ/(γ−1))^{−1} (1−σ) )^{(γ−1)/γ},  (27/64) (b̃⁴/(5π−1)) σ² },   (19)
with ã², b̃² given by (14).
In order to arrive at (19) we set b₂ in (16) to be the smallest possible under the condition (15), i.e., a b₂ = 2z, and we introduced σ = b₁²/b̃².
## 4. Proof of Theorem 1.3
In this Section we give the proof of the main result of this paper, namely Theorem 1.3. We use an idea introduced by Lieb and Thirring in 1975 in their proof of the stability of matter [19] (see also the review article [12] and the recent monograph [17]). This idea was first used in this context in [4].
###### Proof of Theorem 1.3.
Consider the inequality (17), with K = N (where N is the number of electrons in our original system), z = 1 (i.e., the charge of the electrons), and R_i = x_i (for all i). With this choice, according to (18), the inequality (17) is valid as long as a and b̃² (that are now free parameters) satisfy the constraint,
1 ≤ max_{σ∈(0,1)} h(σ)   (20)
with σ ∈ (0, 1) (which maximizes h(σ)) such that both terms under the minimum in (19) are at least one. Let us introduce ε > 0 and set σ = 1/(1+ε). Then the smallest b̃² such that the assumptions of Theorem 3.2 may be in principle fulfilled reads
b̃² = (4/3)^{3/2} √(5π − 1) (1+ε).   (21)
Hence a has to be chosen large enough, namely such that
1 = (a/2) ( b̃² ((3−γ)/(γ−1)) C(γ/(γ−1))^{−1} (ε/(1+ε)) )^{(γ−1)/γ},
which due to (14) implies
ã² = (2^γ C(γ)/(3−γ)) ( (3/4)^{3/2} (5π−1)^{−1/2} (1/ε) ((γ−1)/(3−γ)) C(γ/(γ−1)) )^{γ−1}.   (22)
Since
lim_{γ→1⁺} C(γ) = √2,   lim_{γ→1⁺} ( ((γ−1)/(3−γ)) C(γ/(γ−1)) )^{γ−1} = 1,
we have (with a fixed ε)
ã²|_{γ→1⁺} = √2.
Then take any normalized wavefunction ψ, and multiply (17) by |ψ|² and integrate over all the electronic configurations, i.e., on ℝ^{2N}. Moreover, take ρ = ρ_ψ. We get at once,
E(ψ)≡⟨ψ,N∑i
provided ã² and b̃² satisfy (22) and (21), respectively. ∎
###### Remark 4.1.
In general the two integral terms in (9) are not comparable. If one takes a very rugged ρ, normalized to N, the gradient term may be very large while the other term can remain small. However, if one takes a smooth ρ, the gradient term can be very small, as we illustrate in the example below. Let us denote
L(ρ) = ∫_{ℝ²} ρ(x)^{3/2} dx
and
G(ρ) = ∫_{ℝ²} |∇ρ(x)^α|^γ dx,
with α given by (12). We will evaluate them for the normal distribution
ρ(|x|) = C e^{−A|x|²},
where A, C > 0. Some straightforward integration yields
L = 2π C^{3/2} / (3A),
while,
G = C^{αγ} π 2^γ (Aα)^{(γ/2)−1} Γ(1 + γ/2) γ^{−(γ/2)−1}.
With
∫_{ℝ²} ρ(|x|) dx = N,
i.e., A = πC/N, we have
G/L = 3 (√2/γ)^γ (π/N)^{γ/2} Γ(1 + γ/2) (3 − γ)^{(γ/2)−1},
i.e., in the “large number of particles” limit, the term G(ρ) becomes negligible with respect to L(ρ), for all γ > 1.
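A quick numerical sanity check of the closed forms above can be done with the following standalone Python sketch (the values γ = 1.2, C = 1 and N = 10 are arbitrary illustrative choices):

```python
import numpy as np
from math import gamma as Gamma, pi, sqrt
from scipy.integrate import quad

g = 1.2                               # gamma
alpha = (3 - g) / (2 * g)             # from (12): 1/alpha = 2*gamma/(3 - gamma)
C, N = 1.0, 10.0
A = pi * C / N                        # so that the integral of rho equals N

rho = lambda r: C * np.exp(-A * r**2)
L_num = quad(lambda r: 2*pi*r * rho(r)**1.5, 0, np.inf)[0]
grad = lambda r: 2*A*alpha*r * C**alpha * np.exp(-alpha*A*r**2)  # |grad rho^alpha|
G_num = quad(lambda r: 2*pi*r * grad(r)**g, 0, np.inf)[0]

L_cf = 2*pi*C**1.5 / (3*A)
G_cf = C**(alpha*g) * pi * 2**g * (A*alpha)**(g/2 - 1) * Gamma(1 + g/2) * g**(-(g/2) - 1)
ratio_cf = 3 * (sqrt(2)/g)**g * (pi/N)**(g/2) * Gamma(1 + g/2) * (3 - g)**(g/2 - 1)

print(L_num, L_cf)            # the two values in each pair should agree
print(G_num, G_cf)
print(G_num / L_num, ratio_cf)
```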
## Acknowledgments
It is a pleasure to dedicate this manuscript to Elliott Lieb on his eightieth birthday. The scientific achievements of Elliott Lieb have inspired generations of Mathematical Physicists. This work has been supported by the Iniciativa Científica Milenio, ICM (CHILE) project P07–027-F. The work of RB has also been supported by FONDECYT (Chile) Project 1100679. The work of MT has also been partially supported by the grant 201/09/0811 of the Czech Science Foundation.
## References
• [1] R. D. Benguria, Density Functional Theory, in Encyclopedia of Applied and Computational mathematics (B. Engquist, et al, Eds.), Springer-Verlag, Berlin, 2013.
• [2] R. D. Benguria, G. A. Bley, and M. Loss, An improved estimate on the indirect Coulomb Energy, International Journal of Quantum Chemistry 112, 1579–1584 (2012).
• [3] R. D. Benguria, M. Loss, and H. Siedentop, Stability of atoms and molecules in an ultrarelativistic Thomas–Fermi–Weizsäcker model, J. Math. Phys. 49, article 012302 (2008).
• [4] R. D. Benguria, P. Gallegos, and M. Tušek, New Estimate on the Two-Dimensional Indirect Coulomb Energy, Annales Henri Poincaré (2012).
• [5] G. K.–L. Chan and N. C. Handy, Optimized Lieb–Oxford bound for the exchange–correlation energy, Phys. Rev. A 59, 3075–3077 (1999).
• [6] P. A. M. Dirac, Note on Exchange Phenomena in the Thomas Atom, Mathematical Proceedings of the Cambridge Philosophical Society, 26, 376–385 (1930).
• [7] E. Engel, Zur relativischen Verallgemeinerung des TFDW modells, Ph.D. Thesis Johann Wolfgang Goethe Universität zu Frankfurt am Main, 1987.
• [8] E. Engel and R. M. Dreizler, Field–theoretical approach to a relativistic Thomas–Fermi–Weizsäcker model, Phys. Rev. A 35, 3607–3618 (1987).
• [9] E. Engel and R. M. Dreizler, Solution of the relativistic Thomas–Fermi–Dirac–Weizsäcker model for the case of neutral atoms and positive ions, Phys. Rev. A 38, 3909–3917 (1988).
• [10] C. Hainzl and R. Seiringer, Bounds on One–dimensional Exchange Energies with Applications to Lowest Landau Band Quantum Mechanics, Letters in Mathematical Physics 55, 133–142 (2001).
• [11] M. Levy and J. P. Perdew, Tight bound and convexity constraint on the exchange–correlation–energy functional in the low–density limit, and other formal tests of generalized–gradient approximations, Physical Review B 48, 11638–11645 (1993).
• [12] E. H. Lieb, The stability of matter, Rev. Mod. Phys. 48, 553–569 (1976).
• [13] E. H. Lieb, A Lower Bound for Coulomb Energies, Physics Letters 70 A, 444–446 (1979).
• [14] E. H. Lieb, Thomas–Fermi and related theories of Atoms and Molecules, Rev. Mod. Phys. 53, 603–641 (1981).
• [15] E. H. Lieb and M. Loss, Analysis, Second Edition, Graduate Texts in Mathematics, vol. 14, Amer. Math. Soc., RI, 2001.
• [16] E. H. Lieb and S. Oxford, Improved Lower Bound on the Indirect Coulomb Energy, International Journal of Quantum Chemistry 19, 427–439 (1981).
• [17] E. H. Lieb and R. Seiringer, The Stability of Matter in Quantum Mechanics, Cambridge University Press, Cambridge, UK, 2009.
• [18] E. H. Lieb, J. P. Solovej, and J. Yngvason, Ground States of Large Quantum Dots in Magnetic Fields, Physical Review B 51, 10646–10666 (1995).
• [19] E. H. Lieb and W. Thirring, Bound for the Kinetic Energy of Fermions which Proves the Stability of Matter, Phys. Rev. Lett. 35, 687–689 (1975); Errata 35, 1116 (1975).
• [20] J. D. Morgan III, Thomas–Fermi and other density functional theories, in Springer handbook of atomic, molecular, and optical physics, vol. 1, pp. 295–306, edited by G.W.F. Drake, Springer–Verlag, NY, 2006.
• [21] P.–T. Nam, F. Portmann, and J. P. Solovej, Asymptotics for two dimensional Atoms, preprint, 2011.
• [22] L. Onsager, Electrostatic Interactions of Molecules, J. Phys. Chem. 43 189–196 (1939). [Reprinted in The collected works of Lars Onsager (with commentary), World Scientific Series in 20 Century Physics, vol. 17, pp. 684–691, Edited by P.C. Hemmer, H. Holden and S. Kjelstrup Ratkje, World Scientific Pub., Singapore, 1996.]
• [23] J. P. Perdew, Unified Theory of Exchange and Correlation Beyond the Local Density Approximation, in Electronic Structure of Solids ’91, pp. 11–20, edited by P. Ziesche and H. Eschrig, Akademie Verlag, Berlin, 1991.
• [24] J. P. Perdew, K. Burke, and M. Ernzerhof, Generalized Gradient Approximation Made Simple, Phys. Rev. Letts. 77, 3865–3868 (1996).
• [25] E. Räsänen, S. Pittalis, K. Capelle, and C. R. Proetto, Lower bounds on the Exchange–Correlation Energy in Reduced Dimensions, Phys. Rev. Letts. 102, article 206406 (2009).
• [26] E. Räsänen, M. Seidl, and P. Gori–Giorgi, Strictly correlated uniform electron droplets, Phys. Rev. B 83, article 195111 (2011).
• [27] E. M. Stein and G. Weiss, Introduction to Fourier Analysis on Euclidean Spaces, Princeton University Press, Princeton, NJ, 1971.
• [28] A. Vela, V. Medel, and S. B. Trickey, Variable Lieb–Oxford bound satisfaction in a generalized gradient exchange–correlation functional, The Journal of Chemical Physics 130, 244103 (2009).
https://puzzling.stackexchange.com/questions/65800/hey-wake-up-look-at-this-grid-puzzle | # Hey! Wake Up! Look At This Grid Puzzle!
The grid is divided into rooms, though some are wibbly-wobbly weird-shaped. Hey, aren't these normally supposed to be blocks? What should the title of this puzzle really be?
B S G N I L H I A I
O L F F M T R F A R
I O R R C J N E S L
A T B B I Q C N A T
X A R O N O V I F L
O W F B U N Y H J S
W R S S J U I M X E
I O N T R K A W P R
V Q F P O D L T R U
E F B Z F I Z C N T
Special thanks to @Deusovi for testsolving this!
Wacky Waving
Step 1:
Solve the grid as a Heyawake.
Overlaying this with the letter grid reveals:
shift first in row by sum in rpplffct
Step 2:
Solving the grid again, this time as a Ripple effect.
Then shifting the first row by the sum of all numbers in that row reveals:
Z to Z course
Step 3:
Plotting a course from Z to Z gives:
http://tex.stackexchange.com/questions/76152/beamer-definition-list-overlay-uncover-definition-later-than-entry?answertab=active | # beamer definition-list overlay: uncover definition later than entry
For educational purposes, I would like to uncover the definitions of my description items only after all the items themselves have been uncovered, like so:
\documentclass{beamer}
\begin{document}
\begin{frame}
\begin{overprint}
\begin{description}
\item<1->[Spam]
\onslide<3->{Eggs}
\item<2->[Cheese]
\onslide<4->{Tofu}
\end{description}
\end{overprint}
\end{frame}
\end{document}
However, this has the effect that both Spam and Eggs appear only on slide 3. I would like that in slide 1, I have Spam (but without any content), and in slide 3, I have Spam and Eggs. How can I achieve this?
An alternative would be to use a tabular environment, but I'm interested to see if it can be achieved with a description.
Something like this works:
\documentclass{beamer}
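% \desctext below first reserves the line with an empty \mbox{} and then
% reveals its argument on a later overlay, driven by beamer's incremental
% specification <+(1)>; the exact slide on which each description appears
% depends on how many overlay steps precede the call.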
\newcommand\desctext[1]{%
\only<+(1)>{\mbox{}}%
\onslide<+(1)->{#1}}
\begin{document}
\begin{frame}
\begin{overprint}
\begin{description}
\item[Spam1]\desctext{Eggs1}
\item[Spam2]\desctext{Eggs2}
\item[Spam3]\desctext{Eggs3}
\item[Spam4]\desctext{Eggs4}
\end{description}
\end{overprint}
\end{frame}
\end{document}
For the ordering required (all labels first, then all descriptions), something like this can be done:
\documentclass{beamer}
\newcommand\desctext[2][]{%
\only<+(1)->{\mbox{}}%
\onslide<#1->{#2}}
\begin{document}
\begin{frame}
\begin{overprint}
\begin{description}
\item[Spam1]\desctext[5]{Eggs1}% add one to the number of items
\item[Spam2]\desctext[6]{Eggs2}
\item[Spam3]\desctext[7]{Eggs3}
\item[Spam4]\desctext[8]{Eggs4}
\end{description}
\end{overprint}
\end{frame}
\end{document}
Almost; in fact, I like to first have all the items appear, then all the descriptions. And I might want a finer-grained control in any case. But the \mbox{} does the trick to solve the problem. – gerrit Oct 10 '12 at 17:15
@gerrit I updated my answer with a posibble definition for this new ordering; perhaps it can be simplified a little to elliminate the optional argument. – Gonzalo Medina Oct 10 '12 at 17:29
Personally I use the itemize environment to achieve your goal.
The description environment seems to behave differently from the itemize and enumerate environments for "complicated" constructions.
\documentclass{beamer}
\begin{document}
\begin{frame}
\begin{overprint}
\begin{itemize}
\item<1-> Spam:
\onslide<2->{Eggs}
\end{itemize}
\end{overprint}
\end{frame}
\end{document}
https://www.gradesaver.com/textbooks/math/algebra/intermediate-algebra-for-college-students-7th-edition/chapter-1-section-1-7-scientific-notation-exercise-set-page-89/42 | ## Intermediate Algebra for College Students (7th Edition)
$6 \times 10^5$
Divide the corresponding parts. Then, put the expression in the form $a \times 10^n$, where $a$ is a number that is greater than or equal to 1 but less than 10. $=\dfrac{1.2}{2} \times \dfrac{10^4}{10^{-2}} \\=0.6 \times 10^{4-(-2)} \\=0.6 \times 10^{4+2} \\=0.6 \times 10^{6} \\=6 \times 10^{6-1} \\=6 \times 10^5$
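A quick numeric check of the division (a hypothetical snippet, just to confirm the arithmetic):

value = 1.2e4 / 2e-2
print(f"{value:.1e}")  # prints 6.0e+05, i.e., 6 x 10^5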
https://dml.cz/handle/10338.dmlcz/146752 | # Article
Full entry | PDF (0.2 MB)
Keywords:
Itô functional difference equation; stability of solutions; admissibility of spaces
Summary:
The admissibility of spaces for Itô functional difference equations is investigated by the method of modeling equations. The problem of space admissibility is closely connected with the initial data stability problem of solutions for Itô delay differential equations. For these equations the $p$-stability of initial data solutions is studied as a special case of admissibility of spaces for the corresponding Itô functional difference equation. In most cases, this approach seems to be more constructive and expedient than other traditional approaches. For certain equations sufficient conditions of solution stability are given in terms of parameters of those equations.
References:
[1] Andrianov, D. L.: Boundary value problems and control problems for linear difference systems with aftereffect. Russ. Math. 37 (1993), 1-12; translation from Izv. Vyssh. Uchebn. Zaved. Mat. {\it 5} (1993), 3-16. MR 1265616 | Zbl 0836.34087
[2] Azbelev, N. V., Simonov, P. M.: Stability of Differential Equations with Aftereffect. Stability and Control: Theory, Methods and Applications 20. Taylor and Francis, London (2003). MR 1965019 | Zbl 1049.34090
[3] Elaydi, S.: Periodicity and stability of linear Volterra difference systems. J. Math. Anal. Appl. 181 (1994), 483-492. DOI 10.1006/jmaa.1994.1037 | MR 1260872 | Zbl 0796.39004
[4] Elaydi, S., Zhang, S.: Stability and periodicity of difference equations with finite delay. Funkc. Ekvacioj, Ser. Int. 37 (1994), 401-413. MR 1311552 | Zbl 0819.39006
[5] Ikeda, N., Watanabe, S.: Stochastic Differential Equations and Diffusion Processes. North-Holland Mathematical Library 24. North-Holland Publishing, Amsterdam; Kodansha Ltd., Tokyo (1981). MR 0637061 | Zbl 0495.60005
[6] Kadiev, R.: Sufficient stability conditions for stochastic systems with aftereffect. Differ. Equations 30 (1994), 509-517; translation from Differ. Uravn. {\it 30} (1994), 555-564. MR 1299841 | Zbl 0824.93069
[7] Kadiev, R.: Stability of solutions of stochastic functional differential equations. Doctoral dissertation, DSc Habilitation thesis, Makhachkala (2000) (in Russian).
[8] Kadiev, R., Ponosov, A. V.: Stability of linear stochastic functional-differential equations under constantly acting perturbations. Differ. Equations 28 (1992), 173-179; translation from Differ. Uravn. {\it 28} (1992), 198-207. MR 1184920 | Zbl 0788.60071
[9] Kadiev, R., Ponosov, A. V.: Relations between stability and admissibility for stochastic linear functional differential equations. Func. Diff. Equ. 12 (2005), 209-244. MR 2137849 | Zbl 1093.34046
[10] Kadiev, R., Ponosov, A. V.: The $W$-transform in stability analysis for stochastic linear functional difference equations. J. Math. Anal. Appl. 389 (2012), 1239-1250. DOI 10.1016/j.jmaa.2012.01.003 | MR 2879292 | Zbl 1248.93168
https://rd.springer.com/article/10.1186/1687-1847-2011-63 | Advances in Difference Equations, 2011:63
# Stability criteria for linear Hamiltonian dynamic systems on time scales
Open Access
Research
## Abstract
In this article, we establish some stability criteria for the polar linear Hamiltonian dynamic system (1.1) on time scales by using Floquet theory and Lyapunov-type inequalities.
2000 Mathematics Subject Classification: 39A10.
### Keywords
Hamiltonian dynamic system; Lyapunov-type inequality; Floquet theory; stability; time scales
## 1 Introduction
A time scale is an arbitrary nonempty closed subset of the real numbers ℝ. We assume that 𝕋 is a time scale. For t ∈ 𝕋, the forward jump operator is defined by σ(t) := inf{s ∈ 𝕋 : s > t}, the backward jump operator is defined by ρ(t) := sup{s ∈ 𝕋 : s < t}, and the graininess function is defined by μ(t) = σ(t) - t. For other related basic concepts of time scales, we refer the reader to the original studies by Hilger [1, 2, 3], and for further details, we refer the reader to the books of Bohner and Peterson [4, 5] and Kaymakcalan et al. [6].
Definition 1.1. If there exists a positive number ω ∈ ℝ such that t + nω ∈ 𝕋 for all t ∈ 𝕋 and n ∈ ℤ, then we call 𝕋 a periodic time scale with period ω.
Suppose 𝕋 is an ω-periodic time scale with 0 ∈ 𝕋. Consider the polar linear Hamiltonian dynamic system on the time scale 𝕋
(1.1)
where α(t), β(t) and γ(t) are real-valued rd-continuous functions defined on 𝕋. Throughout this article, we always assume that
(1.2)
and
(1.3)
For the second-order linear dynamic equation
(1.4)
if we let y(t) = p(t)xΔ(t), then we can rewrite (1.4) as an equivalent polar linear Hamiltonian dynamic system of type (1.1):
(1.5)
where p(t) and q(t) are real-valued rd-continuous functions defined on 𝕋 with p(t) > 0.
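In the convention common to this literature (see, e.g., [2, 10]), a polar linear Hamiltonian dynamic system is written as

x^Δ(t) = α(t) x(σ(t)) + β(t) y(t),  y^Δ(t) = −γ(t) x(σ(t)) − α(t) y(t);

assuming this convention for (1.1), the substitution y(t) = p(t)x^Δ(t) corresponds to taking α(t) = 0, β(t) = 1/p(t) and γ(t) = q(t), which turns the second-order equation (p(t)x^Δ(t))^Δ + q(t)x(σ(t)) = 0 of type (1.4) into a system of type (1.1).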
Recently, Agarwal et al. [7], Jiang and Zhou [8], Wong et al. [9] and He et al. [10] established some Lyapunov-type inequalities for dynamic equations on time scales, which generalize the corresponding results on differential and difference equations. Lyapunov-type inequalities are very useful in oscillation theory, stability, disconjugacy, eigenvalue problems and numerous other applications in the theory of differential and difference equations. In particular, the stability criteria for the polar continuous and discrete Hamiltonian systems can be obtained by Lyapunov-type inequalities and Floquet theory, see [11, 12, 13, 14, 15, 16]. In 2000, Atici et al. [17] established the following stability criterion for the second-order linear dynamic equation (1.4):
Theorem 1.2 [17]. Assume p(t) > 0 for t ∈ 𝕋, and that
(1.6)
If
(1.7)
and
(1.8)
then equation (1.4) is stable, where
(1.9)
Here and in the sequel, system (1.1) or equation (1.4) is said to be unstable if all nontrivial solutions are unbounded on 𝕋; conditionally stable if there exists a nontrivial solution which is bounded on 𝕋; and stable if all solutions are bounded on 𝕋.
In this article, we will use the Floquet theory in [18, 19] and the Lyapunov-type inequalities in [10] to establish two stability criteria for system (1.1) and equation (1.4). Our main results are the following two theorems.
Theorem 1.3. Suppose (1.2) and (1.3) hold and
α(t + ω) = α(t),  β(t + ω) = β(t),  γ(t + ω) = γ(t),  t ∈ 𝕋.  (1.10)
Assume that there exists a non-negative rd-continuous function θ(t) defined on 𝕋 such that
(1.11)
(1.12)
and
(1.13)
Then system (1.1) is stable.
Theorem 1.4. Assume that (1.6) and (1.7) hold, and that
(1.14)
Then equation (1.4) is stable.
Remark 1.5. Clearly, condition (1.14) improves (1.8) by removing the term p₀.
We dwell on the three special cases as follows:
1. If 𝕋 = ℝ, system (1.1) takes the form:
(1.15)
In this case, the conditions (1.12) and (1.13) of Theorem 1.3 can be transformed into
(1.16)
and
(1.17)
Condition (1.17) is the same as (3.10) in [12], but (1.11) and (1.16) are better than (3.9) in [12] by taking θ(t) = |α(t)|/β(t). A better condition than (1.17) can be found in [14, 15].
2. If 𝕋 = ℤ, system (1.1) takes the form:
(1.18)
In this case, the conditions (1.11), (1.12), and (1.13) of Theorem 1.3 can be transformed into
(1.19)
(1.20)
and
(1.21)
Conditions (1.19), (1.20), and (1.21) are the same as (1.17), (1.18) and (1.19) in [16], i.e., Theorem 1.3 coincides with Theorem 3.4 in [16]. However, when p(n) and q(n) are ω-periodic functions defined on ℤ, the stability conditions
(1.22)
in Theorem 1.4 are better than the one
(1.23)
in [16, Corollary 3.4]. More related results on stability for discrete linear Hamiltonian systems can be found in [20, 21, 22, 23, 24].
3. Let δ > 0 and N ∈ {2, 3, 4, ...}. Set ω = δ + N, and define the time scale as follows:
(1.24)
Then system (1.1) takes the form:
(1.25)
and
(1.26)
In this case, the conditions (1.11), (1.12), and (1.13) of Theorem 1.3 can be transformed into
(1.27)
(1.28)
and
(1.29)
## 2 Proofs of theorems
Let u(t) = (x(t), y(t)), u^σ(t) = (x(σ(t)), y(t)) and
Then, we can rewrite (1.1) as a standard linear Hamiltonian dynamic system
(2.1)
Let u₁(t) = (x₁₀(t), y₁₀(t)) and u₂(t) = (x₂₀(t), y₂₀(t)) be two solutions of system (1.1) with (u₁(0), u₂(0)) = I₂. Denote by Φ(t) = (u₁(t), u₂(t)). Then Φ(t) is a fundamental matrix solution for (1.1) and satisfies Φ(0) = I₂. Suppose that α(t), β(t) and γ(t) are ω-periodic functions defined on 𝕋 (i.e. (1.10) holds); then Φ(t + ω) is also a fundamental matrix solution for (1.1) (see [18]). Therefore, it follows from the uniqueness of solutions of system (1.1) with a given initial condition (see [9, 18, 19]) that
Φ(t + ω) = Φ(t) Φ(ω),  t ∈ 𝕋.  (2.2)
From (1.1), we have
(2.3)
It follows that det Φ(t) = det Φ(0) = 1 for all t ∈ 𝕋. Let λ₁ and λ₂ be the roots (real or complex) of the characteristic equation of Φ(ω), det(Φ(ω) − λI₂) = 0,
which is equivalent to
λ² − Hλ + 1 = 0,  (2.4)
where H = x₁₀(ω) + y₂₀(ω) is the trace of Φ(ω). Hence
λ₁ + λ₂ = H,  λ₁λ₂ = 1.  (2.5)
Let v₁ = (c₁₁, c₂₁) and v₂ = (c₁₂, c₂₂) be the characteristic vectors associated with the characteristic roots λ₁ and λ₂ of Φ(ω), respectively, i.e.
Φ(ω)v_j = λ_j v_j,  j = 1, 2.  (2.6)
Let v_j(t) = Φ(t)v_j, j = 1, 2. Then it follows from (2.2) and (2.6) that
v_j(t + ω) = λ_j v_j(t),  t ∈ 𝕋,  j = 1, 2.  (2.7)
On the other hand, it follows from (2.1) that
(2.8)
This shows that v₁(t) and v₂(t) are two solutions of system (1.1) which satisfy (2.7). Hence, we obtain the following lemma.
Lemma 2.1. Let Φ(t) be a fundamental matrix solution for (1.1) with Φ(0) = I₂, and let λ₁ and λ₂ be the roots (real or complex) of the characteristic equation (2.4) of Φ(ω). Then system (1.1) has two solutions v₁(t) and v₂(t) which satisfy (2.7).
Similar to the continuous case, we have the following lemma.
Lemma 2.2. System (1.1) is unstable if |H| > 2, and stable if |H| < 2.
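Lemma 2.2 can be checked numerically in the continuous case 𝕋 = ℝ. The Python sketch below assumes the polar form quoted after (1.5) (the coefficient functions are arbitrary illustrative choices); it builds the fundamental matrix Φ(ω) column by column and tests |H| = |tr Φ(ω)| against 2:

```python
import numpy as np
from scipy.integrate import solve_ivp

omega = 2 * np.pi                       # period of the coefficients
a = lambda t: 0.0                       # alpha(t), illustrative choice
b = lambda t: 1.0                       # beta(t)
c = lambda t: 0.3 + 0.1 * np.cos(t)     # gamma(t), omega-periodic

def rhs(t, u):
    x, y = u
    return [a(t) * x + b(t) * y, -c(t) * x - a(t) * y]

# Columns of Phi(t) with Phi(0) = I2, integrated over one period.
cols = [solve_ivp(rhs, (0.0, omega), e, rtol=1e-10, atol=1e-12).y[:, -1]
        for e in ([1.0, 0.0], [0.0, 1.0])]
Phi = np.column_stack(cols)

H = np.trace(Phi)                       # H = lambda_1 + lambda_2, det Phi = 1
print("H =", H, "->", "stable" if abs(H) < 2 else "not guaranteed stable")
```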
Instead of the usual zero, we adopt the following concept of generalized zero on time scales.
Definition 2.3. A function f : 𝕋 → ℝ is said to have a generalized zero at t₀ ∈ 𝕋 provided either f(t₀) = 0 or f(t₀)f(σ(t₀)) < 0.
Lemma 2.4. [4] Assume f : 𝕋 → ℝ is differentiable at t ∈ 𝕋. If fΔ(t) exists, then f(σ(t)) = f(t) + μ(t)fΔ(t).
Lemma 2.5. [4] (Cauchy-Schwarz inequality). Let a, b ∈ 𝕋. For f, g ∈ C_rd we have
∫_a^b |f(t)g(t)| Δt ≤ { ∫_a^b f²(t) Δt · ∫_a^b g²(t) Δt }^{1/2}.
The above inequality can be an equality only if there exists a constant c such that f(t) = c g(t) for t ∈ [a, b].
Lemma 2.6. Let v1(t) = (x1(t), y1(t)) and v2(t) = (x2(t), y2(t)) be two solutions of system (1.1) which satisfy (2.7). Assume that (1.2), (1.3) and (1.10) hold, and that there exists a non-negative function θ(t) such that (1.11) and (1.12) hold. If H² ≥ 4, then both x1(t) and x2(t) have generalized zeros in [0, ω].
Proof. Since |H| ≥ 2, λ1 and λ2 are real numbers, and v1(t) and v2(t) are real functions. We only prove that x1(t) must have at least one generalized zero in [0, ω]. Otherwise, we may assume that x1(t) > 0 on [0, ω], and then (2.7) implies that x1(t) > 0 for all t. Define z(t) := y1(t)/x1(t). Due to (2.7), z(t) is ω-periodic, i.e. z(t + ω) = z(t). From (1.1), we have
(2.9)
From the first equation of (1.1), and using Lemma 2.4, we have
(2.10)
Since x1(t) > 0 for all t, it follows from (1.2) and (2.10) that
(2.11)
which yields
(2.12)
Substituting (2.12) into (2.9), we obtain
(2.13)
If β(t) > 0, together with (1.11), it is easy to verify that
(2.14)
If β(t) = 0, it follows from (1.11) that α(t) = 0, hence
(2.15)
Combining (2.14) with (2.15), we have
(2.16)
Substituting (2.16) into (2.13), we obtain
(2.17)
Integrating equation (2.17) from 0 to ω, and noticing that z(t) is ω-periodic, we obtain an inequality that contradicts (1.12). □
Lemma 2.7. Let v1(t) = (x1(t), y1(t)) and v2(t) = (x2(t), y2(t)) be two solutions of system (1.1) which satisfy (2.7). Assume that
(2.18)
(2.19)
and
(2.20)
If H² ≥ 4, then both x1(t) and x2(t) have generalized zeros in [0, ω].
Proof. Except for (1.12), conditions (2.18) and (2.19) imply that all the assumptions of Lemma 2.6 hold. In view of the proof of Lemma 2.6, it is sufficient to derive an inequality which contradicts (2.20) instead of (1.12). From (2.11), (2.13), and (2.18), we have
(2.21)
and
(2.22)
Since z(t) is ω-periodic and γ(t) ≢ 0, it follows from (2.22) that z²(t) ≢ 0 on [0, ω]. Integrating equation (2.22) from 0 to ω, we obtain an inequality that contradicts (2.20). □
Lemma 2.8. [10] Suppose that (1.2) and (1.3) hold and let a, b ∈ 𝕋 with σ(a) ≤ b. Assume (1.1) has a real solution (x(t), y(t)) such that x(t) has a generalized zero at the end-point a and (x(b), y(b)) = (κ1x(a), κ2y(a)) with 0 < κ1κ2 ≤ 1 and x(t) ≢ 0 on [a, b]. Then one has the following inequality
(2.23)
Lemma 2.9. Suppose that (2.18) holds and let a, b ∈ 𝕋 with σ(a) ≤ b. Assume (1.1) has a real solution (x(t), y(t)) such that x(t) has a generalized zero at the end-point a and (x(b), y(b)) = (κx(a), κy(a)) with 0 < κ² ≤ 1 and x(t) is not identically zero on [a, b]. Then one has the following inequality
(2.24)
Proof. In view of the proof of [10, Theorem 3.5] (see (3.8), (3.29)-(3.34) in [10]), we have
(2.25)
(2.26)
(2.27)
and
(2.28)
where ξ ∈ [0, 1), and
(2.29)
Let |x(τ*)| = max{|x(τ)| : σ(a) ≤ τ ≤ b}. There are three possible cases:
(1) y(t) ≡ y(a) ≠ 0;
(2) y(t) ≢ y(a), |y(t)| ≡ |y(a)|;
(3) |y(t)| ≢ |y(a)|.
Case (1). In this case, κ = 1. It follows from (2.25) and (2.26) that
which contradicts the assumption that x(b) = κx(a) = x(a).
Case (2). In this case, we have
(2.30)
instead of (2.28). Applying Lemma 2.5 and using (2.27) and (2.30), we have
(2.31)
Dividing the latter inequality of (2.31) by |x(τ*)|, we obtain
(2.32)
Case (3). In this case, applying Lemma 2.5 and using (2.27) and (2.28), we have
(2.33)
Dividing the latter inequality of (2.33) by |x(τ*)|, we also obtain (2.32). It is easy to verify that
Substituting this into (2.32), we obtain (2.24). □
Proof of Theorem 1.3. If |H| ≥ 2, then λ1 and λ2 are real numbers with λ1λ2 = 1, so one of them, say λ1, satisfies 0 < |λ1| ≤ 1. By Lemma 2.6, system (1.1) has a non-zero solution v1(t) = (x1(t), y1(t)) such that (2.7) holds and x1(t) has a generalized zero in [0, ω], say t1. It follows from (2.7) that (x1(t1 + ω), y1(t1 + ω)) = λ1(x1(t1), y1(t1)). Applying Lemma 2.8 to the solution (x1(t), y1(t)) with a = t1, b = t1 + ω and κ1 = κ2 = λ1, we get
(2.34)
Next, noticing that for any ω-periodic function f(t) on 𝕋, the equality
holds for all t. It follows from (2.34) that
(2.35)
which contradicts condition (1.13). Thus |H| < 2 and hence system (1.1) is stable. □
Proof of Theorem 1.4. By using Lemmas 2.7 and 2.9 instead of Lemmas 2.6 and 2.8, respectively, we can prove Theorem 1.4 in a similar fashion as the proof of Theorem 1.3. So, we omit the proof here. □
## Notes
### Acknowledgements
The authors thank the referees for valuable comments and suggestions. This project is supported by Scientific Research Fund of Hunan Provincial Education Department (No. 11A095) and partially supported by the NNSF (No: 11171351) of China.
### References
1. Hilger S: Ein Maßkettenkalkül mit Anwendung auf Zentrumsmannigfaltigkeiten. Ph.D. Thesis, Universität Würzburg (in German); 1988.
2. Hilger S: Analysis on measure chains: a unified approach to continuous and discrete calculus. Results Math 1990, 18: 18-56.
3. Hilger S: Differential and difference calculus: unified. Nonlinear Anal 1997, 30: 2683-2694. 10.1016/S0362-546X(96)00204-0
4. Bohner M, Peterson A: Dynamic Equations on Time Scales: An Introduction with Applications. Birkhäuser, Boston; 2001.
5. Bohner M, Peterson A: Advances in Dynamic Equations on Time Scales. Birkhäuser Boston, Inc., Boston, MA; 2003.
6. Kaymakcalan B, Lakshmikantham V, Sivasundaram S: Dynamic Systems on Measure Chains. Kluwer Academic Publishers, Dordrecht; 1996.
7. Agarwal R, Bohner M, Rehak P: Half-linear dynamic equations. Nonlinear Anal Appl 2003, 1: 1-56.
8. Jiang LQ, Zhou Z: Lyapunov inequality for linear Hamiltonian systems on time scales. J Math Anal Appl 2005, 310: 579-593.
9. Wong F, Yu S, Yeh C, Lian W: Lyapunov's inequality on time scales. Appl Math Lett 2006, 19: 1293-1299. 10.1016/j.aml.2005.06.006
10. He X, Zhang Q, Tang XH: On inequalities of Lyapunov for linear Hamiltonian systems on time scales. J Math Anal Appl 2011, 381: 695-705. 10.1016/j.jmaa.2011.03.036
11. Guseinov GSh, Kaymakcalan B: Lyapunov inequalities for discrete linear Hamiltonian systems. Comput Math Appl 2003, 45: 1399-1416. 10.1016/S0898-1221(03)00095-6
12. Guseinov GSh, Zafer A: Stability criteria for linear periodic impulsive Hamiltonian systems. J Math Anal Appl 2007, 335: 1195-1206. 10.1016/j.jmaa.2007.01.095
13. Krein MG: Foundations of the theory of λ-zones of stability of a canonical system of linear differential equations with periodic coefficients. In Memory of A.A. Andronov, Izdat. Akad. Nauk SSSR, Moscow; 1955: 413-498.
14. Wang X: Stability criteria for linear periodic Hamiltonian systems. J Math Anal Appl 2010, 367: 329-336. 10.1016/j.jmaa.2010.01.027
15. Tang XH, Zhang M: Lyapunov inequalities and stability for linear Hamiltonian systems. J Diff Equ 2012, 252: 358-381. 10.1016/j.jde.2011.08.002
16. Zhang Q, Tang XH: Lyapunov inequalities and stability for discrete linear Hamiltonian systems. Appl Math Comput 2011, 218: 574-582. 10.1016/j.amc.2011.05.101
17. Atici FM, Guseinov GSh, Kaymakcalan B: On Lyapunov inequality in stability theory for Hill's equation on time scales. J Inequal Appl 2000, 5: 603-620.
18. Ahlbrandt CD, Ridenhour J: Floquet theory for time scales and Putzer representations of matrix logarithms. J Diff Equ Appl 2003, 9: 77-92.
19. DaCunha JJ: Lyapunov stability and Floquet theory for nonautonomous linear dynamic systems on time scales. Ph.D. Dissertation, Baylor University, Waco, TX, USA; 2004.
20. Halanay A, Răsvan Vl: Stability and boundary value problems for discrete-time linear Hamiltonian systems. In Dyn Syst Appl, Volume 8. Edited by Agarwal RP, Bohner M. Special Issue on "Discrete and Continuous Hamiltonian Systems"; 1999: 439-459.
21. Răsvan Vl: Stability zones for discrete time Hamiltonian systems. Archivum Mathematicum Tomus 2000, 36: 563-573 (CDDE 2000 issue).
22. Răsvan Vl: Krein-type results for λ-zones of stability in the discrete-time case for 2nd order Hamiltonian systems. Folia FSN Universitatis Masarykianae Brunensis, Mathematica 2002, 10: 1-12 (CDDE 2002 issue).
23. Răsvan Vl: On central λ-stability zone for linear discrete time Hamiltonian systems. Proc. Fourth Int. Conf. on Dynamical Systems and Differential Equations, Wilmington, NC; 2002.
24. Răsvan Vl: On stability zones for discrete time periodic linear Hamiltonian systems. Adv Diff Equ 2006, Article ID 80757: 1-13. doi:10.1155/ADE/2006/80757.
http://gmatclub.com/forum/a-certain-it-department-of-fewer-than-15-people-hires-coders-137203.html | Find all School-related info fast with the new School-Specific MBA Forum
It is currently 27 May 2016, 08:28
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Events & Promotions
###### Events & Promotions in June
Open Detailed Calendar
# A certain IT department of fewer than 15 people hires coders
Posted by Alterego (Intern) on 12 Aug 2012:
A certain IT department of fewer than 15 people hires coders and systems administrators. Coders are paid $55,000 per year on average, while system administrators are paid an average yearly salary of $45,000. What is the ratio of coders to systems administrators?
(1) If two of the coders were made systems administrators instead, the yearly payroll for the IT department would be $535,000.
(2) If systems administrators' salaries were reduced by one-third, and coders' salaries were increased to $58,000, the department would save $57,000 in yearly payroll.
[Reveal] Spoiler: OA

Reply by EvaJager (Director) on 13 Aug 2012, quoting the question:
Let's drop the thousands and work with smaller numbers.
Denote by A the number of administrators and by C that of the coders.
(1) We can write the following equation:
$$55(C - 2) + 45(A + 2) = 535$$, which can be written as $$11C + 9A = 111$$.
We have to keep in mind that $$A + C < 15$$ and that A and C are positive integers.
Checking for possible solutions under the given constraints, we find a single pair: $$A=5$$ and $$C=6.$$
Sufficient.
(2) Now we can write $$55C + 45A - (58C+30A) = 57.$$
We have to solve the equation $$5A-C=19.$$ Again, A and C must be positive integers and $$A + C < 15$$.
We find a single admissible pair, $$A = 5$$ and $$C = 6$$, only if we assume that there is more than one coder.
Otherwise, we could also have the solution $$A = 4$$ and $$C = 1$$. As the question talks about administrators and coders (both plural), it is reasonable to assume that there is more than one of each type.
Sufficient.
Answer: D
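As a quick sanity check of this enumeration, here is a small brute-force script (a sketch added here, not from the original thread; C and A are the coder and administrator counts as defined above):

```python
# Brute-force check of both statements. Salaries are in thousands of dollars.

def statement_1(C, A):
    # Two coders become administrators; total payroll must be $535k.
    return 55 * (C - 2) + 45 * (A + 2) == 535

def statement_2(C, A):
    # Admin salaries cut by one-third (45 -> 30), coder salaries raised to 58;
    # the department must save exactly $57k.
    return (55 * C + 45 * A) - (58 * C + 30 * A) == 57

for label, test in (("(1)", statement_1), ("(2)", statement_2)):
    solutions = [(C, A) for C in range(1, 15) for A in range(1, 15)
                 if C + A < 15 and test(C, A)]
    print(label, solutions)

# Prints: (1) [(6, 5)]   and   (2) [(1, 4), (6, 5)]
# Statement (2) becomes sufficient only after ruling out the C = 1 case.
```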
Reply by maaadhu (Manager) on 02 Jun 2013, quoting EvaJager's solution above:
Why is it reasonable to assume there is more than one coder?

Reply by Bunuel (Math Expert) on 02 Jun 2013, quoting maaadhu's question:
(2) uses plural wording ("... coders' salaries were ..."), so we can assume that there is more than 1 coder. Though I agree that the GMAT would probably have made this clearer.
Hope it's clear.
Reply by maaadhu on 03 Jun 2013:
Thank you, Bunuel, for the clarification.
Reply by a Senior Manager (joined 21 Jan 2010) on 03 Jun 2013, quoting the question:
Just think about the problem before lifting the pen. The department has coders (x) + system admins (y).
(1) 55(x - 2) + 45(y + 2) = 535 (working in thousands), i.e. 11x + 9y = 111. After that, use brute force to determine the values of x and y. Remember x + y < 15. You will get a unique solution.
(2) 55,000x + 45,000y - (30,000y + 58,000x) = 57,000. Use brute force to get the answer. It will be unique.

Reply by saleem1992 (Intern) on 31 Jan 2015:
In this case it is coders + sys administrators < 15. How is it possible to test a wide range of values? At least if it were c + s = 15 we would have limited options, but since the situation here is c + s < 15, there can be too many options.

Reply by a Manager (joined 22 Jan 2014) on 31 Jan 2015, quoting EvaJager's solution above:
Shouldn't we be multiplying the LHS by 12? (considering that the figures given are avg salaries and RHS is denoting yearly payout)
Bunuel
Reply by Rich Cohen (EMPOWERgmat Instructor) on 31 Jan 2015:
Hi saleem1992,
We have two pieces of information that limit the possibilities: the number of each type of worker AND either the total salaries (in Fact 1) or the money saved (in Fact 2).
As an example of how the options are limited, consider what we're told in Fact 1.
Fact 1: ... the yearly payroll for the IT department would be $535,000.
$535,000 is a rather specific number, and it's made up of a certain number of $55,000 employees and $45,000 employees. While it would take a bit of 'brute force' work, it wouldn't take that much to eliminate the wrong answers.
For example:
10 Coders = 10(55k) = 550,000 which is TOO MUCH MONEY. Therefore, the number of coders MUST be less than 10.
9 Coders = 9(55k) = 495,000, leaving 40,000 left over, but that's not the right amount of money for another employee. The number of coders CANNOT = 9 either.
Etc.
Working down, you'll eventually find that there's just one set of values that fits these parameters. Thus, Fact 1 is SUFFICIENT.
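In code, that working-down check looks like this (a sketch added here, not from the original post):

```python
# After the switch in Fact 1, the $535k payroll is a mix of $55k and $45k salaries.
for coders in range(9, -1, -1):            # 10 coders would already cost $550k
    remainder = 535 - 55 * coders
    if remainder >= 0 and remainder % 45 == 0:
        admins = remainder // 45
        if 0 < coders + admins < 15:
            print(coders, "coders and", admins, "admins after the switch")

# Prints a single combination: 4 coders and 7 admins after the switch,
# i.e. C = 6 coders and A = 5 administrators before it.
```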
GMAT assassins aren't born, they're made,
Rich
Reply by earnit (Senior Manager) on 07 Apr 2015, quoting the question:
I have a problem with the wording of the question.
Just want to confirm: is it just me, or does the fact that the opening statement says the IT Dept "hires" coders and systems administrators make it seem like the total number = already-present IT people (fewer than 15) + coders and system admins?
That interpretation could be avoided, in my view, if the question did not use "hires".
Please let me know if I am missing something in the language of the original question.
Reply by VeritasPrepKarishma (Veritas Prep GMAT Instructor) on 07 Apr 2015, quoting earnit's question:
Yes, the word "hires" could have been avoided, since it hints at additions to the total number, i.e. 15. But the rest of the question clarifies that "hires" means the team has "coders" and "system administrators". Mind you, even on the actual GMAT the wording will not always be what we want. Overall, a question should clarify and not leave any ambiguity; this question does a reasonable job of that and is hence acceptable.
Reply by MeghaP (Manager) on 20 May 2016, quoting the exchange above:
I got to the equations but wasn't sure how to solve them. I have come across such questions before, where one equation with two variables can be solved. Is there a way of doing this? I find it extremely tough to just put in numbers. How can we be sure that there is a unique solution?

Reply by VeritasPrepKarishma on 22 May 2016, quoting MeghaP:
Case 2 in this post discusses the concept: http://www.veritasprep.com/blog/2011/06 ... -of-thumb/
It shows you how to get all integer solutions given an equation in two variables. Sometimes you will get one unique solution to the equation; the post tells you how to find it.
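As a worked illustration of that idea on this problem (an addition, not from the thread): statement (2) reduces to 5A - C = 19, so the integer solutions are C = 5A - 19 with A ≥ 4 (to keep C positive). The constraint A + C < 15 becomes 6A - 19 < 15, i.e. A ≤ 5, leaving only A = 4 (C = 1) and A = 5 (C = 6).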
http://clay6.com/qa/25199/-p-rightarrow-q-is-equivalent-to | # $^{\sim}(p \Rightarrow q)$ is equivalent to
$\begin{array}{ll} (A)\; p \wedge q & \quad (B)\; {}^{\sim}p \vee q \\ (C)\; {}^{\sim}p \wedge {}^{\sim}q & \quad (D)\; p \wedge {}^{\sim}q \end{array}$
p | q | ~q | p ⇒ q | ~(p ⇒ q) | p ∧ ~q
--|---|----|-------|----------|-------
T | T | F  | T     | F        | F
T | F | T  | F     | T        | T
F | T | F  | T     | F        | F
F | F | T  | T     | F        | F
Ans : (D)
So, $^{\sim}(p \Rightarrow q)$ is equivalent to $p \wedge {}^{\sim}q$.
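The table agrees with the algebraic derivation: since $p \Rightarrow q \equiv {}^{\sim}p \vee q$, De Morgan's law gives
$$^{\sim}(p \Rightarrow q) \equiv {}^{\sim}({}^{\sim}p \vee q) \equiv p \wedge {}^{\sim}q.$$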
http://scott.sherrillmix.com/blog/tag/latex/ | ## Displaying Code in LaTeX
gioby of Bioinfo Blog! (an interesting read, by the way) left a comment asking about displaying code in LaTeX documents. I've sort of been kludging around using \hspace's and \textcolor, but I've always meant to figure out the right way to do things, so this seemed like a good chance to figure out how to do it right.
LaTeX tends to ignore white space. This is good when you’re writing papers but not so good when you’re trying to show code where white space is an essential part (e.g. Python). Luckily there’s a builtin verbatim environment in LaTeX that is equivalent to html’s <pre>. So something like the following should preserve white space.
\begin{verbatim}
for i in range(1, 5):
    print i
else:
    print "The for loop is over"
\end{verbatim}
Unfortunately, you can’t use any normal LaTeX commands inside verbatim (since they’re displayed verbatim). But luckily there a handy package called fancyvrb that fixes this (the color package is also useful for adding colors). For example, if you wanted to highlight “for” in the above code, you can use the Verbatim (note the capital V) environment from fancyvrb:
\newcommand\codeHighlight[1]{\textcolor[rgb]{1,0,0}{\textbf{#1}}}
\begin{Verbatim}[commandchars=\\\{\}]
\codeHighlight{for} i in range(1, 5):
    print i
else:
    print "The for loop is over"
\end{Verbatim}
If you really want to get fancy, the Pygments package in Python will output syntax highlighted latex code with a command like: pygmentize -f latex -O full test.py >py.tex The LaTeX it outputs is a bit hard to read but it’s not too bad (it helped me figure out the fancyvrb package) and it does make nice syntax highlighted output.
Here’s an example LaTeX file with the three examples above and the pdf it generates if you’re curious.
## Interesting Links (08-01-23)
I think it’s supposed to be some sort of blogging shortcut but I kind of like when a blog I read posts interesting links they’ve found recently. So I thought I would start doing a few posts like that of my own. I’ll gather up links I think are especially interesting and once I get five or so dump them in to a post. Feel free to read or delete as you please.
MESSENGER Images of Mercury
The Messenger space probe passed by Mercury recently. I hadn’t realized that most of Mercury has never been seen. It’s pretty cool that we get to see images of a new world almost as quickly as the scientists working on it.
That Stupid Bigfoot on Mars
This one has been going around the internet. If you missed it, there’s a rock on Mars near one of the rovers that looks like Bigfoot. The “Bigfoot” thing is pretty silly (although Sasquatch was the first thing I thought when I saw the picture) but that post shows the really cool and huge panorama it came from.
Donald Knuth and LaTeX
I like LaTeX so I found this bit of history about Donald Knuth coming up with the software pretty interesting.
Bioluminescence and Squid Video
I just found out about all these TED talks being online. Pretty handy when you don’t have a TV. This one is about five minutes long and has a bunch of videos of squid, octopuses and things that glow in the depths.
Pulgasari: The North Korean Godzilla
This is another one resulting from not having a TV. Definitely a less than B grade monster movie but it does provide a good comparison to Cloverfield. The story of Kim Jong-Il kidnapping the director and his wife and forcing them to make the thing sounds like a better story than the movie itself (not that it’d take much). For the impatient, there’s decent monster bits around 27:30, 47:30 and 1:03:00.
Soldering Tiny Components
This is a great video tutorial on how to solder tiny electronic components. Really nicely filmed and very closeup. You can really see what’s going on and the guy sure makes it look easy.
NerdKits
A nice idea by a couple college students to sell kits for learning how to use microcontrollers. They “guarantee that you’ll get your first program written and running”. Unfortunately they don’t have a USB version yet. Sort of a homegrown alternative to EasyPic4.
That’s it for now. That was pretty quick and fun to put together so I’ll probably do some more of these in the future. I hope something on there is interesting for other people too.
## LaTeX: Document Creation Alternative
I’ve been using LaTeX a lot recently and I thought I would write a quick post since I wish I would have found out about it earlier. LaTeX is a really powerful document (pdf and others) creation program. It’s sort of like HTML and CSS for paper publishing. As a first warning, LaTeX, like HTML, is not WYSIWYG. You have to code in things like \textbf{This will be bold}. This takes some getting used to after programs like MS Word but after using LaTeX, I really can’t stand working in Word for anything longer than a page or two. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5541031956672668, "perplexity": 1447.1233290307239}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583510754.1/warc/CC-MAIN-20181016134654-20181016160154-00550.warc.gz"} |
http://www.wantedvibes.co.uk/nse9p/doctor-crossword-clue-6db064

How many lone pairs are in a chlorite ion (ClO2-)?

One way to identify lone pairs is to draw a Lewis structure: the number of lone-pair electrons added to the number of bonding electrons equals the number of valence electrons. Draw the Lewis structure for ClO2-, including all formal charges. In the structure, Cl is double-bonded to one O and single-bonded to the other O; the single negative charge corresponds to one extra electron on the ion. The Cl atom carries 2 lone pairs, the double-bonded O carries 2 lone pairs (4 nonbonding electrons), and the single-bonded O carries 3 lone pairs (6 nonbonding electrons) along with the -1 formal charge.

With two bonding pairs and two lone pairs on chlorine, the structure is designated AX2E2, with four electron pairs in total, so the chlorine is sp3 hybridised: the electron-pair geometry is tetrahedral (ideal angle 109.5 degrees) and the molecular shape is bent (angular), with a bond angle of about 111 degrees because lone-pair repulsion compresses the ideal tetrahedral angle.

Neutral chlorine dioxide, ClO2, by contrast, has 19 valence electrons and is a free radical: with an odd number of electrons, one electron must be unpaired, which makes the compound paramagnetic. For its hybridisation, ClO2 has 2 sigma bonds, 1 lone pair, 2 pi bonds and 1 odd electron; since hybridisation equals the number of sigma bonds plus lone pairs, and the odd electron is counted as a lone pair (as in NO2), the hybridisation is sp3.

Other fragments recoverable from the same scraped page, for related species:
- CO2: no lone pairs on the central C atom; with two double bonds (4 bonding pairs) and 4 lone pairs on the two oxygens, the molecule is sp hybridised and linear, with 180-degree bond angles.
- ClO4-: 32 valence electrons are available for the Lewis structure; with four single bonds (8 electrons), the remaining 24 electrons form 12 lone pairs on the oxygens. Chlorine is the least electronegative atom and goes at the center.
- BrF4-: the central Br carries 2 lone pairs.
- SF4: the central sulfur atom has four ligands (coordination number four) plus one lone pair, so its steric number is five.
- I3-: the ion is linear and symmetrical.
- XeF4: does not follow the octet rule.
- SO4^2- (drawn with two S=O double bonds to reduce the formal charges on oxygen to zero): 6 bonding pairs and 10 lone pairs.
- Lone pairs can contribute to a molecule's dipole moment: NH3 has a dipole moment of 1.47 D. Because the electronegativity of nitrogen (3.04) is greater than that of hydrogen (2.2), the N-H bonds are polar, with a net negative charge on the nitrogen atom and smaller net positive charges on the hydrogen atoms.
- Nitric oxide, NO, is another example of an odd-electron molecule; it is produced in internal combustion engines when oxygen and nitrogen react at high temperatures.
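A quick electron count behind the ClO2- tally above (a worked check added here): the ion has $7 + 2 \times 6 + 1 = 20$ valence electrons; one double bond and one single bond account for $3 \times 2 = 6$ of them, leaving $14$ electrons, i.e. $7$ lone pairs, distributed as 2 on Cl, 2 on the double-bonded O and 3 on the single-bonded O.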
http://edoc.mpg.de/display.epl?mode=doc&id=715988&col=61&grp=3972 | Institute: MPI für Physik / Collection: YB 2016
ID: 715988.0, MPI für Physik / YB 2016
Search for $B_s^0 \to \gamma\gamma$ and a measurement of the branching fraction for $B_s^0 \to \phi\gamma$
Authors:
Date of Publication (YYYY-MM-DD): 2015
Title of Journal: Physical Review D
Journal Abbrev.: Phys. Rev. D
Issue / Number: 91
Start Page: 011101
Audience: Not Specified
Intended Educational Use: No
Abstract / Description: We search for the decay $B_s^0 \to \gamma\gamma$ and measure the branching fraction for $B_s^0 \to \phi\gamma$ using 121.4 fb$^{-1}$ of data collected at the $\Upsilon(5S)$ resonance with the Belle detector at the KEKB asymmetric-energy $e^+e^-$ collider. The $B_s^0 \to \phi\gamma$ branching fraction is measured to be $(3.6 \pm 0.5\,(\mathrm{stat.}) \pm 0.3\,(\mathrm{syst.}) \pm 0.6\,(f_s)) \times 10^{-5}$, where $f_s$ is the fraction of $B_s^{(*)}\bar{B}_s^{(*)}$ in $b\bar{b}$ events. Our result is in good agreement with the theoretical predictions as well as with a recent measurement from LHCb. We observe no statistically significant signal for the decay $B_s^0 \to \gamma\gamma$ and set a $90\%$ confidence-level upper limit on its branching fraction at $3.1 \times 10^{-6}$. This constitutes a significant improvement over the previous result.
Classification / Thesaurus: Belle II
Comment of the Author/Creator: 6 pages, 3 figures
External Publication Status: published
Document Type: Article
Communicated by: MPI für Physik
Affiliations:
Identifiers:
Full Text: arxiv:1411.7771.pdf
https://homework.cpm.org/category/CON_FOUND/textbook/mc1/chapter/7/lesson/7.1.2/problem/7-21 | ### Problem 7-21
Use a Giant One to change each of the following fractions to a number written as a fraction over $100$. Then write each portion as a percent.
1. $\frac{3}{20}$

$20\left(5\right) = 100$, so $\frac{3}{20} \cdot \frac{5}{5} = \frac{15}{100} = 15\%$
2. $\frac{3}{40}$

What do you multiply by $40$ to get $100$? Use that number in the Giant One.

$\frac{3}{40} \cdot \frac{2.5}{2.5} = \frac{7.5}{100} = 7.5\%$
https://asmedigitalcollection.asme.org/fluidsengineering/article-abstract/132/1/011303/467228/Liquid-Sheet-Breakup-in-Gas-Centered-Swirl-Coaxial?redirectedFrom=fulltext | The study deals with the breakup behavior of swirling liquid sheets discharging from gas-centered swirl coaxial atomizers, with attention focused on understanding the role of the central gas jet in the liquid sheet breakup. Cold-flow experiments on the liquid sheet breakup were carried out with custom-fabricated gas-centered swirl coaxial atomizers using water and air as experimental fluids. Photographic techniques were employed to capture the flow behavior of liquid sheets at different flow conditions. Quantitative variation in the breakup length of the liquid sheet and the spray width was obtained from measurements deduced from the images of liquid sheets. The sheet breakup process is significantly influenced by the central air jet. It is observed that low-inertia liquid sheets are more vulnerable to the presence of the central air jet and develop shorter breakup lengths at smaller values of the air jet Reynolds number $Re_g$. High-inertia liquid sheets ignore the presence of the central air jet at smaller values of $Re_g$ and eventually develop shorter breakup lengths at higher values of $Re_g$. The experimental evidence suggests that the central air jet causes corrugations on the liquid sheet surface, which may promote the production of thick liquid ligaments from the sheet surface. The level of surface corrugations on the liquid sheet increases with increasing $Re_g$. Qualitative analysis of experimental observations reveals that the entrainment of air between the inner surface of the liquid sheet and the central air jet is the primary trigger for the sheet breakup.
https://proofwiki.org/wiki/Completion_of_Normed_Division_Ring | # Completion of Normed Division Ring
## Theorem
Let $\struct {R, \norm {\, \cdot \,} }$ be a normed division ring.
Then:
$\struct {R, \norm {\, \cdot \,} }$ has a normed division ring completion $\struct {R', \norm {\, \cdot \,}' }$
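Two classical instances, noted here editorially for orientation (they are not part of the ProofWiki page): the completion of $\struct {\Q, \size {\, \cdot \,} }$, the rationals with the usual absolute value, is $\struct {\R, \size {\, \cdot \,} }$; and the completion of $\struct {\Q, \norm {\, \cdot \,}_p }$, where $\norm {\, \cdot \,}_p$ is the $p$-adic norm, is the field $\Q_p$ of $p$-adic numbers.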
## Proof
Let $d$ be the metric induced by $\struct {R, \norm {\, \cdot \,} }$.
Let $\mathcal C$ be the ring of Cauchy sequences over $R$.
Let $\mathcal N = \set {\sequence {x_n}: \displaystyle \lim_{n \mathop \to \infty} x_n = 0_R}$.
Let $\norm {\, \cdot \,}:\mathcal C \, \big / \mathcal N \to \R_{\ge 0}$ be the norm on the quotient ring $\mathcal C \, \big / \mathcal N$ defined by:
$\displaystyle \forall \sequence {x_n} + \mathcal N: \norm {\sequence {x_n} + \mathcal N } = \lim_{n \mathop \to \infty} \norm{x_n}$
Let $d'$ be the metric induced by $\struct {\mathcal C \, \big / \mathcal N, \norm {\, \cdot \,} }$.
By Quotient Ring of Cauchy Sequences is Normed Division Ring, $\struct {\mathcal C \, \big / \mathcal N, \norm {\, \cdot \,} }$ is a normed division ring.
By Quotient of Cauchy Sequences is Metric Completion, $\struct {\mathcal C \, \big / \mathcal N, d' }$ is the metric completion of $\struct {R, d}$.
Let $\phi: R \to \mathcal C \, \big / \mathcal N$ be the mapping from $R$ to the quotient ring $\mathcal C \,\big / \mathcal N$ defined by:
$\forall a \in R: \map \phi a = \tuple {a, a, a, \ldots} + \mathcal N$
where $\tuple {a, a, a, \ldots} + \mathcal N$ is the left coset in $\mathcal C \, \big / \mathcal N$ that contains the constant sequence $\tuple {a, a, a, \ldots}$.
By Quotient of Cauchy Sequences is Metric Completion, $\map \phi R$ is a dense subset of $\struct {\mathcal C \, \big / \mathcal N, d' }$.
By the definition of a normed division ring completion, $\struct {\mathcal C \, \big / \mathcal N, \norm {\, \cdot \,} }$ is a normed division ring completion of $\struct {R, \norm {\, \cdot \,} }$.
$\blacksquare$
http://spark.apache.org/docs/latest/mllib-decision-tree.html | # Decision Trees - RDD-based API
Decision trees and their ensembles are popular methods for the machine learning tasks of classification and regression. Decision trees are widely used since they are easy to interpret, handle categorical features, extend to the multiclass classification setting, do not require feature scaling, and are able to capture non-linearities and feature interactions. Tree ensemble algorithms such as random forests and boosting are among the top performers for classification and regression tasks.
spark.mllib supports decision trees for binary and multiclass classification and for regression, using both continuous and categorical features. The implementation partitions data by rows, allowing distributed training with millions of instances.
Ensembles of trees (Random Forests and Gradient-Boosted Trees) are described in the Ensembles guide.
## Basic algorithm
The decision tree is a greedy algorithm that performs a recursive binary partitioning of the feature space. The tree predicts the same label for each bottommost (leaf) partition. Each partition is chosen greedily by selecting the best split from a set of possible splits, in order to maximize the information gain at a tree node. In other words, the split chosen at each tree node is chosen from the set $\underset{s}{\operatorname{argmax}} IG(D,s)$ where $IG(D,s)$ is the information gain when a split $s$ is applied to a dataset $D$.
### Node impurity and information gain
The node impurity is a measure of the homogeneity of the labels at the node. The current implementation provides two impurity measures for classification (Gini impurity and entropy) and one impurity measure for regression (variance).
| Impurity | Task | Formula | Description |
|---|---|---|---|
| Gini impurity | Classification | $\sum_{i=1}^{C} f_i(1-f_i)$ | $f_i$ is the frequency of label $i$ at a node and $C$ is the number of unique labels. |
| Entropy | Classification | $\sum_{i=1}^{C} -f_i \log(f_i)$ | $f_i$ is the frequency of label $i$ at a node and $C$ is the number of unique labels. |
| Variance | Regression | $\frac{1}{N} \sum_{i=1}^{N} (y_i - \mu)^2$ | $y_i$ is the label for an instance, $N$ is the number of instances and $\mu$ is the mean given by $\frac{1}{N} \sum_{i=1}^N y_i$. |
The information gain is the difference between the parent node impurity and the weighted sum of the two child node impurities. Assuming that a split $s$ partitions the dataset $D$ of size $N$ into two datasets $D_{left}$ and $D_{right}$ of sizes $N_{left}$ and $N_{right}$, respectively, the information gain is:
$IG(D,s) = Impurity(D) - \frac{N_{left}}{N} Impurity(D_{left}) - \frac{N_{right}}{N} Impurity(D_{right})$
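To make these definitions concrete, here is a small illustrative sketch in plain NumPy (an editorial example, not Spark's internal implementation) of the three impurity measures and the information-gain criterion:

```python
import numpy as np

def gini(labels):
    """Gini impurity: sum_i f_i * (1 - f_i) over the label frequencies f_i."""
    _, counts = np.unique(labels, return_counts=True)
    f = counts / counts.sum()
    return float(np.sum(f * (1.0 - f)))

def entropy(labels):
    """Entropy: sum_i -f_i * log(f_i)."""
    _, counts = np.unique(labels, return_counts=True)
    f = counts / counts.sum()
    return float(-np.sum(f * np.log(f)))

def variance(labels):
    """Regression impurity: mean squared deviation from the mean label."""
    y = np.asarray(labels, dtype=float)
    return float(np.mean((y - y.mean()) ** 2))

def information_gain(parent, left, right, impurity=gini):
    """IG(D,s) = Impurity(D) - (N_left/N) Impurity(D_left) - (N_right/N) Impurity(D_right)."""
    n = len(parent)
    return (impurity(parent)
            - len(left) / n * impurity(left)
            - len(right) / n * impurity(right))

labels = [0, 0, 0, 1, 1, 1]
print(information_gain(labels, labels[:3], labels[3:]))  # a perfect split: IG = 0.5 with Gini
```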
### Split candidates
Continuous features
For small datasets in single-machine implementations, the split candidates for each continuous feature are typically the unique values for the feature. Some implementations sort the feature values and then use the ordered unique values as split candidates for faster tree calculations.
Sorting feature values is expensive for large distributed datasets. This implementation computes an approximate set of split candidates by performing a quantile calculation over a sampled fraction of the data. The ordered splits create “bins” and the maximum number of such bins can be specified using the maxBins parameter.
Note that the number of bins cannot be greater than the number of instances $N$ (a rare scenario since the default maxBins value is 32). The tree algorithm automatically reduces the number of bins if the condition is not satisfied.
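As a rough editorial illustration of this idea (not Spark's actual code), approximate split thresholds for one continuous feature can be taken from quantiles of a sampled fraction of the values; the function name and the sampling fraction below are invented for the sketch:

```python
import numpy as np

def approximate_splits(values, max_bins=32, sample_fraction=0.2, seed=0):
    """Approximate split candidates via quantiles of a sample of the data."""
    rng = np.random.default_rng(seed)
    sample = rng.choice(values, size=max(1, int(len(values) * sample_fraction)),
                        replace=False)
    # max_bins bins require (max_bins - 1) interior thresholds.
    qs = np.linspace(0.0, 1.0, max_bins + 1)[1:-1]
    return np.unique(np.quantile(sample, qs))

feature = np.random.default_rng(1).normal(size=100_000)
print(approximate_splits(feature, max_bins=8))  # 7 approximate thresholds
```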
Categorical features
For a categorical feature with $M$ possible values (categories), one could come up with $2^{M-1}-1$ split candidates. For binary (0/1) classification and regression, we can reduce the number of split candidates to $M-1$ by ordering the categorical feature values by the average label. (See Section 9.2.4 in Elements of Statistical Machine Learning for details.) For example, for a binary classification problem with one categorical feature with three categories A, B and C whose corresponding proportions of label 1 are 0.2, 0.6 and 0.4, the categorical features are ordered as A, C, B. The two split candidates are A | C, B and A , C | B where | denotes the split.
In multiclass classification, all $2^{M-1}-1$ possible splits are used whenever possible. When $2^{M-1}-1$ is greater than the maxBins parameter, we use a (heuristic) method similar to the method used for binary classification and regression. The $M$ categorical feature values are ordered by impurity, and the resulting $M-1$ split candidates are considered.
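The ordering heuristic for binary classification can be sketched in a few lines (an editorial illustration, reusing the A/B/C label-1 proportions from the example above):

```python
# Order categories by the proportion of label 1, then consider only the
# M - 1 "prefix" splits of the ordered category list.
proportions = {"A": 0.2, "B": 0.6, "C": 0.4}

ordered = sorted(proportions, key=proportions.get)            # ['A', 'C', 'B']
splits = [(ordered[:i], ordered[i:]) for i in range(1, len(ordered))]
print(splits)  # [(['A'], ['C', 'B']), (['A', 'C'], ['B'])]  i.e. A | C,B and A,C | B
```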
### Stopping rule
The recursive tree construction is stopped at a node when one of the following conditions is met:
1. The node depth is equal to the maxDepth training parameter.
2. No split candidate leads to an information gain greater than minInfoGain.
3. No split candidate produces child nodes which each have at least minInstancesPerNode training instances.
## Usage tips
We include a few guidelines for using decision trees by discussing the various parameters. The parameters are listed below roughly in order of descending importance. New users should mainly consider the “Problem specification parameters” section and the maxDepth parameter.
### Problem specification parameters
These parameters describe the problem you want to solve and your dataset. They should be specified and do not require tuning.
• algo: Type of decision tree, either Classification or Regression.
• numClasses: Number of classes (for Classification only).
• categoricalFeaturesInfo: Specifies which features are categorical and how many categorical values each of those features can take. This is given as a map from feature indices to feature arity (number of categories). Any features not in this map are treated as continuous.
• For example, Map(0 -> 2, 4 -> 10) specifies that feature 0 is binary (taking values 0 or 1) and that feature 4 has 10 categories (values {0, 1, ..., 9}). Note that feature indices are 0-based: features 0 and 4 are the 1st and 5th elements of an instance’s feature vector. (A usage sketch follows this list.)
• Note that you do not have to specify categoricalFeaturesInfo. The algorithm will still run and may get reasonable results. However, performance should be better if categorical features are properly designated.
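For instance, with the Python API shown later on this page, such a map is passed directly to the training call (a sketch; it assumes data is an already loaded RDD of LabeledPoint):

```python
from pyspark.mllib.tree import DecisionTree

# Feature 0 is binary; feature 4 has 10 categories. All others are continuous.
model = DecisionTree.trainClassifier(
    data, numClasses=2,
    categoricalFeaturesInfo={0: 2, 4: 10},
    impurity='gini', maxDepth=5, maxBins=32)
```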
### Stopping criteria
These parameters determine when the tree stops building (adding new nodes). When tuning these parameters, be careful to validate on held-out test data to avoid overfitting.
• maxDepth: Maximum depth of a tree. Deeper trees are more expressive (potentially allowing higher accuracy), but they are also more costly to train and are more likely to overfit.
• minInstancesPerNode: For a node to be split further, each of its children must receive at least this number of training instances. This is commonly used with RandomForest since those are often trained deeper than individual trees.
• minInfoGain: For a node to be split further, the split must improve at least this much (in terms of information gain).
### Tunable parameters
These parameters may be tuned. Be careful to validate on held-out test data when tuning in order to avoid overfitting.
• maxBins: Number of bins used when discretizing continuous features.
• Increasing maxBins allows the algorithm to consider more split candidates and make fine-grained split decisions. However, it also increases computation and communication.
• Note that the maxBins parameter must be at least the maximum number of categories $M$ for any categorical feature.
• maxMemoryInMB: Amount of memory to be used for collecting sufficient statistics.
• The default value is conservatively chosen to be 256 MB to allow the decision algorithm to work in most scenarios. Increasing maxMemoryInMB can lead to faster training (if the memory is available) by allowing fewer passes over the data. However, there may be decreasing returns as maxMemoryInMB grows since the amount of communication on each iteration can be proportional to maxMemoryInMB.
• Implementation details: For faster processing, the decision tree algorithm collects statistics about groups of nodes to split (rather than 1 node at a time). The number of nodes which can be handled in one group is determined by the memory requirements (which vary per features). The maxMemoryInMB parameter specifies the memory limit in terms of megabytes which each worker can use for these statistics.
• subsamplingRate: Fraction of the training data used for learning the decision tree. This parameter is most relevant for training ensembles of trees (using RandomForest and GradientBoostedTrees), where it can be useful to subsample the original data. For training a single decision tree, this parameter is less useful since the number of training instances is generally not the main constraint.
• impurity: Impurity measure (discussed above) used to choose between candidate splits. This measure must match the algo parameter.
### Caching and checkpointing
MLlib 1.2 adds several features for scaling up to larger (deeper) trees and tree ensembles. When maxDepth is set to be large, it can be useful to turn on node ID caching and checkpointing. These parameters are also useful for RandomForest when numTrees is set to be large.
• useNodeIdCache: If this is set to true, the algorithm will avoid passing the current model (tree or trees) to executors on each iteration.
• This can be useful with deep trees (speeding up computation on workers) and for large Random Forests (reducing communication on each iteration).
• Implementation details: By default, the algorithm communicates the current model to executors so that executors can match training instances with tree nodes. When this setting is turned on, then the algorithm will instead cache this information.
Node ID caching generates a sequence of RDDs (1 per iteration). This long lineage can cause performance problems, but checkpointing intermediate RDDs can alleviate those problems. Note that checkpointing is only applicable when useNodeIdCache is set to true.
• checkpointDir: Directory for checkpointing node ID cache RDDs.
• checkpointInterval: Frequency for checkpointing node ID cache RDDs. Setting this too low will cause extra overhead from writing to HDFS; setting this too high can cause problems if executors fail and the RDD needs to be recomputed.
## Scaling
Computation scales approximately linearly in the number of training instances, in the number of features, and in the maxBins parameter. Communication scales approximately linearly in the number of features and in maxBins.
The implemented algorithm reads both sparse and dense data. However, it is not optimized for sparse input.
## Examples
### Classification
The example below demonstrates how to load a LIBSVM data file, parse it as an RDD of LabeledPoint and then perform classification using a decision tree with Gini impurity as an impurity measure and a maximum tree depth of 5. The test error is calculated to measure the algorithm accuracy.
Refer to the DecisionTree Scala docs and DecisionTreeModel Scala docs for details on the API.
import org.apache.spark.mllib.tree.DecisionTree
import org.apache.spark.mllib.tree.model.DecisionTreeModel
import org.apache.spark.mllib.util.MLUtils
// Load and parse the data file.
val data = MLUtils.loadLibSVMFile(sc, "data/mllib/sample_libsvm_data.txt")
// Split the data into training and test sets (30% held out for testing)
val splits = data.randomSplit(Array(0.7, 0.3))
val (trainingData, testData) = (splits(0), splits(1))
// Train a DecisionTree model.
// Empty categoricalFeaturesInfo indicates all features are continuous.
val numClasses = 2
val categoricalFeaturesInfo = Map[Int, Int]()
val impurity = "gini"
val maxDepth = 5
val maxBins = 32
val model = DecisionTree.trainClassifier(trainingData, numClasses, categoricalFeaturesInfo,
impurity, maxDepth, maxBins)
// Evaluate model on test instances and compute test error
val labelAndPreds = testData.map { point =>
val prediction = model.predict(point.features)
(point.label, prediction)
}
val testErr = labelAndPreds.filter(r => r._1 != r._2).count().toDouble / testData.count()
println("Test Error = " + testErr)
println("Learned classification tree model:\n" + model.toDebugString)
model.save(sc, "target/tmp/myDecisionTreeClassificationModel")
val sameModel = DecisionTreeModel.load(sc, "target/tmp/myDecisionTreeClassificationModel")
Find full example code at "examples/src/main/scala/org/apache/spark/examples/mllib/DecisionTreeClassificationExample.scala" in the Spark repo.
Refer to the DecisionTree Java docs and DecisionTreeModel Java docs for details on the API.
import java.util.HashMap;
import java.util.Map;
import scala.Tuple2;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.api.java.function.PairFunction;
import org.apache.spark.mllib.regression.LabeledPoint;
import org.apache.spark.mllib.tree.DecisionTree;
import org.apache.spark.mllib.tree.model.DecisionTreeModel;
import org.apache.spark.mllib.util.MLUtils;
SparkConf sparkConf = new SparkConf().setAppName("JavaDecisionTreeClassificationExample");
JavaSparkContext jsc = new JavaSparkContext(sparkConf);
// Load and parse the data file.
String datapath = "data/mllib/sample_libsvm_data.txt";
JavaRDD<LabeledPoint> data = MLUtils.loadLibSVMFile(jsc.sc(), datapath).toJavaRDD();
// Split the data into training and test sets (30% held out for testing)
JavaRDD<LabeledPoint>[] splits = data.randomSplit(new double[]{0.7, 0.3});
JavaRDD<LabeledPoint> trainingData = splits[0];
JavaRDD<LabeledPoint> testData = splits[1];
// Set parameters.
// Empty categoricalFeaturesInfo indicates all features are continuous.
Integer numClasses = 2;
Map<Integer, Integer> categoricalFeaturesInfo = new HashMap<>();
String impurity = "gini";
Integer maxDepth = 5;
Integer maxBins = 32;
// Train a DecisionTree model for classification.
final DecisionTreeModel model = DecisionTree.trainClassifier(trainingData, numClasses,
categoricalFeaturesInfo, impurity, maxDepth, maxBins);
// Evaluate model on test instances and compute test error
JavaPairRDD<Double, Double> predictionAndLabel =
testData.mapToPair(new PairFunction<LabeledPoint, Double, Double>() {
@Override
public Tuple2<Double, Double> call(LabeledPoint p) {
return new Tuple2<>(model.predict(p.features()), p.label());
}
});
Double testErr =
1.0 * predictionAndLabel.filter(new Function<Tuple2<Double, Double>, Boolean>() {
@Override
public Boolean call(Tuple2<Double, Double> pl) {
return !pl._1().equals(pl._2());
}
}).count() / testData.count();
System.out.println("Test Error: " + testErr);
System.out.println("Learned classification tree model:\n" + model.toDebugString());
model.save(jsc.sc(), "target/tmp/myDecisionTreeClassificationModel");
DecisionTreeModel sameModel = DecisionTreeModel
  .load(jsc.sc(), "target/tmp/myDecisionTreeClassificationModel");
Find full example code at "examples/src/main/java/org/apache/spark/examples/mllib/JavaDecisionTreeClassificationExample.java" in the Spark repo.
Refer to the DecisionTree Python docs and DecisionTreeModel Python docs for more details on the API.
from pyspark.mllib.tree import DecisionTree, DecisionTreeModel
from pyspark.mllib.util import MLUtils
# Load and parse the data file into an RDD of LabeledPoint.
data = MLUtils.loadLibSVMFile(sc, 'data/mllib/sample_libsvm_data.txt')
# Split the data into training and test sets (30% held out for testing)
(trainingData, testData) = data.randomSplit([0.7, 0.3])
# Train a DecisionTree model.
# Empty categoricalFeaturesInfo indicates all features are continuous.
model = DecisionTree.trainClassifier(trainingData, numClasses=2, categoricalFeaturesInfo={},
impurity='gini', maxDepth=5, maxBins=32)
# Evaluate model on test instances and compute test error
predictions = model.predict(testData.map(lambda x: x.features))
labelsAndPredictions = testData.map(lambda lp: lp.label).zip(predictions)
testErr = labelsAndPredictions.filter(lambda (v, p): v != p).count() / float(testData.count())
print('Test Error = ' + str(testErr))
print('Learned classification tree model:')
print(model.toDebugString())
model.save(sc, "target/tmp/myDecisionTreeClassificationModel")
sameModel = DecisionTreeModel.load(sc, "target/tmp/myDecisionTreeClassificationModel")
Find full example code at "examples/src/main/python/mllib/decision_tree_classification_example.py" in the Spark repo.
### Regression
The example below demonstrates how to load a LIBSVM data file, parse it as an RDD of LabeledPoint and then perform regression using a decision tree with variance as an impurity measure and a maximum tree depth of 5. The Mean Squared Error (MSE) is computed at the end to evaluate goodness of fit.
Refer to the DecisionTree Scala docs and DecisionTreeModel Scala docs for details on the API.
import org.apache.spark.mllib.tree.DecisionTree
import org.apache.spark.mllib.tree.model.DecisionTreeModel
import org.apache.spark.mllib.util.MLUtils
// Load and parse the data file.
val data = MLUtils.loadLibSVMFile(sc, "data/mllib/sample_libsvm_data.txt")
// Split the data into training and test sets (30% held out for testing)
val splits = data.randomSplit(Array(0.7, 0.3))
val (trainingData, testData) = (splits(0), splits(1))
// Train a DecisionTree model.
// Empty categoricalFeaturesInfo indicates all features are continuous.
val categoricalFeaturesInfo = Map[Int, Int]()
val impurity = "variance"
val maxDepth = 5
val maxBins = 32
val model = DecisionTree.trainRegressor(trainingData, categoricalFeaturesInfo, impurity,
maxDepth, maxBins)
// Evaluate model on test instances and compute test error
val labelsAndPredictions = testData.map { point =>
val prediction = model.predict(point.features)
(point.label, prediction)
}
val testMSE = labelsAndPredictions.map{ case (v, p) => math.pow(v - p, 2) }.mean()
println("Test Mean Squared Error = " + testMSE)
println("Learned regression tree model:\n" + model.toDebugString)
model.save(sc, "target/tmp/myDecisionTreeRegressionModel")
val sameModel = DecisionTreeModel.load(sc, "target/tmp/myDecisionTreeRegressionModel")
Find full example code at "examples/src/main/scala/org/apache/spark/examples/mllib/DecisionTreeRegressionExample.scala" in the Spark repo.
Refer to the DecisionTree Java docs and DecisionTreeModel Java docs for details on the API.
import java.util.HashMap;
import java.util.Map;
import scala.Tuple2;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;
import org.apache.spark.mllib.regression.LabeledPoint;
import org.apache.spark.mllib.tree.DecisionTree;
import org.apache.spark.mllib.tree.model.DecisionTreeModel;
import org.apache.spark.mllib.util.MLUtils;
SparkConf sparkConf = new SparkConf().setAppName("JavaDecisionTreeRegressionExample");
JavaSparkContext jsc = new JavaSparkContext(sparkConf);
// Load and parse the data file.
String datapath = "data/mllib/sample_libsvm_data.txt";
JavaRDD<LabeledPoint> data = MLUtils.loadLibSVMFile(jsc.sc(), datapath).toJavaRDD();
// Split the data into training and test sets (30% held out for testing)
JavaRDD<LabeledPoint>[] splits = data.randomSplit(new double[]{0.7, 0.3});
JavaRDD<LabeledPoint> trainingData = splits[0];
JavaRDD<LabeledPoint> testData = splits[1];
// Set parameters.
// Empty categoricalFeaturesInfo indicates all features are continuous.
Map<Integer, Integer> categoricalFeaturesInfo = new HashMap<>();
String impurity = "variance";
Integer maxDepth = 5;
Integer maxBins = 32;
// Train a DecisionTree model.
final DecisionTreeModel model = DecisionTree.trainRegressor(trainingData,
categoricalFeaturesInfo, impurity, maxDepth, maxBins);
// Evaluate model on test instances and compute test error
JavaPairRDD<Double, Double> predictionAndLabel =
testData.mapToPair(new PairFunction<LabeledPoint, Double, Double>() {
@Override
public Tuple2<Double, Double> call(LabeledPoint p) {
return new Tuple2<>(model.predict(p.features()), p.label());
}
});
Double testMSE =
predictionAndLabel.map(new Function<Tuple2<Double, Double>, Double>() {
@Override
public Double call(Tuple2<Double, Double> pl) {
Double diff = pl._1() - pl._2();
return diff * diff;
}
}).reduce(new Function2<Double, Double, Double>() {
@Override
public Double call(Double a, Double b) {
return a + b;
}
}) / testData.count();
System.out.println("Test Mean Squared Error: " + testMSE);
System.out.println("Learned regression tree model:\n" + model.toDebugString());
model.save(jsc.sc(), "target/tmp/myDecisionTreeRegressionModel");
DecisionTreeModel sameModel = DecisionTreeModel
  .load(jsc.sc(), "target/tmp/myDecisionTreeRegressionModel");
Find full example code at "examples/src/main/java/org/apache/spark/examples/mllib/JavaDecisionTreeRegressionExample.java" in the Spark repo.
Refer to the DecisionTree Python docs and DecisionTreeModel Python docs for more details on the API.
from pyspark.mllib.tree import DecisionTree, DecisionTreeModel
from pyspark.mllib.util import MLUtils
# Load and parse the data file into an RDD of LabeledPoint.
data = MLUtils.loadLibSVMFile(sc, 'data/mllib/sample_libsvm_data.txt')
# Split the data into training and test sets (30% held out for testing)
(trainingData, testData) = data.randomSplit([0.7, 0.3])
# Train a DecisionTree model.
# Empty categoricalFeaturesInfo indicates all features are continuous.
model = DecisionTree.trainRegressor(trainingData, categoricalFeaturesInfo={},
impurity='variance', maxDepth=5, maxBins=32)
# Evaluate model on test instances and compute test error
predictions = model.predict(testData.map(lambda x: x.features))
labelsAndPredictions = testData.map(lambda lp: lp.label).zip(predictions)
testMSE = labelsAndPredictions.map(lambda (v, p): (v - p) * (v - p)).sum() /\
float(testData.count())
print('Test Mean Squared Error = ' + str(testMSE))
print('Learned regression tree model:')
print(model.toDebugString())
model.save(sc, "target/tmp/myDecisionTreeRegressionModel")
sameModel = DecisionTreeModel.load(sc, "target/tmp/myDecisionTreeRegressionModel")
https://cs.stackexchange.com/questions/41154/can-a-probabilistic-turing-machine-compute-an-uncomputable-number | # Can a probabilistic Turing Machine compute an uncomputable number?
My question probably does not make sense, but if that is the case, is there a reasonably simple formal explanation for why? I should add that I am pretty much ignorant of probabilistic TMs and randomized algorithms. I looked at Wikipedia, but may even have misunderstood what I read.
The reason I am asking that is that only the computable numbers can have their digits enumerated by a Turing Machine.
But with a probabilistic Turing Machine, I can enumerate any infinite sequence of digits, hence also sequences corresponding to non-computable numbers.

Actually, since there are only countably many computable numbers, while there are uncountably many reals that can have their digits enumerated, I could say that my probabilistic Turing Machine can be made to enumerate the digits of a non-computable number with probability 1.
I believe this can only be fallacious, but why? Is there a specific provision in the definition of probabilistic TM that prevents that?
Actually, I ran into this while thinking about whether various computation models can be simulated by a deterministic TM, in the question "Are nondeterministic algorithm and randomized algorithms algorithms on a deterministic Turing machine?". Another possibly related question is "Are there any practical differences between a Turing machine with a PRNG and a probabilistic Turing machine?".
• What does it mean for a probabilistic Turing machine to compute a number? If I give you a probabilistic Turing machine, can you tell me which number it computes, if any? – Yuval Filmus Apr 8 '15 at 22:51
Consider the following reasonable definition for a Turing machine computing an irrational number in $[0,1]$.
A Turing machine computes an irrational $r \in [0,1]$ if, on input $n$, it outputs the first $n$ digits (after the decimal) of the binary representation of $r$.
One can think of many extensions of this definition for probabilistic Turing machines. Here is a very permissive one.
A probabilistic Turing machine computes an irrational $r \in [0,1]$ if, on input $n$, (1) it outputs the first $n$ digits of $r$ with probability $p$; (2) it outputs any other string with probability less than $p$; (3) it never halts with probability less than $p$.
Under this definition, it is not immediately clear whether everything that you can compute is indeed computable (under the sense of the first definition).
However, there are some modifications that do allow us to conclude that the resulting number is computable, for example:
1. We can insist that the machine always halt.
2. We can insist that $p > 1/2$.
Other modifications are not necessarily enough. For example, does it help if we assume that the non-halting probability tends to $0$ with $n$?
Summarizing, it might depend on the model.
http://users.umiacs.umd.edu/~hal/HBC/hbc_v0_1.html | HBC: Hierarchical Bayes Compiler
Pre-release version 0.1
HBC is a toolkit for implementing hierarchical Bayesian models. HBC was created because I felt like I spent too much time writing boilerplate code for inference problems in Bayesian models. There are several goals of HBC:
1. Allow a natural implementation of hierarchical models.
2. Enable quick and dirty debugging of models for standard data types.
3. Focus on large-dimension discrete models.
4. More general than simple Gibbs sampling (e.g., allowing for maximizations, EM and message passing).
5. Allow for hierarchical models to be easily embedded in larger programs.
6. Automatic Rao-Blackwellization (aka collapsing).
7. Allow efficient execution via compilation to other languages (such as C, Java, Matlab, etc.).
These goals distinguish HBC from other Bayesian modeling software, such as Bugs (or WinBugs). In particular, our primary goal is that models created in HBC can be used directly, rather than only as a first-pass test. Moreover, we aim for scalability with respect to data size. Finally, since the goal of HBC is to compile hierarchical models into standard programming languages (like C), these models can easily be used as part of a larger system. This last point is in the spirit of the dynamic programming language Dyna.
Note that some of these aren't yet supported (in particular: 4 and 6) but should be coming soon!
A Quick Example
To give a flavor of what HBC is all about, here is a complete implementation of a Bayesian mixture of Gaussians model in HBC format:
alpha ~ Gam(10,10)
mu_{k} ~ NorMV(vec(0.0,1,dim), 1) , k \in [1,K]
si2 ~ IG(10,10)
pi ~ DirSym(alpha, K)
z_{n} ~ Mult(pi) , n \in [1,N]
x_{n} ~ NorMV(mu_{z_{n}}, si2) , n \in [1,N]
If you are used to reading hierarchical models, it should be quite clear what this model does. Moreover, by keeping to a very LaTeX-like style, it is quite straightforward to automatically typeset any hierarchical model. If this file were stored in mix_gauss.hier, and if we had data for x stored in a file called X, we could run this model (with two Gaussians) directly by saying:
hbc simulate --loadM X x N dim --define K 2 mix_gauss.hier
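As an editorial aid (my own illustration, not HBC output), here is a rough NumPy sketch of the generative process that mix_gauss.hier describes; the shape/scale parameterizations assumed below for Gam and IG are one plausible reading of Gam(10,10) and IG(10,10):

```python
import numpy as np

rng = np.random.default_rng(0)
K, N, dim = 2, 500, 3  # example sizes; K and the data dimensions come from the CLI

alpha = rng.gamma(shape=10.0, scale=10.0)                 # alpha ~ Gam(10, 10), assumed shape/scale
mu    = rng.normal(0.0, 1.0, size=(K, dim))               # mu_k ~ NorMV(0, 1)
si2   = 1.0 / rng.gamma(shape=10.0, scale=1.0 / 10.0)     # si2 ~ IG(10, 10), via 1/Gamma(shape, rate)
pi    = rng.dirichlet(np.full(K, alpha))                  # pi ~ DirSym(alpha, K)

z = rng.choice(K, size=N, p=pi)                           # z_n ~ Mult(pi)
x = mu[z] + np.sqrt(si2) * rng.normal(size=(N, dim))      # x_n ~ NorMV(mu_{z_n}, si2)
```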
Perhaps closer to my heart would be a six-line implementation of the Latent Dirichlet Allocation model, complete with hyperparameter estimation:
alpha ~ Gam(0.1,1)
eta ~ Gam(0.1,1)
beta_{k} ~ DirSym(eta, V) , k \in [1,K]
theta_{d} ~ DirSym(alpha, K) , d \in [1,D]
z_{d,n} ~ Mult(theta_{d}) , d \in [1,D] , n \in [1,N_{d}]
w_{d,n} ~ Mult(beta_{z_{d,n}}) , d \in [1,D] , n \in [1,N_{d}]
This code can either be run directly (e.g., by a simulate call as above) or compiled to native C code for (much) faster execution.
https://www.semanticscholar.org/paper/The-Weight-Distribution-of-Quasi-quadratic-Residue-Boston-Hao/e3239244301b92eb8a3187c783dc979426368806 | # The Weight Distribution of Quasi-quadratic Residue Codes
@article{Boston2017TheWD,
title={The Weight Distribution of Quasi-quadratic Residue Codes},
author={Nigel Boston and Jing Hao},
year={2017},
volume={12},
pages={363-385}
}
• Published 18 May 2017
• Computer Science
We investigate a family of codes called quasi-quadratic residue (QQR) codes. We are interested in these codes mainly for two reasons: Firstly, they have close relations with hyperelliptic curves and Goppa's Conjecture, and serve as a strong tool in studying those objects. Secondly, they are very good codes. Computational results show they have large minimum distances when $p \equiv 3 \pmod 8$. Our studies focus on the weight distributions of these codes. We will…
https://www.gamedev.net/forums/topic/336304-how-would-i-rotate-around-a-center-point/ | # How would I rotate around a center point?
## Recommended Posts
Say I want to rotate the other points around the center point, and I don't have OpenGL to push, rotate, and pop... how can I do it?
You could use matrices like the APIs do, but just code it all yourself; it's fairly easy. Just search Google for 2D rotation matrices (or 3D ones, if that's what you need) and also google for matrix maths.

ace
Quote:
Original post by ace_lovegrove: You could use matrices like the APIs but just code it all yourself, it's fairly easy. Just search Google for 2D rotation matrices or just 3D ones if that's what you need and also google for matrix maths. ace

The sites I find are confusing.
You need to grasp a few concepts before this will work:
- matrix multiplication (click)
- matrix/vector multiplication
Let us assume a 2D rotation. Furthermore, simplify by rotating around the origin first, so C = (0,0). We shall rotate by an angle Phi, in radians. Clicky2 gives the rotation matrix for this:

 cos Phi   sin Phi
-sin Phi   cos Phi
Just fill in the values and you have a simple rotation matrix.
Transformation matrices are applied to a coordinate by multiplying the coordinates vector with the matrix. If the previous matrix is R and we want to rotate the point P we compute P' = P R. See Clicky1.
Now transformations can, besides rotations, also be translations. We could build a matrix that moves some point P 5 units up along the Y axis. Thus, we could also construct a translation matrix that would move the origin (0,0) to the desired center point C = (Cx,Cy). The last step is that we can combine two transformation matrices using multiplication.
We had a rotation matrix R and we just introduced a translation matrix T. If you want to rotate around C you can simply compute the transform matrix M = T R and transform your points with this.
This will probably sound complicated, but I suggest you read through the given links and try some examples on paper. Feel free to ask if you are stuck.
Illco
It is not difficult at all.

You can do it in three steps, all combined into a single transform.

Say the origin is p(x0, y0).

The first thing you do is to translate everything to the origin:

x' = x - x0
y' = y - y0

In matrix form this is equivalent to

[x' y' 1] = [x y 1] * [  1    0   0 ]
                      [  0    1   0 ]
                      [ -x0  -y0  1 ]

Now you rotate each point by the angle you want:

x'' = x' cos(a) - y' sin(a)
y'' = x' sin(a) + y' cos(a)

In matrix form this is

[x'' y'' 1] = [x' y' 1] * [  cos(a)  sin(a)  0 ]
                          [ -sin(a)  cos(a)  0 ]
                          [    0       0     1 ]

Finally you translate everything back to where it was:

x''' = x'' + x0
y''' = y'' + y0

And in matrix form this is:

[x''' y''' 1] = [x'' y'' 1] * [  1    0   0 ]
                              [  0    1   0 ]
                              [ x0   y0   1 ]

Now calling the three matrices inv(T), R, T, and concatenating all three equations, you get

P''' = P * inv(T) * R * T

The expression

inv(T) * R * T

is very common in linear algebra and is called a similarity transformation. You can develop the algebra by expanding the expression and save a few operations here and there, but I recommend you do the matrix multiplies in some matrix class; that will guarantee a correct result no matter what the situation is.
Quote:
Original post by Anonymous Poster: It is not difficult at all.

Ironically, this is the only thing I could understand...
Basically, rotation math is always done around the origin (0, 0). So if you want to rotate something around this point, you make sure its coordinates are relative to it, probably with a translation if it's not in local space. Do your rotation, then translate it to the point you want it.
Assume View means the point/target you want to rotate around.
// This rotates the object's position around the view depending on the values passed in.
void c3dObject::RotateAroundView(float angle, float x, float y, float z)
{
    CVector3 vNewView;

    // Get the view vector (The direction we are facing)
    CVector3 vView = m_vPosition - m_vView;

    // Calculate the sine and cosine of the angle once
    float cosTheta = (float)cos(angle);
    float sinTheta = (float)sin(angle);

    // Find the new x position for the new rotated point
    vNewView.X  = (cosTheta + (1 - cosTheta) * x * x) * vView.X;
    vNewView.X += ((1 - cosTheta) * x * y - z * sinTheta) * vView.Y;
    vNewView.X += ((1 - cosTheta) * x * z + y * sinTheta) * vView.Z;

    // Find the new y position for the new rotated point
    vNewView.Y  = ((1 - cosTheta) * x * y + z * sinTheta) * vView.X;
    vNewView.Y += (cosTheta + (1 - cosTheta) * y * y) * vView.Y;
    vNewView.Y += ((1 - cosTheta) * y * z - x * sinTheta) * vView.Z;

    // Find the new z position for the new rotated point
    vNewView.Z  = ((1 - cosTheta) * x * z - y * sinTheta) * vView.X;
    vNewView.Z += ((1 - cosTheta) * y * z + x * sinTheta) * vView.Y;
    vNewView.Z += (cosTheta + (1 - cosTheta) * z * z) * vView.Z;

    // Now we just add the newly rotated vector to our position to set
    // our new rotated view of our camera.
    m_vPosition = m_vView + vNewView;
}
Well, I am sorry I confused you. What GodBeastX posted is exactly what I said.

If you know at least the concept of matrices and matrix multiplication, it should be clear. If you do not know that but you have a matrix class, this is what you do:

Matrix RotationAboutAPoint(Point P, Angle A)
{
    Matrix invT = TranslationMatrix(P.Scale(-1));
    Matrix R    = RotationMatrix(A);
    Matrix T    = TranslationMatrix(P);
    Matrix mat  = invT * R * T;
    return mat;
}

Then you multiply each point by the matrix mat.
https://www.bartleby.com/solution-answer/chapter-17-problem-8ps-chemistry-and-chemical-reactivity-10th-edition/9781337399074/10bdb30a-a2ce-11e8-9bb5-0ece094302b6 | Question
Chapter 17, Problem 8PS
a)
Interpretation Introduction
## Interpretation
Calculate the pH of the resulting buffer solution formed by adding 2.75 g of NaCH3CHOHCO2 (sodium lactate) to 5.00×10² mL of 0.100 M lactic acid.

Concept introduction: The Henderson-Hasselbalch equation relates the pH of a buffer to the pKa of the acid and to the concentrations of the conjugate base and the acid:

pH = pKa + log([conjugate base]/[acid])   (1)

This equation shows that the pH of a buffer solution is controlled by two major factors: first, the strength of the acid, expressed in terms of pKa; and second, the relative concentrations of the acid and its conjugate base at equilibrium. Since equation (1) makes the pH of the buffer comparable to the pKa value, it can be used to relate the pH to the pKa of the acid.

pKa is defined as the negative logarithm of the Ka value of the acid:

pKa = −log(Ka)   (2)
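As an editorial aid, here is a worked sketch of part (a); the Ka of lactic acid (1.4×10⁻⁴) and the molar mass of sodium lactate (112.06 g/mol) are assumed values, not given in this excerpt, so check them against your text:

```python
import math

Ka = 1.4e-4        # assumed acid-dissociation constant of lactic acid
M_salt = 112.06    # g/mol, assumed molar mass of NaCH3CHOHCO2

mol_base = 2.75 / M_salt        # ~0.0245 mol lactate
conc_base = mol_base / 0.500    # 5.00x10^2 mL = 0.500 L, so ~0.0491 M
conc_acid = 0.100               # M lactic acid

pH = -math.log10(Ka) + math.log10(conc_base / conc_acid)  # Henderson-Hasselbalch
print(round(pH, 2))  # ~3.54
```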
b)
Interpretation Introduction
### Interpretation:
Calculate the pH of a 0.1 M solution of lactic acid and then compare it with the pH of the buffered solution.

Concept introduction: In aqueous solution an acid undergoes ionization. The ionization of an acid is expressed in terms of the equilibrium constant, and this quantitative measurement tells about the strength of the acid: the higher the value of Ka, the stronger the acid. The acid dissociation can be represented by the following equilibrium:

HA(aq) + H2O(l) ⇌ H3O+(aq) + A−(aq)

A weak acid undergoes partial dissociation in aqueous solution, and the expression for the dissociation constant Ka (or equilibrium constant) is given as

Ka = [H3O+](eq) [A−](eq) / [HA](eq)

Here, [H3O+](eq) is the equilibrium concentration of hydronium ion, [A−](eq) is the equilibrium concentration of the conjugate base of the acid, and [HA](eq) is the equilibrium concentration of the acid.

The ICE table gives the relationship between the concentrations of the species at equilibrium.

|                 | HA(aq) | H3O+(aq) | A−(aq) |
|-----------------|--------|----------|--------|
| Initial (M)     | c      | 0        | 0      |
| Change (M)      | −cx    | +cx      | +cx    |
| Equilibrium (M) | c − cx | cx       | cx     |

The pH of a solution is basically the measure of the molar concentration of the H+ (or H3O+) ion in the solution. The greater the concentration of H+ (or H3O+) ion in the solution, the lower the pH value and the more acidic the solution.

The expression for pH is given as

pH = −log[H+]   (4)
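The full worked answer is not reproduced on this page. As a hedged sketch only, applying equation (1) with a literature value Ka ≈ 1.4×10⁻⁴ (pKa ≈ 3.85) for lactic acid and a molar mass of about 112.06 g/mol for sodium lactate (both assumptions, not given above):

n(lactate) = 2.75 g / 112.06 g mol⁻¹ ≈ 0.0245 mol, so [lactate] ≈ 0.0245 mol / 0.500 L ≈ 0.0491 M

pH = pKa + log([lactate]/[acid]) ≈ 3.85 + log(0.0491/0.100) ≈ 3.85 − 0.31 ≈ 3.5

For part (b), 0.100 M lactic acid on its own gives [H3O+] ≈ √(Ka·c) = √(1.4×10⁻⁴ × 0.100) ≈ 3.7×10⁻³ M, i.e. pH ≈ 2.4, noticeably more acidic than the buffered solution.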
Source textbook: Chemistry & Chemical Reactivity
10th Edition
ISBN: 9781337399074
Author: John C. Kotz, Paul M. Treichel, John Townsend, David Treichel
Publisher: Cengage Learning
https://quant.stackexchange.com/questions/27763/why-does-jump-process-has-to-be-cadlag-and-not-the-other-way-around/27764 | # Why does jump process has to be Cadlag and not the other way around
In all books and references that I have been exposed to, jump processes have been defined to be cadlag (right continuous with left limits). But no one has explained why this is the preferable case: why can't it be caglad?

I suspect it has something to do with the filtration, but I don't know the exact reasoning.
Let's imagine a simple process like a Poisson process. It is naturally cadlag, because at the time you jump, you jump. Just before, you have not jumped. Mathematically, if the first jump occurs at $t$, $\forall s<t, N_s=0$ and $N_t=1$. It means that the jump occurring at time $t$ is $t$-measurable (even if it is not predictable).
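A hedged way to state this formally (my notation, not from the original answer): with cadlag paths the event that the first jump has happened by time $t$ is

$$\{\tau \le t\} = \{N_t \ge 1\} \in \mathcal{F}_t^N,$$

so $\tau$ is a stopping time of the natural filtration. With a caglad (left-continuous) version, $N_t$ only records jumps strictly before $t$, so deciding $\{\tau \le t\}$ requires peeking at $N_{t+}$, and $\tau$ fails to be a stopping time.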
• That makes sense, but I guess a left-continuous process could also be $t$-measurable. The portfolio position, for example, has usually been assumed to be a predictable process. In that sense, of course it's hard to imagine what a cadlag position would be; it's the trader's own decision to change position, so there should be no surprise. If jumps are "surprises" then I agree it's naturally cadlag, but is there a mathematical reasoning for it? – Kenneth Chen Jun 23 '16 at 18:49
• Speaking of the Poisson process, caglad would mean that a jump at time $t$ is $t_+$-measurable for the process. Do you agree that it would be weird for the jump time $\tau$ not to be a stopping time of the filtration of the process? – MJ73550 Jun 24 '16 at 9:12
• @MJ73550 why is a caglad process $t_{+}$-measurable, please? That is indeed weird if that's the case. – Kenneth Chen Jul 4 '16 at 14:08
• I did not say that the process is $t_+$-measurable. I said that knowing a jump has occurred before $t$, i.e. $\tau\leq t$, would be $t_+$-measurable due to the caglad behavior (draw it to convince yourself). That's why we like cadlag processes. – MJ73550 Jul 4 '16 at 19:19
http://star-www.rl.ac.uk/star/docs/sc4.htx/sc4se12.html | ### 12 Dealing with Files
#### 12.1 Extracting parts of filenames
Occasionally you’ll want to work with parts of a filename, such as the path or the file type. The C-shell provides filename modifiers that select the various portions. A couple are shown in the example below.
set type = $1:e set name =$1:r
if ( $type == "bdf" ) then echo "Use BDF2NDF on a VAX to convert the Interim file$1"
else if ( $type != "dst" ) then hdstrace$name
else
hdstrace @’"$1"’ endif Suppose the first argument of a script, $1, is a filename called galaxy.bdf. The value of variable type is bdf and name equates to galaxy because of the presence of the filename modifiers :e and :r. The rest of the script uses the file type to control the processing, in this case to provide a listing of the contents of a data file using the Hdstrace utility.
The complete list of modifiers, their meanings, and examples is presented in the table below.
Modifier Value returned Value for filename /star/bin/kappa/comwest.sdf :e Portion of the filename following a full stop; if the filename does not contain a full stop, it returns a null string sdf :r Portion of the filename preceding a full stop; if there is no full stop present, it returns the complete filename comwest :h The path of the filename (mnemonic: h for head) /star/bin/kappa :t The tail of the file specification, excluding the path comwest.sdf
#### 12.2 Process a Series of Files
One of the most common things you’ll want to do, having devised a data-processing path, is to apply those operations to a series of data files. For this you need a foreach...end construct.
convert # Only need be invoked once per process
foreach file (*.fit)
stats $file end This takes all the FITS files in the current directory and computes the statistics for them using the stats command from (SUN/95). file is a shell variable. Each time in the loop file is assigned to the name of the next file of the list between the parentheses. The * is the familiar wildcard which matches any string. Remember when you want to use the shell variable’s value you prefix it with a $. Thus $file is the filename. ##### 12.2.1 NDFs Some data formats like the NDF demand that only the file name (i.e. what appears before the last dot) be given in commands. To achieve this you must first strip off the remainder (the file extension or type) with the :rfile modifier. foreach file (*.sdf) histogram$file:r accept
end
This processes all the HDS files in the current directory and calculates an histogram for each of them using the histogram command from (SUN/95). It assumes that the files are NDFs. The :r instructs the shell to remove the file extension (the part of the name following the the rightmost full stop). If we didn’t do this, the histogram task would try to find NDFs called SDF within each of the HDS files.
##### 12.2.2 Wildcarded lists of files
You can give a list of files separated by spaces, each of which can include the various UNIX wildcards. Thus the code below would report the name of each NDF and its standard deviation. The NDFs are called ‘Z’ followed by a single character, ccd1, ccd2, ccd3, and spot.
foreach file (Z?.sdf ccd[1-3].sdf spot.sdf)
echo "NDF:" $file:r"; sigma: "‘stats$file:r | grep "Standard deviation"‘
end
echo writes to standard output, so you can write text including values of shell variables to the screen or redirect it to a file. Thus the output produced by stats is piped (the | is the pipe) into the UNIX grep utility to search for the string "Standard deviation". The ‘ ‘ invokes the command, and the resulting standard deviation is substituted.
You might just want to provide an arbitrary list of NDFs as arguments to a generic script. Suppose you had a script called splotem, and you have made it executable with chmod +x splotem.
#!/bin/csh
figaro # Only need be invoked once per process
foreach file ($*) if (-e$file) then
splot $file:r accept endif end Notice the -e file-comparison operator. It tests whether the file exists or not. (Section 12.4 has a full list of the file operators.) To plot a series of spectra stored in NDFs, you just invoke it something like this. % ./splotem myndf.sdf arc[a-z].sdf hd[0-9]*.sdf See the glossary for a list of the available wildcards such as the [a-z] in the above example. ##### 12.2.3 Exclude the .sdf for NDFs In the splotem example from the previous section the list of NDFs on the command line required the inclusion of the .sdf file extension. Having to supply the .sdf for an NDF is abnormal. For reasons of familiarity and ease of use, you probably want your relevant scripts to accept a list of NDF names and to append the file extension automatically before the list is passed to foreach. So let’s modify the previous example to do this. #!/bin/csh figaro # Only need be invoked once per process # Append the HDS file extension to the supplied arguments. set ndfs set i = 1 while ($i <= $#argv ) set ndfs = ($ndfs[*] $argv[i]".sdf") @ i =$i + 1
end
# Plot each 1-dimensional NDFs.
foreach file ($ndfs[*]) if (-e$file) then
splot $file:r accept endif end This loops through all the arguments and appends the HDS-file extension to them by using a work array ndfs. The set defines a value for a shell variable; don’t forget the spaces around the =. ndfs[*] means all the elements of variable ndfs. The loop adds elements to ndfs which is initialised without a value. Note the necessary parentheses around the expression ($ndfs[*] $argv[i]".sdf"). On the command line the wildcards have to be passed verbatim, because the shell will try to match with files than don’t have the .sdf file extension. Thus you must protect the wildcards with quotes. It’s a nuisance, but the advantages of wildcards more than compensate. % ./splotem myndf ’arc[a-z]’ ’hd[0-9]*’ % ./noise myndf ’ccd[a-z]’ If you forget to write the ’ ’, you’ll receive a No match error. ##### 12.2.4 Examine a series of NDFs A common need is to browse through several datasets, perhaps to locate a specific one, or to determine which are acceptable for further processing. The following presents images of a series of NDFs using the display task of (SUN/95). The title of each plot tells you which NDF is currently displayed. foreach file (*.sdf) display$file:r axes style="’title==$file:r’" accept sleep 5 end sleep pauses the process for a given number of seconds, allowing you to view each image. If this is too inflexible you could add a prompt so the script displays the image once you press the return key. set nfiles = ‘ls *.sdf | wc -w‘ set i = 1 foreach file (*.sdf) display$file:r axes style="’title==$file:r’" accept # Prompt for the next file unless it is the last. if ($i < $nfiles ) then echo -n "Next?" set next =$<
# Increment the file counter by one.
@ i++
endif
end
The first lines shows a quick way of counting the number of files. It uses ls to expand the wildcards, then the command wc to count the number of words. The back quotes cause the instruction between them to be run and the values generated to be assigned to variable nfiles.
You can substitute another visualisation command for display as appropriate. You can also use the graphics database to plot more than one image on the screen or to hardcopy. The script $KAPPA_DIR/multiplot.csh does the latter. #### 12.3 Filename modification Thus far the examples have not created a new file. When you want to create an output file, you need a name for it. This could be an explicit name, one derived from the process identification number, one generated by some counter, or from the input filename. Here we deal with all but the trivial first case. ##### 12.3.1 Appending to the input filename To help identify datasets and to indicate the processing steps used to generate them, their names are often created by appending suffices to the original file name. This is illustrated below. foreach file (*.sdf) set ndf =$file:r
block in=$ndf out=$ndf"_sm" accept
end
This uses block from (SUN/95) to perform block smoothing on a series of NDFs, creating new NDFs, each of which takes the name of the corresponding input NDF with a _sm suffix. The accept keyword accepts the suggested defaults for parameters that would otherwise be prompted. We use the set to assign the NDF name to variable ndf for clarity.
##### 12.3.2 Appending a counter to the input filename
If a counter is preferred, this example
set count = 1
foreach file (*.sdf)
set ndf = $file:r @ count =$count + 1
block in=$ndf out=smooth$count accept
end
would behave as the previous one except that the output NDFs would be called smooth1, smooth2 and so on.
##### 12.3.3 Appending to the input filename
Whilst appending a suffix after each data-processing stage is feasible, it can generate some long names, which are tedious to handle. Instead you might want to replace part of the input name with a new string. The following creates another shell variable, ndfout by replacing the string _flat from the input NDF name with _sm. The script pipes the input name into the sed editor which performs the substitution.
foreach file (*_flat.sdf)
   set ndf = $file:r
   set ndfout = `echo $ndf | sed 's#_flat#_sm#'`
block in=$ndf out=$ndfout accept
end
The # is a delimiter for the strings being substituted; it should be a character that is not present in the strings being altered. Notice the ` ` backquotes in the assignment of ndfout. These instruct the shell to process the expression immediately, rather than treating it as a literal string. This is how you can put values output from UNIX commands and other applications into shell variables.
#### 12.4 File operators
There is a special class of C-shell operator that lets you test the properties of a file. A file operator is used in comparison expressions of the form if (file_operator file) then. A list of file operators is tabulated below.
The most common usage is to test for a file’s existence. The following only runs cleanup if the first argument is an existing file.
File operators:

| Operator | True if: |
|----------|----------|
| -d | file is a directory |
| -e | file exists |
| -f | file is ordinary |
| -o | you are the owner of the file |
| -r | file is readable by you |
| -w | file is writable by you |
| -x | file is executable by you |
| -z | file is empty |
# Check that the file given by the first
# argument exists before attempting to
# use it.
if ( -e $1 ) then
   cleanup $1
endif
Here are some other examples.
# Remove any empty directories.
if ( -d $file && -z $file ) then
   rmdir $file

# Give execute access to text files with a .csh extension.
else if ( $file:e == "csh" && -f $file ) then
   chmod +x $file
endif
#### 12.5 Creating text files
A frequent feature of scripts is redirecting the output from tasks to a text file. For instance,
hdstrace $file:r > $file:r.lis
Command ./doubleword reads its standard input from the file mynovel.txt. The <<word obtains the input data from the script file itself until there is a line beginning word. You may also include variables and commands to execute, as the $, \, and ` ` retain their special meaning. If you want these characters to be treated literally, say to prevent substitution, insert a \ before the delimiting word. The command myprog reads from the script, substituting the value of variable nstars in the second line, and the number of lines in file brightnesses.txt in the third line. The technical term for such files is here documents.

#### 12.9 Discarding text output

The output from some routines is often unwanted in scripts. In these cases redirect the standard output to a null file.

correlate in1=frame1 in2=frame2 out=framec > /dev/null

Here the text output from the task correlate is disposed of to the /dev/null file. Messages from Starlink tasks and usually Fortran channel 6 write to standard output.

#### 12.10 Obtaining dataset attributes

When writing a data-processing pipeline connecting several applications you will often need to know some attribute of the data file, such as its number of dimensions, its shape, whether or not it may contain bad pixels, a variance array or a specified extension. The way to access these data is with the ndftrace and parget commands from KAPPA (SUN/95). ndftrace inquires the data, and parget communicates the information to a shell variable.

##### 12.10.1 Obtaining dataset shape

Suppose that you want to process all the two-dimensional NDFs in a directory. You would write something like this in your script.

foreach file (*.sdf)
   ndftrace $file:r > /dev/null
   set nodims = `parget ndim ndftrace`
   if ( $nodims == 2 ) then
      <perform the processing of the two-dimensional datasets>
   endif
end

Note although called ndftrace, this function can determine the properties of foreign data formats through the automatic conversion system (SUN/55, SSN/20). Of course, other formats do not have all the facilities of an NDF. If you want the dimensions of a FITS file supplied as the first argument you need this ingredient.

ndftrace $1 > /dev/null
set dims = `parget dims ndftrace`
Then dims[i] will contain the size of the i-th dimension. Similarly
ndftrace $1 > /dev/null
set lbnd = `parget lbound ndftrace`
set ubnd = `parget ubound`

will assign the pixel bounds to arrays lbnd and ubnd.

##### 12.10.2 Available attributes

Below is a complete list of the results parameters from ndftrace. If the parameter is an array, it will have one element per dimension of the data array (given by parameter NDIM); except for EXTNAM and EXTTYPE where there is one element per extension (given by parameter NEXTN). Several of the axis parameters are only set if the ndftrace input keyword fullaxis is set (not the default). To obtain, say, the data type of the axis centres of the current dataset, the code would look like this.

ndftrace fullaxis accept > /dev/null
set axtype = `parget atype ndftrace`

| Name | Array? | Meaning |
|------|--------|---------|
| AEND | Yes | The axis upper extents of the NDF. For non-monotonic axes, zero is used. See parameter AMONO. This is not assigned if AXIS is FALSE. |
| AFORM | Yes | The storage forms of the axis centres of the NDF. This is only written when parameter FULLAXIS is TRUE and AXIS is TRUE. |
| ALABEL | Yes | The axis labels of the NDF. This is not assigned if AXIS is FALSE. |
| AMONO | Yes | These are TRUE when the axis centres are monotonic, and FALSE otherwise. This is not assigned if AXIS is FALSE. |
| ANORM | Yes | The axis normalisation flags of the NDF. This is only written when FULLAXIS is TRUE and AXIS is TRUE. |
| ASTART | Yes | The axis lower extents of the NDF. For non-monotonic axes, zero is used. See parameter AMONO. This is not assigned if AXIS is FALSE. |
| ATYPE | Yes | The data types of the axis centres of the NDF. This is only written when FULLAXIS is TRUE and AXIS is TRUE. |
| AUNITS | Yes | The axis units of the NDF. This is not assigned if AXIS is FALSE. |
| AVARIANCE | Yes | Whether or not there are axis variance arrays present in the NDF. This is only written when FULLAXIS is TRUE and AXIS is TRUE. |
| AXIS | | Whether or not the NDF has an axis system. |
| BAD | | If TRUE, the NDF's data array may contain bad values. |
| BADBITS | | The BADBITS mask. This is only valid when QUALITY is TRUE. |
| CURRENT | | The integer Frame index of the current co-ordinate Frame in the WCS component. |
| DIMS | Yes | The dimensions of the NDF. |
| EXTNAME | Yes | The names of the extensions in the NDF. It is only written when NEXTN is positive. |
| EXTTYPE | Yes | The types of the extensions in the NDF. Their order corresponds to the names in EXTNAME. It is only written when NEXTN is positive. |
| FDIM | Yes | The numbers of axes in each co-ordinate Frame stored in the WCS component of the NDF. The elements in this parameter correspond to those in FDOMAIN and FTITLE. The number of elements in each of these parameters is given by NFRAME. |
| FDOMAIN | Yes | The domain of each co-ordinate Frame stored in the WCS component of the NDF. The elements in this parameter correspond to those in FDIM and FTITLE. The number of elements in each of these parameters is given by NFRAME. |
| FLABEL | Yes | The axis labels from the current WCS Frame of the NDF. |
| FLBND | Yes | The lower bounds of the bounding box enclosing the NDF in the current WCS Frame. The number of elements in this parameter is equal to the number of axes in the current WCS Frame (see FDIM). |
| FORM | | The storage form of the NDF's data array. |
| FTITLE | Yes | The title of each co-ordinate Frame stored in the WCS component of the NDF. The elements in this parameter correspond to those in FDOMAIN and FDIM. The number of elements in each of these parameters is given by NFRAME. |
| FUBND | Yes | The upper bounds of the bounding box enclosing the NDF in the current WCS Frame. The number of elements in this parameter is equal to the number of axes in the current WCS Frame (see FDIM). |
| FUNIT | Yes | The axis units from the current WCS Frame of the NDF. |
| HISTORY | | Whether or not the NDF contains HISTORY records. |
| LABEL | | The label of the NDF. |
| LBOUND | Yes | The lower bounds of the NDF. |
| NDIM | | The number of dimensions of the NDF. |
| NEXTN | | The number of extensions in the NDF. |
| NFRAME | | The number of WCS domains described by FDIM, FDOMAIN and FTITLE. Set to zero if WCS is FALSE. |
| QUALITY | | Whether or not the NDF contains a QUALITY array. |
| TITLE | | The title of the NDF. |
| TYPE | | The data type of the NDF's data array. |
| UBOUND | Yes | The upper bounds of the NDF. |
| UNITS | | The units of the NDF. |
| VARIANCE | | Whether or not the NDF contains a VARIANCE array. |
| WCS | | Whether or not the NDF has any WCS co-ordinate Frames, over and above the default GRID, PIXEL and AXIS Frames. |
| WIDTH | Yes | Whether or not there are axis width arrays present in the NDF. This is only written when FULLAXIS is TRUE and AXIS is TRUE. |

##### 12.10.3 Does the dataset have variance/quality/axis/history information?

Suppose you have an application which demands that variance information be present, say for optimal extraction of spectra. You could test for the existence of a variance array in your FITS file called dataset.fit like this.

# Enable automatic conversion.
convert       # Needs to be invoked only once per process
set file = dataset.fit
ndftrace $file > /dev/null
set varpres = `parget variance ndftrace`
if ( $varpres == "FALSE" ) then
   echo "File $file does not contain variance information"
else
<process the dataset>
endif
The logical results parameters have values TRUE or FALSE. You merely substitute another component such as quality or axis in the parget command to test for the presence of these components.
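For instance, a minimal variant of the fragment above that checks for a QUALITY array instead (the same pattern; only the component name passed to parget changes):

ndftrace $file > /dev/null
set qualpres = `parget quality ndftrace`
if ( $qualpres == "TRUE" ) then
   <process the quality information>
endif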
##### 12.10.4 Testing for bad pixels
Imagine you have an application which could not process bad pixels. You could test whether a dataset might contain bad pixels, and run some pre-processing task to remove them first. This attribute could be inquired via ndftrace. If you need to know whether or not any were actually present, you should run setbad from KAPPA (SUN/95) first.
setbad $file
ndftrace $file > /dev/null
set badpix = `parget bad ndftrace`
if ( $badpix == "TRUE" ) then
   <remove the bad pixels>
else
goto tidy
endif
<perform data processing>
tidy:
<tidy any temporary files, windows etc.>
exit
Here we also introduce the goto command—yes there really is one. It is usually reserved for exiting (goto exit), or, as here, moving to a named label. This lets us skip over some code, and move directly to the closedown tidying operations. Notice the colon terminating the label itself, and that it is absent from the goto command.
##### 12.10.5 Testing for a spectral dataset
One recipe for testing for a spectrum is to look at the axis labels (whereas a modern approach might use WCS information). Here is a longer example showing how this might be implemented. Suppose the name of the dataset being probed is stored in variable ndf.
# Get the full attributes.
ndftrace $ndf fullaxis accept > /dev/null

# Assign the axis labels and number of dimensions to variables.
set axlabel = `parget alabel ndftrace`
set nodims = `parget ndim`

# Exit the script when there are too many dimensions to handle.
if ( $nodims > 2 ) then
   echo Cannot process a $nodims-dimensional dataset.
   goto exit
endif

# Loop for each dimension or until a spectral axis is detected.
set i = 1
set spectrum = FALSE
while ( $i <= $nodims && $spectrum == FALSE )

# For simplicity the definition of a spectral axis is that
# the axis label is one of a list of acceptable values. This
# test could be made more sophisticated. The toupper converts the
# label to uppercase to simplify the comparison. Note the \ line
# continuation.
   set uaxlabel = `echo $axlabel[$i] | awk '{print toupper($0)}'`
   if ( $uaxlabel == "WAVELENGTH" || $uaxlabel == "FREQUENCY" || \
        $uaxlabel == "VELOCITY" ) then

# Record that the axis is found and which dimension it is.
      set spectrum = TRUE
      set spaxis = $i
endif
@ i++
end
# Process the spectrum.
if ( $spectrum == TRUE ) then

# Rotate the dataset to make the spectral axis along the first
# dimension.
   if ( $spaxis == 2 ) then
      irot90 $ndf $ndf"_rot" accept
# Fit the continuum.
      sfit spectrum=$ndf"_rot" order=2 output=$ndf"_fit" accept
else
      sfit spectrum=$ndf order=2 output=$ndf"_fit" accept
   endif
endif
#### 12.11 Accessing FITS headers

Associated with FITS files and many NDFs is header information stored in 80-character 'cards'. It is possible to use these ancillary data in your script. Each non-comment header has a keyword, by which you can reference it; a value; and usually a comment. KAPPA (SUN/95) from V0.10 has a few commands for processing FITS header information, described in the following sections.
##### 12.11.1 Testing for the existence of a FITS header value
Suppose that you wanted to determine whether an NDF called image123 contains an AIRMASS keyword in its FITS headers (stored in the FITS extension).
set airpres = `fitsexist image123 airmass`
if ( $airpres == "TRUE" ) then
   <access AIRMASS FITS header>
endif

Variable airpres would be assigned "TRUE" when the AIRMASS card was present, and "FALSE" otherwise. Remember that the ` ` quotes cause the enclosed command to be executed.

##### 12.11.2 Reading a FITS header value

Once we know the named header exists, we can then assign its value to a shell variable.

set airpres = `fitsexist image123 airmass`
if ( $airpres == "TRUE" ) then
   set airmass = `fitsval image123 airmass`
   echo "The airmass for image123 is $airmass."
endif

##### 12.11.3 Writing or modifying a FITS header value

We can also write new headers at specified locations (the default being just before the END card), or revise the value and/or comment of existing headers. As we know the header AIRMASS exists in image123, the following revises the value and comment of the AIRMASS header. It also writes a new header called FILTER immediately preceding the AIRMASS card, assigning it value B and comment Waveband.

fitswrite image123 airmass value=1.062 comment=\"Corrected airmass\"
fitswrite image123 filter position=airmass value=B comment=Waveband

As we want the metacharacters " to be treated literally, each is preceded by a backslash.

#### 12.12 Accessing other objects

You can manipulate data objects in HDS files, such as components of an NDF's extension. There are several Starlink applications for this purpose including the FIGARO commands copobj, creobj, delobj, renobj, setobj; and the KAPPA (SUN/95) commands setext and erase. For example, if you wanted to obtain the value of the EPOCH object from an extension called IRAS_ASTROMETRY in an NDF called lmc, you could do it like this.

set year = `setext lmc xname=iras_astrometry option=get \
            cname=epoch noloop`

The noloop prevents prompting for another extension-editing operation. The single backslash is the line continuation.

#### 12.13 Defining NDF sections with variables

If you want to define a subset or superset of a dataset, most Starlink applications recognise NDF sections (see SUN/95's chapter called "NDF Sections") appended after the name. A naïve approach might expect the following to work

set lbnd = 50
set ubnd = 120
linplot $KAPPA_DIR/spectrum"($lbnd:$ubnd)"
display $KAPPA_DIR/comwest"($lbnd:$ubnd,$lbnd:$ubnd)"

however, they generate the "Bad : modifier in $ ($)." error. That's because the shell is stupidly looking for a filename modifier :$ (see Section 12.1).
Instead here are some recipes that work.
set lbnd = 50
set ubnd = 120
set lrange = "101:150"
linplot $KAPPA_DIR/spectrum"($lbnd":"$ubnd)"
stats abc"(-20:99,~$ubnd)"
display $KAPPA_DIR/comwest"($lbnd":"$ubnd",$lbnd":"$ubnd")"
histogram hale-bopp.fit'('$lbnd':'$ubnd','$lbnd':'$ubnd')'
ndfcopy $file1.imh"("$lbnd":"$ubnd","$lrange")" $work"1"
splot hd102456'('$ubnd~60')'
An easy-to-remember formula is to enclose the parentheses and colons in quotes. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 7, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48521968722343445, "perplexity": 620.0372567954706}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887849.3/warc/CC-MAIN-20180119085553-20180119105553-00358.warc.gz"} |
https://geoenergymath.com/2014/07/ | # Correlation of time series
The Southern Oscillation embedded within the ENSO behavior is what is called a dipole [1], or in other vernacular, a standing wave. Whenever the atmospheric pressure at Tahiti is high, the pressure at Darwin is low, and vice-versa. Of course the standing wave is not perfect and far from being a classic sine wave.
To characterize the quality of the dipole, we can use a measure such as a correlation coefficient applied to the two time series. Flipping the sign of Tahiti and applying a correlation coefficient to SOI, we get Figure 1 below:
Fig 1 : Anti-correlation between Tahiti and Darwin. The sign of Tahiti is reversed to better show the correlation. The correlation coefficient is calculated to be 0.55 or 55/100.
Note that this correlation coefficient is “only” 0.55 when comparing the two time-series, yet the two sets of data are clearly aligned. What this tells us is that other factors, such as noise in the measurements, can easily drop correlated waveforms well below unity.
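For reference, the measure being quoted is the standard Pearson correlation coefficient of the two series $x_t$ (Tahiti, sign flipped) and $y_t$ (Darwin):

$$ r = \frac{\sum_t (x_t - \bar{x})(y_t - \bar{y})}{\sqrt{\sum_t (x_t - \bar{x})^2}\,\sqrt{\sum_t (y_t - \bar{y})^2}} $$

Uncorrelated measurement noise inflates the denominator without adding to the numerator, which is why noise alone can pull $r$ for two genuinely locked waveforms well below 1.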
This is what we have to keep in mind when evaluating correlations of data with models as we can see in the following examples.
# Sloshing Animation
The models of ENSO for SOI and proxy records apply sloshing dynamics to describe the quasi-periodic behavior; see J. B. Frandsen, "Sloshing motions in excited tanks," Journal of Computational Physics, vol. 196, no. 1, pp. 53–87, 2004.
The following GIF animations are supplementary material from S. S. Kolukula and P. Chellapandi, “Finite Element Simulation of Dynamic Stability of plane free-surface of a liquid under vertical excitation.”
Detuning Effect.gif shows the animation of sloshing fluid for the fourth test case, with frequency ratio Ω3 = 0.5 and forcing amplitude kV = 0.2: test case 4 as shown in Figure 4. This case corresponds to instability in the second sloshing mode lying in the first instability region. Figure 8(b) shows the free-surface elevation and Figure 9 shows the moving mesh generated in this case.
Dynamic Instability.gif shows the animation of sloshing fluid for the second test case, which lies in the unstable region, with frequency ratio Ω1 = 0.5 and forcing amplitude kV = 0.3: test case 2 as shown in Figure 4. Figure 6 shows the free-surface elevation and Figure 7 shows the moving mesh generated in this case.
http://www.gamedev.net/page/resources/_/technical/artificial-intelligence/pathfinding-with-the-c4-game-engine-r2420?st=210 | • Create Account
Pathfinding With the C4 Game Engine
By Jon Watte | Published Oct 09 2007 09:37 PM in Artificial Intelligence

Introduction

The C4 game engine has to be one of the best kept secrets in the independent gaming community. It has been in development by the principal developer, Eric Lengyel, for several years. It supports many advanced features such as real-time dynamic shadows for any number of lights, fully calculated specular or micro-facet reflections on any surface, and comes with a robust editor to import and prepare geometry for the game engine. It also implements a robust portal culling system, including support in the tools for building portaled level geometry. Once you buy the engine for $200, you get not only free updates for the life of the engine, but also the full source code. This is quite comparable to the offering that Garage Games
has for the Torque Game Engine, but the comparison stops there. Where Torque needs long lighting compile phases, C4 just runs once you've saved your level. Where Torque code resembles a tentacle
monster, with tendrils reaching from any place to any other place, the C4 code is very modular and easy to find your way around. Where Torque exporters for packages such as 3ds Max are notoriously
finicky and crashing, C4 uses COLLADA for importing any geometry from anywhere. And where Garage Games never answers questions, Eric provides top-notch service and bug fixing through the C4 forums.
Perhaps the only reason Eric can do that is that C4 doesn't have as big a following, so it might be in my best interest to not turn you on to C4 :-)
C4 does miss some things that are available in some other game engines, though. It is currently mostly an indoors engine (with outdoors being next on the road map). The physics is
quite basic -- spheres and rays collided against the ground, with simple Euler-based physics. And, last, there is no real scripting language in C4. There are triggers, and controllers, and a visual
macro package that can run in response to triggers, but any "real" coding has to be done in C++.
Another thing missing in C4 (and missing in TGE, too), is support for navigation and path finding for NPCs. I spent the last few evenings trying to work something out, mainly for the
challenge, and the chance to experience C4 in a little more depth. This article presents my findings, which includes source code and some advice about how to use it with the C4 engine.
While the included source code doesn't expose any C4 code (doing so would be against the C4 license agreement), it will use the general framework of C4 in its Locator markers and
Controller node attachments. However, you can use these same techniques implemented in this code, in some game of your own, as long as you have the same features available to you: placing markers in
the world, finding these markers, and testing whether you can move through the world along a given direction or not.
Requirements
The requirement for this navigation system is to make it possible to plan a path, for a player character or non-player character, from point A to point B within the game world, without
too much burden on the CPU at runtime. While there are some systems that can do this entirely automatically, the system I present here is implemented based on hints given by the level artist. This
has two benefits: First, it allows the artist to express things he knows about the navigability of a level, that an automatic algorithm might not. Second, it's a lot simpler to implement!
The implementation will construct a graph of "navpoints," where each navpoint is connected to some other navpoints in a directed graph approach. An edge in this graph from navpoint A
to navpoint B means, that if an NPC starts at point A, and aims at point B, and walks forward, he will get to point B (unless some other movable obstacle is in the way).
There is an additional gnarl, in that the NPC will not be right on one of these navpoints when it needs navigation services, and the final destination (say, the player, or some in-game
goal) may not be right at a navpoint either. Thus, we need to be able to get to the closest navpoint from where we currently are, and we need to be able to get from the endpoint of the navigation
path to the destination location.
So, to put it all in one place (a sketch of a matching interface follows the list):
• The artist or level designer places navpoints in the level editor, to indicate generally navigable areas.
• These navpoints are discovered during level loading, and the connectivity between them is automatically calculated.
• Navpoint system provides a function "find the navpoint closest to point P, from which you can actually get to P."
• Navpoint system provides a function "find the navpoint closest to point Q, such that you can get from Q to that navpoint."
• Navpoint system provides a function "find a path along the navpoint mesh that travels from navpoint A to navpoint B."
Finding the Navpoints
In the C4 editor, you can place "Locator" markers. You do this by opening the Marker page, selecting the Locator tool, and clicking in the world editor. Once a Locator is placed, you
can move it around with the node movement tool, and you can do Get Info on the marker to set the Locator Type, a four-letter code that is used in the game to understand what kind of marker it is.
In the system I implemented, the artist will place Locator markers and make them of Locator type "navi". The system will then find connections between markers on level load time. The
component that does this connection finding is a Controller. Controllers are one of the main ways of getting custom code into the C4 scene graph. Any node can have zero or one controllers assigned to
it, and the controller can expose Settings which are edited on the node in the World Editor.
I wanted to be able to create different kinds of nav meshes, say for wheeled vehicles versus kangaroos, so I decided that each NavmeshController instance will only consider "navi"
Locators that are direct children of the controller's node. Thus, the artist will place all the locators, select them all, and Group them together. You will then add the "Navmesh" controller to that
Group node, and set pathfinding parameters (such as jump height) on that controller.
Thus, in the NavmeshController::Preprocess() function, which is called when the level first starts up, the Navmesh finds all its children that are Locators of type "navi," and then
proceeds to test connectivity between each pair of markers. To prevent this from taking a very long time (N-squared ray casts), the artist can set a maximum distance (radius plus vertical
displacement) in the controller settings, and any pair of locators that are further away than this will not be considered as a connected pair. The connected neighbors for each navpoint are then
stored in an array. I chose one global array with a separate index table for cacheability, but I think the code would have been cleaner if I just stored one small array for each navpoint. This
connectivity array is not stored in the level file; instead it's calculated each time the level starts up. For my small test levels, that operation is so fast you can't measure it; for a really large
level, it might make sense to allow saving the calculated connectivity.
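A stripped-down sketch of that start-up pass is below. RayIsClear() is a stand-in for the engine's raycast, the containers are plain std::vector, and z is treated as the vertical axis; none of this is the engine's real code.

#include <cmath>
#include <vector>

struct Point3D { float x, y, z; };

// Stand-in for the engine's world raycast; assumed to return true when
// nothing solid lies between a and b.
static bool RayIsClear(const Point3D &a, const Point3D &b) { return true; }

struct Navpoint {
    Point3D pos;
    std::vector<int> neighbors;   // indices of navpoints reachable from here
};

// Directed edges: i -> j means "start at i, aim at j, walk forward".
void BuildConnectivity(std::vector<Navpoint> &pts,
                       float maxRadius, float maxVertical)
{
    for (size_t i = 0; i < pts.size(); ++i) {
        for (size_t j = 0; j < pts.size(); ++j) {
            if (i == j) continue;
            float dx = pts[j].pos.x - pts[i].pos.x;
            float dy = pts[j].pos.y - pts[i].pos.y;
            float dz = pts[j].pos.z - pts[i].pos.z;
            // Artist-configured limits prune most of the N-squared raycasts.
            if (std::sqrt(dx * dx + dy * dy) > maxRadius) continue;
            if (std::fabs(dz) > maxVertical) continue;
            if (RayIsClear(pts[i].pos, pts[j].pos))
                pts[i].neighbors.push_back(static_cast<int>(j));
        }
    }
}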
Finding Navigability
In my first implementation, I just cast a ray from point A to point B to see whether there was anything in the way. This was great at making NPCs not walk through walls, but they would
gladly throw themselves into a lava canyon that was between two separate ledges, as long as the raycast from a navpoint on ledge A to ledge B was unimpeded.
To improve this behavior, I first added the capability for an artist to mark, for a given navpoint, which other navpoints should NOT be considered reachable, even if the raycast says
it is. While this allows problem cases to be manually fixed, it turned out to be a cumbersome process, and because of that, very fragile in the face of change to the level geometry.
To even be able to debug these problems, I added a command to the C4 command console which shows and hides navmeshes in the world. To be able to tell different meshes apart (meshes
that come from different NavmeshController instances, and thus different groups of "navi" locators), I added a mesh display color property to the controller. I also added some functions to re-build
the navigation mesh at runtime, and to list the general status of the navigation mesh system.
Other problems with the generated mesh included very complex interconnectivity, where a navpoint in the corner of a room might be connected to every other navpoint in that room. While
technically correct, this creates meshes that look bad (but might play very well). To work around this issue, I added a feature in the calculation where connectivity between two navpoints will not be
considered if other connected navpoints are "closer" and in the "same general direction."
At this point, with enough manual tweaking, and setting the global "radius" and "vertical" values according to the level, something playable could be created.
Refining Navigability
Throwing yourself in a lava moat does not count as "intelligent" behavior for an NPC. While working around it with explicit node pair exclusion might work for a tortured artist, it
won't work when trying to solve the problem of moving from a random point P to the closest navmesh point. Thus, a better way of finding navigability over some area of the level must be found.
I decided to brute-force it. Once I know that there's not a wall between point A and point B, I walk the extent of the ray and sample the height difference along the path. If there is
a drop, it doesn't matter, as it's OK to jump off ledges. As long as the drop is not too steep! I added a parameter to the controller for what's considered too steep. Additionally, if a single step
up is too steep, the NPC won't be able to climb or jump up, so I added another parameter to the controller for what's considered too steep. Last, I added a third parameter to determine how far to
step in each iteration of finding the height profile. This will let artists make navmesh compilation a little faster on levels that don't have complex ramps, moats or other height complexity, while
allowing for a very fine-grained navigability determination on harder levels.
The nice part of it is that this function can be used both when calculating navigability during start-up, and when trying to find the closest navmesh point that you can actually get
to.
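In sketch form, the brute-force walk reads something like the following. GroundHeightAt() is a stand-in for whatever ground probe is available, and the three parameters mirror the controller settings described above; this is not the shipped code.

#include <cmath>

// Stand-in ground probe: height of the walkable surface at (x, y).
static float GroundHeightAt(float x, float y) { return 0.0f; }

// True if an NPC aiming from (ax, ay) at (bx, by) never meets a drop
// steeper than maxDrop or a single step up higher than maxClimb.
bool CanWalk(float ax, float ay, float bx, float by,
             float maxDrop, float maxClimb, float stepSize)
{
    float dx = bx - ax, dy = by - ay;
    float len = std::sqrt(dx * dx + dy * dy);
    int steps = static_cast<int>(len / stepSize) + 1;
    float prev = GroundHeightAt(ax, ay);
    for (int i = 1; i <= steps; ++i) {
        float t = static_cast<float>(i) / steps;
        float h = GroundHeightAt(ax + dx * t, ay + dy * t);
        if (prev - h > maxDrop)  return false;   // too steep a fall (lava moat)
        if (h - prev > maxClimb) return false;   // too high a single step up
        prev = h;
    }
    return true;
}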
Using the Code
Just add the NavmeshController.cpp and NavmeshController.h files to your "Game" DLL MSVC project and build (this is typically "Game.dll" or "SimpleChar.dll" or "Skeleton.dll" depending
on how you started your game).
In your World subclass Render() function, after calling World::Render(), you might want to call MaybeRenderNavmeshes(). This will draw the navmeshes in wireframe, if the console
command to turn them on has been run. ("navmesh show")
If the level designer has built one or more nav meshes, they will be automatically created when the level is loaded. To actually get ahold of one, by name of the group node containing
the controller, call GetNavmeshController("name"). If you pass NULL for name, the first navmesh (in scene graph unpacking order) will be returned; this is mostly useful when you have only one
navmesh.
To plan a path from point A to point B, call NavmeshPathCreate(begin, end). This will do initial path planning to get to the navmesh, and will then navigate through the mesh to the
desired destination. Each step for your navigating character, you'll want to call Move(pos, &dir), passing in your current position, and getting out a desired direction to move in. Once you're
done with the navigation (either at the destination, or choosing a different goal), call Dispose() on the returned NavmeshPath.
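Put together, the per-NPC flow looks roughly like this comment-level sketch; check the exact signatures against the downloaded source.

// NavmeshController *mesh = GetNavmeshController(NULL);   // first mesh in the level
// NavmeshPath *path = mesh->NavmeshPathCreate(npcPosition, goalPosition);
//
// each simulation step:
//     Vector3D dir;
//     path->Move(currentNpcPosition, &dir);
//     // walk the NPC a little way along dir
//
// path->Dispose();   // at the destination, or when choosing a new goal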
The NavmeshControllers will be deleted when the level is unloaded, so you do not need to separately manage their lifetime.
There are a few implementation details in the code worth mentioning, though.
First, the specific use of templates is a pattern that I use quite often, but might confuse you if you haven't seen it before. When you create a large number of objects of the same
general kind, such as settings controls in a GUI, and also wanting to marshal/demarshal data that those same controls act on (or perhaps saving/loading to file, etc), it's nice if you don't have to
update a zillion places each time you add a new member variable. To solve that problem, I use a single visitor function, that visits all the member variables, passing in various information about
each member as it's being visited. While the file saving code might not care what the title of the member should be when displayed in a GUI, it also doesn't really hurt. Thus, for each operation on
the members, a separate actor is created and passed to the visitor function. Most of those actors can actually be re-used between object implementations, so while it's a lot of code to write a single
float to a file, or create a single text edit control, it saves a lot of code in the long run. Given that I added different control parameters while moving along with the implementation, I believe
this pattern paid for itself during this development effort alone. Any new controller I write for C4 will be a pure time win -- not to mention the bugs I avoid by doing each thing only in one
place!
Next, the implementation uses no private or protected members. This is in general a pretty bad idea, if you want to expose the implementation. However, the navmesh implementation is in
a .cpp file, so no client of the navmesh can poke at those non-private members, so in this case, it's actually a benefit. Some of you may know of this as "interface programming." Unfortunately, the
rest of C4 does not use interfaces; instead it uses concrete classes with sometimes deep inheritance, and heavy use of private members. The draw-back, as I'm sure you know, is that any change to an
implementation detail (such as the type of a member variable) will cause a re-compilation of every client of that class, even if the functional interface to the class has not changed.
The A-star function (called NavmeshController::AStarSearchIntoWalkPath) is all one big function, but I believe it's OK to make it one function, as it keeps all the concepts in one
place for this algorithm. I will not explain A-star in depth, as there are many tutorials on that on the web, except to say that this implementation shows A-star on a generalized directed graph.
Finally, the versioning mechanism used in the file I/O is something you might want to pay attention to. It allows you to add new members to an existing data type, and have them persist
in newly saved files, while still correctly opening older file versions. In fact, with only a little bit of code change, you could even support writing older versions of files!
The 'navmesh' console command
In the console, once a navmesh is instantiated, you can use the navmesh command to get some useful feedback about the navmeshes and how they do path finding. Here are some example
commands.
navmesh show
Turn on display of the navmesh for each loaded navmesh controller. The display will use the color configured on each controller for the respective meshes. The color will be bright where
connectivity from one node to another starts, and will be black at the arriving node, to help in understanding one-way connectivity.
navmesh hide foo2
Turn off display of the navmesh for any controller that is attached to a group node that has a name that contains "foo2" as a substring.
navmesh path 4,1,2 -5,-3,2
Using the first navmesh found in the level, find a path from the location (4,1,2) to the location (-5,-3,2). Display that path in white. If no path can be found, print an error to the
console.
navmesh path clear
Clear all the displayed paths. Currently, this flavor of the command does not take an optional navmesh name substring identifier -- it always clears all navmesh paths.
navmesh
Display the name and status of all loaded navmeshes.
Final notes
I release the pathfinding code under the MIT license, which basically means that you're free to use it for any purpose, as long as my copyright is retained, and as long as you
indemnify and hold me harmless for any damage resulting from such use, because I claim no merchantability or fitness for a particular purpose of the code.
I would like to thank Eric Lengyel for writing the C4 engine. I would like to thank the community at the C4 message boards for feedback on the initial implementation and this article.
I would like to apologize to my wife, as I've been all grumpy sitting up until the middle of the night writing this code and article. And don't forget to download the code! (http://downloads.gamedev.net/features/programming/c4pathing/pathfinding-070824.zip)
http://libros.duhnnae.com/2017/sep/150524545559-Trispectrum-estimator-in-equilateral-type-non-Gaussian-models-High-Energy-Physics-Theory.php | # Trispectrum estimator in equilateral type non-Gaussian models - High Energy Physics - Theory
Abstract: We investigate an estimator to measure the primordial trispectrum in equilateral type non-Gaussian models such as k-inflation, single field DBI inflation and multi-field DBI inflation models from Cosmic Microwave Background (CMB) anisotropies. The shape of the trispectrum whose amplitude is not constrained by the bispectrum in the context of effective theory of inflation and k-inflation is known to admit a separable form of the estimator for CMB anisotropies. We show that this shape is $87\%$ correlated with the full quantum trispectrum in single field DBI inflation, while it is $33\%$ correlated with the one in multi-field DBI inflation when the curvature perturbation originates purely from the entropic contribution. This suggests that $g_{\rm NL}^{\rm equil}$, the amplitude of this particular shape, provides a reasonable measure of the non-Gaussianity from the trispectrum in equilateral non-Gaussian models. We relate model parameters such as the sound speed, $c_s$, and the transfer coefficient from entropy perturbations to the curvature perturbation, $T_{\mathcal{R}S}$, with $g_{\rm NL}^{\rm equil}$, which enables us to constrain model parameters in these models once $g_{\rm NL}^{\rm equil}$ is measured in WMAP and Planck.
Authors: Shuntaro Mizuno, Kazuya Koyama
Source: https://arxiv.org/
https://www.kitronik.co.uk/blog/how-a-thermistor-works | ### My Cart:
0 item(s) - \$0.00
You have no items in your shopping cart.
# How A Thermistor Works
## Introduction
A thermistor is a component that has a resistance that changes with temperature. There are two types of thermistor. Those with a resistance that increase with temperature (Positive Temperature Coefficient – PTC) and those with a resistance that falls with temperature (Negative Temperature Coefficient – NTC).
## Temperature coefficient
Most thermistors have a resistance that falls as the temperature increases (NTC).
The amount by which the resistance decreases as the temperature increases is not constant; it varies with temperature. A formula can be used to calculate the resistance of the thermistor at any given temperature. Normally these are calculated for you, and the information can be found in the device's datasheet.
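The most common such formula is the beta-parameter equation, R(T) = R25 · exp(B · (1/T − 1/T25)), where R25 and the beta constant B come from the datasheet. A minimal Python sketch, using a hypothetical 10 kΩ NTC with B = 3950 K (example values, not a specific part):

import math

def ntc_resistance(temp_c, r25=10000.0, beta=3950.0):
    # Beta-parameter model of an NTC thermistor.
    # r25 (resistance at 25 degC) and beta are illustrative datasheet values.
    t = temp_c + 273.15        # convert to kelvin
    t25 = 25.0 + 273.15
    return r25 * math.exp(beta * (1.0 / t - 1.0 / t25))

for temp in (0, 25, 50, 100):
    print(temp, "degC ->", round(ntc_resistance(temp)), "ohm")

As expected for an NTC, the computed resistance rises as the temperature falls (about 33 kΩ at 0 °C down to well under 1 kΩ at 100 °C for these example values).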
## Applications
There are many applications for a thermistor. Three of the most popular are listed below.
### Temperature sensing
The most obvious application for a thermistor is to measure temperature. They are used to do this in a wide range of products such as thermostats.
### In rush current limiting
In this application the thermistor is used to initially oppose the flow of current (by having a high resistance) into a circuit. Then, as the thermistor warms up (due to the flow of electricity through the device), its resistance drops, letting current flow more easily.
### Circuit protection
In this application the thermistor is used to protect a circuit by limiting the amount of current that can flow into it. If too much current starts to flow into a circuit through the thermistor, this causes the thermistor to warm up. This in turn increases the resistance of the thermistor (a PTC type is used here), reducing the current that can flow into the circuit.
### Example
The circuit shown below is a simple way of constructing a circuit that turns on when it gets hot. As the temperature rises, the resistance of the thermistor falls relative to the fixed resistor, which causes the transistor to turn on. The value of the fixed resistor will depend on the thermistor used, the transistor used and the supply voltage.
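To choose that fixed resistor, it helps to tabulate the divider output over temperature. A small sketch continuing the beta-model function above; the 5 V supply, 1 kΩ fixed resistor and 0.7 V turn-on threshold are illustrative values, not from a specific design:

def divider_out(vcc, r_fixed, r_therm):
    # Voltage across the fixed resistor with the NTC as the top leg;
    # it rises as the NTC warms up and its resistance falls.
    return vcc * r_fixed / (r_fixed + r_therm)

VCC, R_FIXED = 5.0, 1000.0
for temp in (25, 50, 75, 100):
    v = divider_out(VCC, R_FIXED, ntc_resistance(temp))
    print(temp, "degC:", round(v, 2), "V ->", "ON" if v > 0.7 else "off")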
https://economics.stackexchange.com/questions/25043/augmented-gravity-model | # Augmented Gravity Model
I am currently using the gravity model for my dissertation on migration flows. Do gravity models need to be augmented by dummy variables only, or can other explanatory variables (such as the unemployment rate in the destination/origin country) be included as well?
All your feedback is greatly appreciated.
\begin{align*} M_{ij} = &\beta_0 \times log(g) + \beta_1 \times log(P_i) + \beta_2 \times log(P_j) + \beta_3 \times log(X_i) + \\ &\beta_4 \times log(X_j) + \beta_5 \times log(D_{ij}) + \varepsilon_{ij} \end{align*}
where $$X_i$$ is a vector of explanatory variables describing different features of the origin (i.e. push factors) and $$X_j$$ is a vector of explanatory variables describing features of the destination (i.e. pull factors). Push factors are those characteristics of the origin place that encourage (discourage) out-migration (in-migration), such as low incomes, high unemployment, high prices, in general few opportunities for development. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 3, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9995996356010437, "perplexity": 3372.7843268463694}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371612531.68/warc/CC-MAIN-20200406004220-20200406034720-00487.warc.gz"} |
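As a sanity check that such an augmented specification is estimable, here is a minimal OLS sketch on synthetic data; the variable names (unemployment rates U_i and U_j as example push/pull regressors) are hypothetical stand-ins for your own series:

import numpy as np

rng = np.random.default_rng(0)
n = 200
P_i = rng.uniform(1e5, 1e7, n)     # origin population
P_j = rng.uniform(1e5, 1e7, n)     # destination population
U_i = rng.uniform(2.0, 15.0, n)    # origin unemployment (push)
U_j = rng.uniform(2.0, 15.0, n)    # destination unemployment (pull)
D = rng.uniform(50.0, 5000.0, n)   # bilateral distance

# Synthetic flows with known elasticities, plus log-normal noise
M = 1e-4 * P_i * P_j / D * U_i**0.5 * U_j**-0.3 * rng.lognormal(0, 0.3, n)

# Log-linearized augmented gravity equation estimated by OLS
X = np.column_stack([np.ones(n), np.log(P_i), np.log(P_j),
                     np.log(U_i), np.log(U_j), np.log(D)])
beta, *_ = np.linalg.lstsq(X, np.log(M), rcond=None)
for name, b in zip(["const", "lnP_i", "lnP_j", "lnU_i", "lnU_j", "lnD"], beta):
    print(name, round(b, 3))

The recovered coefficients are close to the true elasticities (1, 1, 0.5, -0.3, -1), so continuous regressors sit in the equation exactly like the masses and distance do.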
https://lo.calho.st/posts/ndnsim-custom-fields/ | The recommended way to build something on top of ndnSIM is to fork its scenario template repository and work inside there. You still need to download and compile the actual framework, however you will simply install it into /usr/local and link to it instead of actually working inside the main repository.
It turns out that this workflow actually makes certain tasks a lot more difficult. You might think a network simulator would make it easy to add new header fields to packets. Well, think again.
## First Steps
What do we want to do? Our goal is to just add one field to the Interest packet header. The ndn::Interest class inherits an interface called ndn::TagHost, which allows you to attach arbitrary tags to it. Defining your own tag can be as simple as a single typedef, if you only need to contain a single value in that tag:
typedef ndn::SimpleTag<uint64_t, 0x60000001> MyCustomTag;
You simply specify the type of the tag and make up an ID for it. However, you must pick an unused tag from the valid range given in the ndn-cxx wiki. My 0x60000001 is the first value in this range.
To attach a tag to an Interest, you simply call the setTag method:
interest.setTag<MyCustomTag>(std::make_shared<MyCustomTag>(54321));
To read a tag from an Interest, there is a corresponding getTag method:
std::shared_ptr<MyCustomTag> tag = interest.getTag<MyCustomTag>();
This gives you a pointer to the tag object, and you can get the value out of it quite easily… But first, check if it is null.
if (tag == nullptr) {
// no tag
}
else {
uint64_t tagValue = tag->get();
}
However, now is where we encounter our problem. Our tag will not actually be encoded and sent over the network. That’s right – we can attach a tag to the Interest, but when it arrives at the next hop it will be gone.
How can we fix this?
## Investigation
Vanilla ndnSIM uses these sorts of tags itself in a few places. One obvious one is the HopCountTag, which you can use to figure out how far a packet has gone in the network. A grep through the ndnSIM source brings us to a class called GenericLinkService. This class is responsible for actually encoding packets and sending them out on the wire. In particular, we can find the bit responsible for encoding the HopCountTag in a method called encodeLpFields:
shared_ptr<lp::HopCountTag> hopCountTag = netPkt.getTag<lp::HopCountTag>();
if (hopCountTag != nullptr) {
  // bodies elided in extraction; NFD copies the existing count into the
  // outgoing LpPacket here (approximate restoration):
  lpPacket.add<lp::HopCountTagField>(*hopCountTag);
}
else {
  lpPacket.add<lp::HopCountTagField>(0);  // start a fresh hop count
}
Clearly, we need to define a MyCustomTagField to be able to encode our new tag.
## Declaring a Tag
This is actually pretty easy, but first you need to know what kind of witchcraft is going on. Let’s start with the actual code to define the field, then go on to analyze it:
enum {
  TlvMyCustomTag = 901
};
// FieldDecl lost in extraction; mirroring ndn-cxx's lp/fields.hpp pattern:
typedef ndn::lp::detail::FieldDecl<ndn::lp::field_location_tags::Header,
                                   uint64_t, TlvMyCustomTag> MyCustomTagField;
First, we define a constant for the TLV type ID… There are actually a few hidden constraints to what we can pick. If we don’t do this right, we get a packet parse error. Why?
Let’s look at ndn::lp::Packet’s wireDecode method:
for (const Block& element : wire.elements()) {
detail::FieldInfo info(element.type());
if (!info.isRecognized && !info.canIgnore) {
BOOST_THROW_EXCEPTION(Error("unrecognized field cannot be ignored"));
}
...
}
Apparently, this FieldInfo class tells the decoder whether the field is recognized, and whether it can ignore it if it isn’t. Let’s peek at the constructor:
FieldInfo::FieldInfo(uint64_t tlv)
  : ...
{
  boost::mpl::for_each<FieldSet>(boost::bind(ExtractFieldInfo(), this, _1));
  if (!isRecognized) {
    // first half of this assignment was lost in extraction; restored to
    // match the "ignorable" check described below:
    canIgnore = tlv::HEADER3_MIN <= tlvType && tlvType <= tlv::HEADER3_MAX
                && (tlvType & 0x01) == 0x01;
  }
}
Now this is interesting… To figure out what a TLV tag is, it iterates over FieldSet (which only contains the built-in tags, and we can’t override). However, if it doesn’t find a match, it determines if it is ignorable based on the value of the TLV type ID. We can’t make the field recognized without forking the actual ndnSIM core, but we can make it ignorable by choosing the right ID.
To save you from looking up tlv::HEADER3_MIN and tlv::HEADER3_MAX, they are 800 and 959, respectively. Also, don’t forget that the low bit has to be set. And don’t pick one of the types that is already used.
Moving on from the TLV ID nonsense, the rest of the FieldDecl is pretty straightforward. We pass a flag that says “this goes in the header,” followed by the type of the value and the TLV ID we just made up.
Note that for some reason, the code won’t compile if the type is specified as anything other than uint64_t. I didn’t care enough to figure this out, but it seems to have something to do with the fact that the only integer EncodeHelper defined is for uint64_t.
## Encoding the Tag
So far, we have defined our tag twice: once for the high-level Interest object, and once for the low-level TLV encoding. Now, we need to write code to convert between these two representations.
To do this, we need to create a new LinkService. Sounds intimidating, but really all we need to do is make a copy of GenericLinkService and change a few things. Yes, literally copy generic-link-service.hpp and generic-link-service.cpp out of ns3/ndnSIM/NFD/daemon/face/ and into your own project. Rename the file as you see fit, and carefully rename the class to something like CustomTagLinkService. You will want to be careful because we still need to implement the GenericLinkServiceCounters interface if we don’t want to break anything. We can also avoid redefining the nested Options class by using a typedef to import it from GenericLinkService into the new CustomTagLinkService namespace.
Now that we have an identical clone of the GenericLinkService, let’s fix it. To encode your new field, take a look at the encodeLpFields method. Follow the pattern used by the CongestionMarkTag field to implement your new custom one:
shared_ptr<MyCustomTag> myCustomTag = netPkt.getTag<MyCustomTag>();
if (myCustomTag != nullptr) {
  // body elided in extraction; following the CongestionMarkTag pattern:
  lpPacket.add<MyCustomTagField>(myCustomTag->get());
}
Then, add the corresponding decoding logic to decodeInterest:
if (firstPkt.has<MyCustomTagField>()) {
interest->setTag(make_shared<MyCustomTag>(firstPkt.get<MyCustomTagField>()));
}
Add the same code to the decodeData and decodeNack methods if you need them.
Specifying a custom LinkService isn’t going to do us any good if we don’t tell ndnSIM to use it. We’ll have to replace the callback that sets up a Face in order to do this. We’re going to focus on Faces for PointToPointNetDevices, but the following can be generalized for other types of links.
The call from our scenario file will look something like this:
stackHelper.UpdateFaceCreateCallback(
PointToPointNetDevice::GetTypeId(),
MakeCallback(CustomTagNetDeviceCallback)
);
For context, this is a method of the StackHelper that you’re probably already using to install the NDN stack on nodes. To write the callback, copy the logic from the PointToPointNetDeviceCallback in that same class. All you have to change is the instantiation of the LinkService – replace the GenericLinkService with your own. You will also need to copy the constructFaceUri method (verbatim) because your callback will need to refer to it, but it is out of scope.
## Other Caveats
By default, the scenario template wants to compile your code in C++11 mode. However, the LinkService uses some C++14 features, so you’ll have to edit the flags in .waf-tools/default-compiler-flags.py. Note that you need to re-run ./waf configure if you edit these flags.
## Conclusion
I think this is way too much effort just to add a field to a packet. We’ve duplicated a lot of logic in order to do something so small. I feel like the ndnSIM developers should have made it a bit easier to add fields to a packet… At worst, I might expect a call to the StackHelper to add new fields. It would likely be possible to write a generic enough LinkService which will encode any custom fields as long as mappings between the TLV classes and tag classes are provided. I look forward to this feature, because it would have made the middle part of my week go a lot more smoothly. Until then, I hope that this post can be useful to anyone else trying to do the same thing. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2777412235736847, "perplexity": 1252.8611724011791}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323584554.98/warc/CC-MAIN-20211016074500-20211016104500-00210.warc.gz"} |
https://pdglive.lbl.gov/DataBlock.action?node=S048M&home=BXXX040 | # ${{\boldsymbol \Xi}_{{c}}^{0}}$ MASS INSPIRE search
The fit uses the ${{\mathit \Xi}_{{c}}^{0}}$ and ${{\mathit \Xi}_{{c}}^{+}}$ mass and mass-difference measurements.
| VALUE (MeV) | EVTS | DOCUMENT ID | TECN | COMMENT |
|---|---|---|---|---|
| $\bf{2470.90 {}^{+0.22}_{-0.29}}$ **OUR FIT** | | | | |
| $\bf{2470.99 {}^{+0.30}_{-0.50}}$ **OUR AVERAGE** | | | | |
| $2470.85 \pm0.24 \pm0.55$ | 3.4k | AALTONEN 2014B | CDF | ${{\mathit p}}{{\overline{\mathit p}}}$ at 1.96 TeV |
| $2471.0 \pm0.3 {}^{+0.2}_{-1.4}$ | 8.6k | LESIAK 2005 [1] | BELL | ${{\mathit e}^{+}}{{\mathit e}^{-}}$, ${{\mathit \Upsilon}{(4S)}}$ |
| $2470.0 \pm2.8 \pm2.6$ | 85 | FRABETTI 1998B | E687 | ${{\mathit \gamma}}{}^{}\mathrm{Be}$, ${{\overline{\mathit E}}}_{\gamma}$ = $220$ GeV |
| $2469 \pm2 \pm3$ | 9 | HENDERSON 1992B | CLEO | ${{\mathit \Omega}^{-}}{{\mathit K}^{+}}$ |
| $2472.1 \pm2.7 \pm1.6$ | 54 | ALBRECHT 1990F | ARG | ${{\mathit e}^{+}}{{\mathit e}^{-}}$ at ${{\mathit \Upsilon}{(4S)}}$ |
| $2473.3 \pm1.9 \pm1.2$ | 4 | BARLAG 1990 | ACCM | ${{\mathit \pi}^{-}}$ (${{\mathit K}^{-}}$) ${}^{}\mathrm{Cu}$ 230 GeV |
| $2472 \pm3 \pm4$ | 19 | ALAM 1989 | CLEO | ${{\mathit e}^{+}}{{\mathit e}^{-}}$ $10.6$ GeV |

• • • We do not use the following data for averages, fits, limits, etc. • • •

| VALUE (MeV) | EVTS | DOCUMENT ID | TECN | COMMENT |
|---|---|---|---|---|
| $2462.1 \pm3.1 \pm1.4$ | 42 | FRABETTI 1993C [2] | E687 | |
| $2471 \pm3 \pm4$ | 14 | AVERY 1989 | CLEO | See ALAM 1989 |
1 The systematic error was (wrongly) given the other way round in LESIAK 2005 .
2 The FRABETTI 1993C mass is well below the other measurements.
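As a rough cross-check of the OUR AVERAGE entry above, one can form a plain inverse-variance weighted mean, adding statistical and systematic errors in quadrature and symmetrizing asymmetric errors to their larger half (a simplification of the full PDG fit, which also uses the mass-difference measurements):

import math

# (value, stat, syst) in MeV for the measurements entering the average
meas = [
    (2470.85, 0.24, 0.55),  # AALTONEN 2014B
    (2471.0,  0.3,  1.4),   # LESIAK 2005 (+0.2 -1.4 symmetrized)
    (2470.0,  2.8,  2.6),   # FRABETTI 1998B
    (2469.0,  2.0,  3.0),   # HENDERSON 1992B
    (2472.1,  2.7,  1.6),   # ALBRECHT 1990F
    (2473.3,  1.9,  1.2),   # BARLAG 1990
    (2472.0,  3.0,  4.0),   # ALAM 1989
]
w = [1.0 / (stat**2 + syst**2) for _, stat, syst in meas]
mean = sum(wi * v for wi, (v, _, _) in zip(w, meas)) / sum(w)
err = math.sqrt(1.0 / sum(w))
print("weighted mean = %.2f +/- %.2f MeV" % (mean, err))
# prints ~2470.99 +/- 0.52 MeV, consistent with the OUR AVERAGE above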
References:
AALTONEN 2014B
PR D89 072014 Mass and Lifetime Measurements of Bottom and Charm Baryons in ${{\mathit p}}{{\overline{\mathit p}}}$ Collisions at $\sqrt {s }$ = 1.96 TeV
LESIAK 2005
PL B605 237 Measurement of Masses and Branching Ratios of ${{\mathit \Xi}_{{c}}^{+}}$ and ${{\mathit \Xi}_{{c}}^{0}}$ Baryons
FRABETTI 1998B
PL B426 403 Observation of a Narrow State Decaying into ${{\mathit \Xi}_{{c}}^{0}}{{\mathit \pi}^{+}}$
FRABETTI 1993C
PRL 70 2058 Measurement of the Lifetime of the ${{\mathit \Xi}_{{c}}^{0}}$
HENDERSON 1992B
PL B283 161 Observation of the Decay ${{\mathit \Xi}_{{c}}^{0}}$ $\rightarrow$ ${{\mathit \Omega}^{-}}{{\mathit K}^{+}}$
ALBRECHT 1990F
PL B247 121 Measurement of ${{\mathit \Xi}_{{c}}{(2460)}}$ Production in ${{\mathit e}^{+}}{{\mathit e}^{-}}$ Annihilation at 10.5 GeV Centre-of-Mass Energies
BARLAG 1990
PL B236 495 First Measurement of the Lifetime of the Charmed Strange Baryon ${{\mathit \Xi}_{{c}}^{0}}$
ALAM 1989
PL B226 401 Measurement of the Isospin Mass Splitting between ${{\mathit \Xi}_{{c}}^{+}}$ and ${{\mathit \Xi}_{{c}}^{0}}$
AVERY 1989
PRL 62 863 Observation of the Charmed Strange Baryon ${{\mathit \Xi}_{{c}}{(2460)}^{0}}$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9883622527122498, "perplexity": 7461.518069123866}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107879537.28/warc/CC-MAIN-20201022111909-20201022141909-00695.warc.gz"} |
https://www.physicsforums.com/threads/convolution-of-a-dirac-delta-function.222873/ | # Convolution of a dirac delta function
1. Mar 19, 2008
### pka
Alright...so I've got a question about the convolution of a dirac delta function (or unit step). So, I know what my final answer is supposed to be but I cannot understand how to solve the last portion of it which involves the convolution of a dirac/unit step function. It looks like this:
10 * Inverse laplace of [ H(s) * e ^ (-5s) ]
where h(t), the inverse Laplace transform of H(s), is (1/20) * (1 - e ^ -20t).
---Note:
This is what I've done to lead me to the dirac/unit step. Btw, I'm calling it the dirac/unit step function because I get the dirac delta function in my answer whereas the answer has a unit step function. So, just for clarity's sake I will call the unit step function u(t - a) and the dirac delta function d(t - a).
Now, let's continue.
Saying L(s) = e ^ -5s. So that its inverse laplace, l(t) = d(t - 5).
Let's also say that M(s) = H(s) * L(s).
Convolution time!!! And I get m(t) = (1/20) * (1 - e ^ -20t) * Integral from 0 to t of d(tau - 5) d(tau).
I'm sorry about my notations, I don't know how to put in an integral sign or...any other fancies. =/
This is where my trouble starts. I thought the integral of a dirac delta function would be just h(t). But that's not right. In order for my answer to make any sense then the integral of d(tau - 5) should be just d(t - 5) to get a fairly simple answer of d(t - 5) * h(t).
Any help in this matter would be greatly appreciated. Links too! If I've posed a really simple question in too much writing then feel free to let me know or if I'm thinking about this way too hard then please...also let me know. But many thanks to any advice or help anyone can offer me.
2. Mar 19, 2008
### pka
Actually...I think I've solved my problem!!!! The integral of the dirac delta should be just the unit step function....giving me what I need. And so...the convolution turns out to be m(t) = h(t - 5) * u(t - 5). So...then it's just 10 * m(t).
:D Can anyone tell me if my answer is correct in its thought and all that? :D
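For reference: the second shifting theorem, L^-1{ e^(-as) H(s) } = h(t - a) u(t - a), confirms this; here m(t) = h(t - 5) u(t - 5). A quick symbolic check (a sympy sketch, with H(s) = 1/(s(s + 20)) worked out from h(t)):

import sympy as sp

t, s = sp.symbols("t s", positive=True)
H = 1 / (s * (s + 20))  # Laplace transform of h(t) = (1/20)*(1 - exp(-20 t))

m = sp.inverse_laplace_transform(H * sp.exp(-5 * s), s, t)
print(sp.simplify(m))
# expected (up to equivalent forms):
# (1 - exp(-20*(t - 5)))*Heaviside(t - 5)/20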
http://autoplot.org/Test_dataset_urls | # Test dataset urls
This is an old set of URIs for testing, back when this was done by hand when making a release. Autoplot is tested now automatically and continuously, see http://jfaden.net/hudson/. See also http://autoplot.org//developer.listOfUris . This is tested with http://jfaden.net:8080/hudson/job/autoplot-test140/
TODO: this page should be updated to make a correct set of links, and Autoplot's test140 which tests all the URIs on a page should be used to test. That way, any one could add URIs here and they would be tested.
TODO: This uses an old vap server, jnlp.cgi, which does not appear to be working. This page should be updated to use http://autoplot.org/autoplot.jnlp?...
# 1. TSDS
http://timeseries.org/get.cgi?StartDate=19950101&EndDate=19950109&ppd=144&out=tsml¶m1=OMNI_OMNIHR-22-v0
http://timeseries.org/get.cgi?StartDate=19950101&EndDate=19950104&ext=bin&out=tsml&ppd=144¶m1=OMNI_OMNIHR-22-v0
http://timeseries.org/OMNI_OMNIHR-22-v0-to_19950101-tf_19950104-ppd_144-filter_0-ext_bin.bin
# 2. Das2Server
Autoplot gets confused about the escaping. "vap+das2server" turns into "vap das2server" and the das2Server file part is removed. This probably has something to do with its TimeSeriesBrowse capability.
# 3. CDF
Suspect problem identifying valid data: http://cdaweb.gsfc.nasa.gov/istp_public/data/cluster/c2/pp/fgm/2003/c2_pp_fgm_20030114_v01.cdf?Epoch__C2_PP_FGM
Strange message:
java.lang.RuntimeException: java.lang.IllegalArgumentException: not supported: Lo E PD
at org.virbo.autoplot.ApplicationModel.resetDataSetSourceURL(ApplicationModel.java:249)
No data is displayed: vap:ftp://cdaweb.gsfc.nasa.gov/pub/istp/themis/tha/l2/fgm/2007/tha_l2_fgm_20070224_v01.cdf?tha_fgh_gse This is corrected and will be released soon. The problem was that the "COMPONENT_0" conventions used for Themis led to the timetags being interpreted as invalid.
Vectors plotted as spectrogram: ftp://cdaweb.gsfc.nasa.gov/pub/istp/geotail/def_or/1995/ge_or_def_19950101_v02.cdf?GSE_POS
Works fine, but nicely demonstrates AutoHistogram's robust statistics and the potential to indentify fill values automatically: ftp://cdaweb.gsfc.nasa.gov/pub/istp/geotail/mgf/1998/ge_k0_mgf_19980102_v01.cdf?IB
# 5. FITS
This fails because negative CADENCE and MONOTONIC=true. vap:http://www.astro.princeton.edu/~frei/Gcat_htm/Catalog/Fits/n4013_lR.fits
# 6. ASCII
java.lang.IllegalArgumentException: unable to identify time format for 1990-11-05T16:31:00.000Z vap+dat:http://www.igpp.ucla.edu/cache2/GOMA_3001/DATA/SUMMARY/E1_SUMM_GSE_GSM.TAB?timeFormat=ISO8601&column=field1
This demonstrates fractional day of year: vap+dat:http://wind.nasa.gov/swe_apbimax/wi_swe_fc_apbimax.1995005.txt?comment=;&column=21&timeFormat=$Y+$j&time=field0
Fill string is recognized, -1e31 is inserted, but this is not marked as fill: vap+dat:http://goes.ngdc.noaa.gov/data/avg/$Y/A105$y$m.TXT?skip=23&timeFormat=$y$m$d+$H$M&column=E1&time=YYMMDD&fill=32700&timerange=Dec+2004
I'd expect this to read in the column as a rank 1 dataset: http://www-pw.physics.uiowa.edu/~jbf/L1times.2.dat?fixedColumns=29-35
And this gets a null pointer exception in AsciiParser.getFieldIndex line 1024: http://www-pw.physics.uiowa.edu/~jbf/L1times.2.dat?fixedColumns=0-24,29-35&column=field1
Very large with $b and ${skip}: http://vho.nasa.gov/mission/soho/celias_pm_30sec/2003.txt
# 12. Issues with URIs
It would be nice to support plus notation with Excel spreadsheets. Also, this shows an issue with the Excel data source with this URI: file:///Documents%20and%20Settings/sklemuk/Desktop/UCSF%20Voice%20Conference%202008/Product%20Summary.xls?sheet=nist%20lo&column=H
<message>go file:///Documents%20and%20Settings/sklemuk/Desktop/UCSF%20Voice%20Conference%202008/Product%20Summary.xls?sheet=nist%20lo&column=H</message>
<message>java.lang.NullPointerException
at org.virbo.excel.ExcelSpreadsheetDataSource.getDataSet(ExcelSpreadsheetDataSource.java:89)
at org.virbo.autoplot.ApplicationModel.loadDataSet(ApplicationModel.java:1322)
at org.virbo.autoplot.ApplicationModel.updateImmediately(ApplicationModel.java:1260)
at org.virbo.autoplot.ApplicationModel.access$600(ApplicationModel.java:112)
at org.virbo.autoplot.ApplicationModel$8.run(ApplicationModel.java:1293)
at org.das2.system.RequestProcessor$Runner.run(RequestProcessor.java:201)
at java.lang.Thread.run(Unknown Source)
# 13. Miscellaneous URIs
Demonstrates problem with AutoHistogram: http://goes.ngdc.noaa.gov/data/avg/2004/A1050402.TXT
# 14. VAPs in the wild
VAP files are Autoplot configuration files, an xml version of the DOM tree. I'd expect these to be very fragile right now, but I'll try to support them:
From VMO, the data here contains the search date, but the time axis is not properly located: http://vmo.nasa.gov/vxotmp/vap/VMO/Granule/OMNI/PT1H/omni2_1994.vap | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19584308564662933, "perplexity": 8855.890563745099}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583879117.74/warc/CC-MAIN-20190123003356-20190123025356-00160.warc.gz"} |
https://link.springer.com/article/10.1007%2Fs11661-011-1045-9 | Metallurgical and Materials Transactions A
, Volume 43, Issue 6, pp 1845–1860
# Prediction of Inhomogeneous Distribution of Microalloy Precipitates in Continuous-Cast High-Strength, Low-Alloy Steel Slab
• Suparna Roy
• Sudipta Patra
• S. Neogy
• A. Laik
• S. K. Choudhary
• Debalay Chakrabarti
Article
DOI: 10.1007/s11661-011-1045-9
Roy, S., Patra, S., Neogy, S. et al. Metall and Mat Trans A (2012) 43: 1845. doi:10.1007/s11661-011-1045-9
## Abstract
Spatial distribution in size and frequency of microalloy precipitates have been characterized in two continuous-cast high-strength, low-alloy steel slabs, one containing Nb, Ti, and V and the other containing only Ti. Microsegregation during casting resulted in an inhomogeneous distribution of Nb and Ti precipitates in as-cast slabs. A model has been proposed in this study based on the detailed characterization of cast microalloy precipitates for predicting the spatial distribution in size and volume fraction of precipitates. The present model considers different models, which have been proposed earlier. Microsegregation during solidification has been predicted from the model proposed by Clyne and Kurz. Homogenization of alloying elements during cooling of the cast slab has been predicted following the approach suggested by Kurz and Fisher. Thermo-Calc software predicted the thermodynamic stability and volume fraction of microalloy precipitates at interdendritic and dendritic regions. Finally, classical nucleation and growth theory of precipitation have been used to predict the size distribution of microalloy precipitates at the aforementioned regions. The accurate prediction and control over the precipitate size and fractions may help in avoiding the hot-cracking problem during casting and selecting the processing parameters for reheating and rolling of the slabs.
## 1 Introduction
Carbide, nitride, or carbonitride precipitates formed by microalloying elements such as Nb, Ti, and V provide grain refinement and precipitation strengthening in high-strength, low-alloy (HSLA) steel.[1] Microalloy precipitates in a continuous-cast slab can influence the microstructural changes taking place during subsequent processing, such as reheating and rolling, and hence need to be studied. Industrial reheating of HSLA steel is aimed at dissolving the Nb precipitates to encourage fine-scale, strain-induced Nb(C,N) precipitation during hot rolling.[1,2] Pinning of austenite (γ) grain boundaries by the microalloy precipitates also prevents excessive γ grain growth during soaking.[1,3] The choice of reheating temperature and time, therefore, should be based on the characterization of the as-cast precipitates.[1,3,4]
The nature, shape, and size of the microalloy precipitates have been widely investigated in as-cast slab as well as in thermomechanically controlled rolled (TMCR) HSLA steel plates/strips.[4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20] Both macro- and microsegregation during casting may result in an inhomogeneous distribution of the precipitates.[7, 8, 9, 10, 11,16, 17, 18, 19, 20] The formation of large (>1 μm) Nb-rich dendritic precipitates[7,8,16,17] or eutectic (Nb,Ti,V)(C,N) particles[11] in the interdendritic boundaries indicated the segregation of microalloying elements, especially Nb. A higher volume fraction of Nb precipitates in pearlitic regions, which coincided with the interdendritic regions, compared with the ferritic regions, which coincided with the dendrite-center regions of TMCR steels containing 0.023 to 0.057 wt pct Nb, also has been attributed to interdendritic segregation.[18]
Clustering of coarse microalloy precipitates (such as dendritic (Nb,Ti)(C,N) and TiN)) in the interdendritic region of as-cast slab can lead to slab-surface cracking during continuous casting.[9,21,22] Hence, prediction and control over precipitate size and spatial distribution of precipitates is crucial for maintaining the cast-slab quality. The effect of segregation on the stability of microalloy precipitates and on the precipitate size distributions at different regions (solute-rich and solute-depleted) of as-cast slab is not well understood. Hence, a model has been proposed here, based on the detailed characterization of cast microalloy precipitates, for predicting the spatial distribution in size and volume fraction of precipitates. The present model considers different models, which have been proposed earlier. Microsegregation during solidification has been predicted from the model proposed by Clyne and Kurz.[23] Homogenization of alloying elements during cooling of the cast slab has been predicted following the approach suggested by Kurz and Fisher.[24] Thermo-Calc software predicted the thermodynamic stability and volume fraction of microalloy precipitates at interdendritic and dendritic regions. Finally, classical nucleation and growth theory of precipitation have been used to predict the size distribution of microalloy precipitates at the aforementioned regions. The predictions were verified by the measurement of the local composition and characterization of precipitates from interdendritic and dendritic regions of the as-cast slabs.
## 2 Experimental Details
Two as-continuously-cast (200-mm-thick and 1200-mm-wide) low-carbon microalloyed steel slabs have been investigated. The chemical compositions of the two slabs are given in Table I.
Table I
Chemical Compositions of the Investigated Slabs
| Wt Pct | C | Si | Mn | P | S | Al | Nb | Ti | V |
|--------|------|------|------|-------|-------|-------|-------|-------|------|
| Slab 1 | 0.09 | 0.33 | 1.42 | 0.010 | 0.003 | 0.035 | 0.050 | 0.019 | 0.05 |
| Slab 2 | 0.07 | 0.18 | 1.20 | 0.012 | 0.005 | 0.034 | —     | 0.041 | —    |
Slab 1 contained microalloying elements Nb, Ti, and V, whereas Slab 2 did not contain any Nb and V, but the concentration of Ti was twice that of Slab 1. As the investigated slabs were continuously cast commercial grades, the detailed time–temperature data during solidification was not available.
Through-thickness slices (200 mm × 100 mm × 25 mm) were cut from the midwidth location of the continuously cast slabs. All microstructural specimens were collected from the top half of the slabs, at subsurface (SS, 0 to 20 mm from the top surface), quarter-thickness (QT, 40 to 60 mm from the top surface), and midthickness (MT, 90 to 110 mm from the top surface) locations.
Standard techniques have been followed for metallographic sample preparation. The microstructural characterization in terms of secondary dendritic arm spacing (SDAS); second-phase fraction; and shape, size, and distribution of coarse and fine microalloy precipitates has been carried out using a LEICA DM6000M optical microscope, fitted with Leica M.W. and Leica L.A.S. image analysis software (Leica Microsystems GmbH, Wetzlar, Germany), as well as using Zeiss EVO 60 (Carl Zeiss MicroImaging, LLC, Thornwood, NY) and JEOL 7300 model scanning electron microscopes (SEMs), fitted with Oxford-Inca PENTA FETX3 software (Oxford Instruments PLC, Abingdon, Oxfordshire, United Kingdom) for energy-dispersive X-ray spectroscopy (EDS). At least 100 precipitates have been studied from each microstructural region (interdendritic/dendrite center) of every sample for calculating the average precipitate size. Fine precipitates (<100 nm) were characterized under JEOL 2000FX and JEOL JEM-2100 model transmission electron microscopes (TEMs). Local compositions from interdendritic and dendrite center regions at the QT and MT locations of the as-cast slabs have been detected by an electron probe microanalyzer (EPMA) equipped with three wavelength dispersive spectrometers (Cameca SX 100, CAMECA SAS, Gennevilliers Cedex, France).
## 3 Microstructure and Precipitates in As-Cast Slabs
### 3.1 Microstructure of As-Cast Slabs
The microstructures from the SS, QT, and MT locations of the as-continuously-cast slabs consisted of ferrite and pearlite (~15 to 20 pct) (Figure 1). Ferrite grain sizes and SDAS increased from the SS (Figures 1(a) and (c)) to the QT (Figures 1(b) and (d)) and to the MT locations, possibly because of the decrease in slab cooling rate. The number-averaged SDAS values, measured up to the narrow equiaxed zone (~25-mm thick) at the slab centerline in both slabs, are given in Figure 2. The mean ferrite grain sizes (measured in equivalent circle diameter) were 20 to 25 μm at the SS, 35 to 40 μm at the QT, and 55 to 60 μm at the MT.
### 3.2 Precipitates in the As-Cast Slab 1
The QT location (~50 mm below the top surface) was selected for detailed precipitate quantification and detection of local compositions, as previous studies[4,20] reported a consistent segregation profile at that location. An inhomogeneous distribution of precipitates was found on the polished surface of both investigated slabs, with precipitate-rich regions (circled in Figure 3(a)) surrounded by regions of low precipitate density (Figure 3(a)). The bright precipitates in Figure 3(a) are magnified in Figure 3(b), and the corresponding EDS analysis (Figure 3(c)) revealed them to be Nb-rich carbonitrides (either Nb(C,N) or (Nb,Ti)(C,N)). Darker constituents in Figure 3(a) were either MnS inclusions or cuboidal TiN particles (Figure 3(d)). The brighter and darker appearances of the precipitates (or inclusions) in the compositional contrast of the back-scattered electron images (Figures 3(a) and (b)) are caused by their higher or lower (average) atomic numbers, respectively, compared with the Fe matrix. The fraction of Nb-rich precipitates, MnS, and TiN was much higher in interdendritic regions (on or around the pearlite and bainite) than in the dendrite center (ferrite) regions. The separation between subsequent precipitate-rich regions (center-to-center distance of 140 to 160 μm) at the QT location was consistent with the SDAS values measured at that location (~150 μm). This observation indicates that interdendritic segregation was responsible for the inhomogeneous distribution of the precipitates and inclusions, with the precipitate-rich regions being the interdendritic regions and the precipitate-lean regions being the dendrite center regions.
The shapes and sizes of the various microalloy precipitates observed in Slab 1 were as follows: (1) cuboidal TiN particles (700 to 1800 nm), (2) star- or cruciform-shaped (winged) (Nb,Ti)(C,N) precipitates (40 to 1300 nm), (3) cuboidal (Nb,Ti)(C,N) (30 to 700 nm), and (4) spherical NbC and VC (3 to 50 nm). The frequency of spherical precipitates (~75 pct) was much higher than that of cuboidal (~10 pct) and star/cruciform precipitates (~15 pct). Star/cruciform-shaped particles were predominantly observed in interdendritic regions, which can be attributed to microsegregation-induced precipitation.[4,7,16,17,18,19,20] Grouping/clustering of Nb-rich precipitates is evident in Figures 4(a) and (b). Nb precipitates were also observed in rows (Figures 3(a) and 4(c)), which can be attributed to the rejection of solute atoms from the interdendritic melts as the liquid melt front advanced during solidification.[16] An EDS line scan confirmed that the distances between such "interdendritic precipitate bands" were consistent with the SDAS. Selected area diffraction (SAD) analysis was also carried out in the TEM to identify the nature of the microalloy precipitates. For example, Figure 4(d) shows the SAD pattern for VC precipitates. Spherical VC and (Nb,V)C precipitates (<30 nm), however, were uniformly distributed throughout the ferrite matrix (Figure 4(d)).
### 3.3 Precipitates in As-Cast Slab 2
Similar to Slab 1, TiN particles in Slab 2 were present at a higher density in interdendritic regions (Figure 5(a)) compared with dendrite center regions (Figure 5(b)). The wide variation in cuboidal TiN particle sizes (30 nm to 7 μm) can be reflected by the presence of large and small particles (Figures 5(a) through (d)). A Ti peak is visible in the EDS analysis collected from the large TiN particle in Figure 5(a). A dark-field (TEM) image showing a TiN particle and the corresponding SAD analysis is shown in Figure 5(d). The inhomogeneous distribution of TiN can be attributed to the microsegregation of Ti and N.[7,8,17] AlN particles have not been found in either slab possibly because of the presence of sufficient Ti for combining with all the N present in steels.
### 3.4 Complex Precipitates and Microalloy Segregation at MT
The heterogeneous precipitation of (Nb,Ti)(C,N) in Slab 1 on the MnS inclusion (Figure 6(a)) and TiN in Slab 2 on the Al2O3 inclusion (Figure 6(b)) can be outcomes of microsegregation.[17] Such complex particles may hamper the mechanical properties (such as ductility and low-temperature impact toughness) of the slabs.[17,21,22] Microalloy segregates as large as 10 to 15 μm of Nb-rich (Nb,Ti)(C,N) (Figures 7(a) and (b)) and Ti-rich (Nb,Ti)(C,N) (Figures 7(c) and (d)) were found at the MT location of Slab 1. The segregation of Nb-rich (Nb,Ti)(C,N) was associated with MnS inclusions (Figure 7(a)). The formation of large, eutectic (Nb,Ti)(C,N) has been reported earlier in HSLA steel,[11] which can also be attributed to the macrosegregation of Nb and Ti. However, the presence of Ti-rich (Nb,Ti)(C,N) besides Nb-rich segregates has not been reported earlier. Such constituents may also hamper the mechanical properties at the centerline of the as-cast slab.
### 3.5 Fraction of Microalloy Precipitates in the As-Cast Slabs
The number density of microalloy precipitates was ~2 to 3 times higher in the interdendritic (i.e., precipitate-rich) regions than in the dendrite center (i.e., precipitate-poor) regions of each slab (Figure 8), indicating a microsegregation-induced inhomogeneous precipitate distribution. The greater Ti content of Slab 2 possibly resulted in larger TiN particles in Slab 2 (up to ~7 μm) than in Slab 1 (up to ~1.8 μm). The number densities (number/mm²) and average sizes (nm) of microalloy precipitates measured at the SS, QT, and MT locations of each slab (Table II) indicate that precipitate densities and sizes increased from the SS toward the MT, possibly because of the following reasons: (1) the increase in solute level caused by macrosegregation and (2) the slower cooling rate toward the slab center, allowing more time for precipitate growth.
Table II
Number Density (per mm2) and Average Size (nm) of Microalloy Precipitates Measured in Solute-Rich and Solute-Poor Regions at SS, QT, and MT Locations of Slab 1 and Slab 2
| Location | Region | Number density ×10⁶/mm², Steel 1 | Number density ×10⁶/mm², Steel 2 | Average size (nm), Steel 1 | Average size (nm), Steel 2 |
|----------|-------------|----|-----|-----|-----|
| SS       | solute-rich | 12 | 0.6 | 35  | 70  |
| SS       | solute-poor | 4  | 0.4 | 20  | 50  |
| SS       | average     | 7  | 0.5 | 25  | 60  |
| QT       | solute-rich | 15 | 1.0 | 50  | 88  |
| QT       | solute-poor | 6  | 0.5 | 30  | 45  |
| QT       | average     | 10 | 0.7 | 40  | 60  |
| MT       | solute-rich | 18 | 2   | 70  | 105 |
| MT       | solute-poor | 8  | 0.4 | 30  | 45  |
| MT       | average     | 13 | 0.9 | 45  | 70  |
The higher precipitate density in Slab 1 compared with Slab 2 (Figure 8, Table II) can be attributed to the presence of Nb and V, which formed numerous, fine NbC, VC, and (Nb,V)C precipitates in Slab 1.
### 3.6 Measurement of Local Compositions in Interdendritic and Dendrite Center Regions
The concentration of alloying elements (in wt pct) was measured by microanalysis of the interdendritic and dendrite center regions at the QT location of Slabs 1 and 2 using EPMA (Table III).
Table III
Concentrations of Various Elements (in wt pct) Obtained by Microanalysis of the Interdendritic and Dendrite Center Regions at Various Locations of Slab 1 and Slab 2 Using EPMA
| Region | C | Si | Mn | P | S | Al | Nb | Ti | V | N |
|---|---|---|---|---|---|---|---|---|---|---|
| Slab 1 (QT), interdendritic | 0.11 | 0.40 | 1.6 | 0.020 | 0.010 | 0.04 | 0.08 | 0.040 | 0.055 | 0.010 |
| Slab 1 (QT), dendrite center | 0.07 | 0.30 | 1.2 | 0.005 | 0.001 | 0.04 | 0.02 | 0.010 | 0.045 | 0.006 |
| Slab 2 (QT), interdendritic | 0.11 | 0.30 | 1.4 | 0.020 | 0.010 | 0.03 | — | 0.06 | — | 0.11 |
| Slab 2 (QT), dendrite center | 0.08 | 0.20 | 1.1 | 0.004 | 0.001 | 0.03 | — | 0.03 | — | 0.005 |
Wavelength-dispersive X-ray spectroscopy (WDS) was preferred in this case because of its high accuracy, especially for low atomic number elements. Table III clearly suggests that interdendritic regions were solute-rich and dendrite center regions were solute-depleted. Elements such as Nb, Ti, P, S, and Mn were clearly partitioned between the aforementioned regions. C and N levels were also higher in interdendritic regions than in the dendrite center regions (Table III). Elements such as Si, V, and Al were distributed homogeneously throughout the matrix.
## 4 Theoretical Analysis
### 4.1 Dependence of Microsegregation on the Solidification Sequence
According to the Thermo-Calc software, solidification in Slabs 1 and 2 is expected to start at around the same temperature (~1798 K [1525 °C] to 1793 K [1520 °C]) with the formation of δ ferrite (Figure 9(a)). Austenite starts to form at around 1758 K (1485 °C) (Figure 9(a)). Complete solidification is predicted at 1730 K (1457 °C) in Slab 1 and at 1718 K (1445 °C) in Slab 2. Therefore, the freezing range of both slabs was similar; however, the slightly greater freezing range of Slab 2 (possibly because of its lower C level) might have promoted dendrite coarsening, which resulted in a slightly higher SDAS in Slab 2 than in Slab 1 (Figure 2).[25]
During thick-slab continuous casting, the metal in contact with the water-cooled copper mold (i.e., at the SS region) solidifies as solute-depleted δ ferrite. Solidification at the SS region will generally be completed as δ ferrite because of the increased cooling rate resulting in nonequilibrium solidification.[18] Considering the subsequent δ → γ transformation and the decomposition of γ into ferrite and pearlite, a greater microalloy precipitate size and number density of precipitates is expected in and around pearlite (or bainite) compared with the ferrite grain center regions,[18] as found experimentally. At a greater depth (determined by slab composition and cooling rate) below the slab surface, the solidification sequence may change to mixed δ/γ, which will result in different segregation behavior. The change in solidification sequence and the associated spatial distribution of the microalloy precipitates have been discussed earlier in detail.[18]
### 4.2 Microsegregation Models
Partitioning of various alloying elements between liquid and solid phases during equilibrium solidification can also be calculated from Thermo-Calc software (Thermo-Calc Software, Stockholm, Sweden). Thermo-Calc uses the following Scheil–Gulliver model,[26] which is the simplest expression for calculating the solute redistribution in liquid CL and in solid CS, considering the nominal composition of the steel C0, and the weight fraction of solid fS in the solidifying volume:
$$C_{\text{L}} = C_{0} \left( 1 - f_{\text{S}} \right)^{k_{p} - 1} \quad \text{where} \quad k_{p} = \frac{C_{\text{S}}}{C_{\text{L}}}$$
(1)
The equilibrium partition ratios (kp) of the various alloying elements in steel are listed in Table IV,[11,27,28] for the different solidification routes (L → L + δ and L → L + γ).
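For illustration, Eq. [1] reproduces the Nb enrichment quoted later in this section; a minimal numerical sketch using the nominal Nb content of Slab 1 and the δ/L partition coefficient from Table IV:

# Scheil-Gulliver enrichment of the residual liquid, Eq. [1]:
# C_L = C_0 * (1 - f_S)**(k_p - 1)
c0_nb, kp_nb = 0.050, 0.40   # wt pct Nb in Slab 1; k_p (delta/L), Table IV
for fs in (0.0, 0.5, 0.9, 0.95):
    c_liq = c0_nb * (1.0 - fs) ** (kp_nb - 1.0)
    print("f_S = %.2f: C_L = %.3f wt pct (C_L/C_0 = %.1f)"
          % (fs, c_liq, c_liq / c0_nb))
# at f_S = 0.95 this gives C_L/C_0 of about 6, matching the ~6.0
# quoted for Nb in the last solidifying liquid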
Table IV
Partition Coefficients of Solutes During Solidification in Delta-Ferrite $$\left( {k_{p}^{{\delta{/}{\text{L}}}} } \right)$$ Route and in Austenite Route $$\left( {k_{p}^{{\gamma{/}{\text{L}}}} } \right)$$ Diffusivity of Solute Elements in δ Ferrite $$\left( {D_{s}^{\delta } } \right)$$ and in Austenite $$\left( {D_{s}^{\gamma } } \right)$$[11,25,29]
| Element | $k_{p}^{\delta/\mathrm{L}}$ | $k_{p}^{\gamma/\mathrm{L}}$ | $D_{s}^{\delta}$ ×10⁴ (m²/s) | $D_{s}^{\gamma}$ ×10⁴ (m²/s) |
|---|---|---|---|---|
| C  | 0.19 | 0.34  | 0.0127 exp(–81,379/RT) | 0.15 exp(–143,511/RT)  |
| Si | 0.77 | 0.52  | 8.0 exp(–248,948/RT)   | 0.30 exp(–251,458/RT)  |
| Mn | 0.77 | 0.79  | 0.76 exp(–224,430/RT)  | 0.055 exp(–249,366/RT) |
| Al | 0.60 | 0.60  | 5.9 exp(–96,441/RT)    | 5.9 exp(–241,417/RT)   |
| P  | 0.23 | 0.13  | 2.9 exp(–230,120/RT)   | 0.01 exp(–182,841/RT)  |
| S  | 0.05 | 0.035 | 4.56 exp(–214,639/RT)  | 2.4 exp(–223,426/RT)   |
| Ti | 0.38 | 0.33  | 3.15 exp(–247,693/RT)  | 0.15 exp(–250,956/RT)  |
| V  | 0.93 | 0.63  | 4.8 exp(–239,994/RT)   | 0.284 exp(–250,956/RT) |
| Nb | 0.40 | 0.22  | 50 exp(–251,960/RT)    | 0.83 exp(–266,479/RT)  |
| N  | 0.25 | 0.48  | 1.57 exp(–243,509/RT)  | 0.91 exp(–168,490/RT)  |
| O  | 0.03 | 0.03  | 0.0371 exp(–96,441/RT) | 5.75 exp(–168,615/RT)  |
The prediction of solute partitioning indicates that the Nb level in the last solidifying liquid (CL) can reach ~6.0 times the average Nb level (C0) in the steel (Figure 9(b)). Ti, C, and N showed CL/C0 of ~3.3, ~5.0, and ~2.6, respectively. S showed the strongest partitioning, with CL/C0 of ~20, whereas elements such as V and Al showed negligible partitioning during solidification (CL/C0 of ~1 to 1.5). This finding can explain the inhomogeneous distribution of MnS inclusions as well as Nb and Ti precipitates in the investigated slabs and the nearly homogeneous distribution of the V precipitates. However, the partitioning of alloying elements in the measured concentrations in Table III (ratio of the interdendritic to the average Nb level ~1.6) is smaller than that predicted by Thermo-Calc.
To better predict solute partitioning during solidification, compared with the lever rule and Scheil equation, Brody and Flemings[30] proposed the following equation:
$$C_{{{\text{L}},i}} = C_{0,i} \left[ {1 - \left( {1 - 2\alpha k_{p} } \right)f_{s} } \right]^{{\frac{{\left( {k_{p} - 1} \right)}}{{\left( {1 - 2\alpha k_{p} } \right)}}}}$$
(2)
where CL,i is the liquid concentration of a given solute i at the solid–liquid interface, C0,i is the initial liquid concentration, kp is the equilibrium partition coefficient of solute i, and fs is the solid fraction. The back-diffusion coefficient α is defined as follows:
$$\alpha = \frac{{D_{\text{S}} t_{f} }}{{(0.5\lambda {}_{\text{S}})^{2} }}$$
(3)
where DS is the diffusion coefficient of the solute in the solid phase (either δ ferrite or γ) in cm² s⁻¹ (Table IV), λS is the SDAS in cm, and tf is the local solidification time (seconds), which is expressed as follows:
$$t_{f} = \frac{{T_{\text{L}} - T_{\text{S}} }}{{C_{\text{R}} }}$$
(4)
where TL and TS are the liquidus and solidus temperatures of the steel (predicted using Thermo-Calc software) and CR is the average cooling rate during solidification, which can be obtained from the measured SDAS (λS) at any location of the slab using the following expression[31]:
$$\lambda_{\text{S}} = \left( {169.1 - 720.9C_{{0,{\text{C}}}} } \right)C_{\text{R}}^{ - 0.4935}$$
(5)
where C0,C is the nominal C content of the steel (for C < 0.15 wt pct). Clyne and Kurz[23] replaced the back-diffusion coefficient (α) in Eq. [2] with the term Ω, which is defined as follows:
$$\Upomega = \alpha \left[ {1 - \exp \left( { - \frac{1}{\alpha }} \right)} \right] - \frac{1}{2}\exp \left( { - \frac{1}{2\alpha }} \right)$$
(6)
The Clyne and Kurz[23] model is suitable for predicting the microsegregation in low-C steels.[29,30] Using the measured SDAS values (Figure 2), the CR and tf values can be calculated from Eqs. [4] and [5] for the following locations of the slabs: SS (i.e., 10 mm from the top surface): CR ~4 K/s and tf ~ 12 seconds; QT: CR ~1 K/s and tf ~50 seconds; and MT: CR ~0.2 K/s and tf ~250 seconds. The partitioning of microalloying elements (Nb, Ti, and V) predicted from the Clyne and Kurz model[23] at SS and MT locations (Figure 10) show that microsegregation becomes severe with an increase in depth below the SS. Following the previous studies,[21,32] the composition in solid corresponding to solid fraction, fs ~0.05, and the composition in the liquid corresponding to fs ~0.95, are assumed to be the compositions at the middle of solute-depleted (dendrite center) regions and solute-rich (interdendritic) regions, respectively. The concentration of alloying elements predicted from the Clyne and Kurz model[23] at interdendritic and dendrite center regions at the QT location in the slabs is listed in Table V.
Table V
Concentration of Alloying Elements in Interdendritic and Dendrite Center Regions at QT Location of Slabs 1 and 2 Predicted by Clyne and Kurz Model[23] at the End of Solidification*
| Region | C | Si | Mn | P | S | Al | Nb | Ti | V | N |
|---|---|---|---|---|---|---|---|---|---|---|
| Clyne–Kurz, QT of Slab 1, interdendritic | 0.30 | 0.43 | 1.81 | 0.042 | 0.050 | 0.055 | 0.012 | 0.050 | 0.054 | 0.020 |
| Clyne–Kurz, QT of Slab 1, dendrite center | 0.02 | 0.25 | 1.08 | 0.002 | 0.001 | 0.020 | 0.020 | 0.008 | 0.047 | 0.007 |
| Clyne–Kurz, QT of Slab 2, interdendritic | 0.350 | 0.233 | 1.55 | 0.051 | 0.080 | 0.05 | — | 0.100 | — | 0.023 |
| Clyne–Kurz, QT of Slab 2, dendrite center | 0.014 | 0.140 | 0.94 | 0.003 | 0.001 | 0.02 | — | 0.016 | — | 0.002 |
| With homogenization, Slab 1, interdendritic | 0.090 | 0.38 | 1.64 | 0.025 | 0.020 | 0.040 | 0.090 | 0.040 | 0.050 | 0.007 |
| With homogenization, Slab 1, dendrite center | 0.090 | 0.30 | 1.20 | 0.007 | 0.002 | 0.030 | 0.033 | 0.010 | 0.050 | 0.007 |
| With homogenization, Slab 2, interdendritic | 0.07 | 0.20 | 1.38 | 0.020 | 0.030 | 0.04 | — | 0.070 | — | 0.007 |
| With homogenization, Slab 2, dendrite center | 0.07 | 0.16 | 1.12 | 0.007 | 0.002 | 0.04 | — | 0.023 | — | 0.007 |
*Concentrations have also been predicted at those regions, considering the homogenization[24] of as-cast slabs during cooling down to the ambient temperature
The difference between predicted and measured concentrations (Tables III and V) can be caused by the fact that the Clyne and Kurz model[23] predicts the solute partitioning during solidification without considering the homogenization taking place during the subsequent cooling of the slabs from solidus temperature to ambient temperature.
### 4.3 Homogenization During Cooling of As-Cast Slab
The change in the concentration profile resulting from microsegregation during any homogenization treatment can be represented by the one-dimensional, time-dependent diffusion equation, a solution of which can be expressed as follows[24]:
$$C(x,t) = C_{0} + \Updelta C\cos \left( {\frac{\pi x}{{\lambda_{s} }}} \right)\exp \left( { - \frac{t}{\tau }} \right)$$
(7)
where C(x,t) is the solute concentration at any point corresponding to the interdendritic or dendrite center regions after homogenization for time t at temperature T. C0 is the nominal composition of the steel, ΔC is the amplitude of the initial concentration profile, which is approximated as a cosine function,[24] λs is the secondary dendritic arm spacing, x is the distance along the direction perpendicular to the secondary dendritic arms, and τ is the relaxation time, which can be expressed as follows:
$$\tau = \frac{{\lambda_{s}^{2} }}{{\pi^{2} D_{s} }}$$
(8)
Starting with the predicted compositions at the end of solidification obtained from the Clyne and Kurz model[23] in the middle of the solute-rich (interdendritic) and solute-depleted (dendrite center) regions, and assuming that the concentration profile follows a cosine function, the change in concentration at those regions during cooling has been calculated using Eq. [7]. Equations [7] and [8] are applicable to the isothermal holding condition, and the additivity rule[33] has been used for continuous cooling. Following the predicted solidification sequence (Figure 9(a)), as the solidification reaches completion (i.e., fs > 0.95), δ ferrite dominates the microstructure over a 5 to 10 K temperature range before δ transforms to γ. Because of the higher diffusivity of solutes in δ ferrite compared with γ (Table IV), substitutional solutes (such as Nb and Ti) are predicted to homogenize partly in δ ferrite, whereas negligible homogenization takes place within γ (Figure 11(a)). Because of their high diffusivity, interstitial elements, such as C and N, are expected to homogenize almost completely within the δ phase field (Figure 11(b)). The predicted concentrations of alloying elements at the interdendritic and dendrite center regions at QT of both slabs, after the slabs cool to ambient temperature (Table V), are close to the experimentally measured values listed in Table III. Solid-state homogenization was negligible during solidification in the austenitic route (L → L + γ), which might have occurred at the slab-center location,[18] resulting in the strong segregation that formed the large microalloy deposits (Figure 7).
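A brief sketch of Eqs. [7] and [8] makes the δ-versus-γ contrast concrete; the Nb diffusivities come from Table IV, while the evaluation temperatures (1750 K in δ, 1700 K in γ) and the 60-second hold are assumed illustrative values:

import math

R, lam_cm = 8.314, 150e-4       # SDAS at QT, in cm

def relax_time(d_cm2_s):
    # Eq. [8]: tau = lambda_s**2 / (pi**2 * D_s)
    return lam_cm ** 2 / (math.pi ** 2 * d_cm2_s)

D_delta = 50.0 * math.exp(-251960.0 / (R * 1750.0))   # Nb in delta, cm^2/s
D_gamma = 0.83 * math.exp(-266479.0 / (R * 1700.0))   # Nb in gamma, cm^2/s

for phase, D in (("delta", D_delta), ("gamma", D_gamma)):
    tau = relax_time(D)
    frac = math.exp(-60.0 / tau)  # residual amplitude of Eq. [7] after 60 s
    print("%s: tau = %.0f s, amplitude left after 60 s = %.2f" % (phase, tau, frac))

The Nb profile relaxes within seconds in δ ferrite (tau of order 10 s) but is essentially frozen in γ (tau of order 10³ s), which is why only partial homogenization occurs before the δ → γ transformation.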
### 4.4 Thermo-Calc Prediction of Precipitate Volume Fraction
To predict the precipitate volume fraction separately in the interdendritic and dendrite center regions of the microsegregated slabs, the concentrations of alloying elements at those regions calculated from the Clyne and Kurz model[23] (Table V) were fed into the Thermo-Calc software. Precipitates are expected to form at higher temperatures and at larger mole fractions in the solute-rich (interdendritic) regions compared with the solute-depleted (dendrite center) regions (Figure 12). TiN particles are predicted to form initially in the interdendritic liquid during solidification, followed by their precipitation in the solid state (Figures 12(a) and (b)). The Thermo-Calc prediction of the internal composition of the precipitates suggests that TiN formed predominantly at a higher temperature and converted to Ti(C,N) and then to TiC with the decrease in temperature. Nb precipitates were mainly carbides, which contained some Ti at higher temperatures. V precipitates, formed at lower temperatures in γ as well as in α, were mainly VC. Ti combined with almost all of the N present in Slab 1 to form TiN, which resulted in the subsequent precipitation of fine microalloy carbides (NbC and VC). VC precipitation is expected to be least affected by the microsegregation (Figure 12(a)), which agrees with the experimental observation. Similarly, the Thermo-Calc-predicted compositions of the interdendritic liquid near the MT location indicated that a large amount of Ti- and Nb-rich particles is anticipated to form, which may explain the formation of the large microalloy segregates (several micrometers in size, Figure 7). MnS inclusions were also predicted to form in the interdendritic melt toward the end of solidification and were expected to show an inhomogeneous distribution. Using the densities and molar volumes of the precipitates (and Fe matrix) listed in Table VI,[16] the mole fractions of the precipitates predicted in Figure 12 have been converted to the corresponding volume fractions, which were close to those measured by image analysis (Figure 13).
Table VI: Structural Details and Solubility Products of Several Precipitates (and Solid Phases) Observed in the Investigated Steels*

| Precipitate (Crystal Structure) | Lattice Parameter (nm) | Density (g/cm³) | Molar Volume (cm³/mol) | Solubility Product log10[M][X] = A - B/T* |
|---|---|---|---|---|
| TiN (fcc) | 0.4233 | 5.42 | 11.44 | 6.40 - 17,040/T (L); 0.322 - 8,000/T (γ) |
| TiC (fcc) | 0.4313 | 4.89 | 12.27 | 2.75 - 7,000/T (γ); 4.4 - 9,575/T (α) |
| NbN (fcc) | 0.4387 | 8.41 | 12.72 | 4.04 - 10,230/T (γ) |
| NbC (fcc) | 0.4462 | 7.84 | 13.39 | 2.96 - 7,510/T (γ); 5.43 - 10,960/T (α) |
| Nb(C,N) (fcc) | 0.4445 | 8.10 | 12.80 | log[Nb + (12/14)N] = 2.06 - 6,700/T (γ) |
| VN (fcc) | 0.4118 | 6.18 | 10.52 | 3.02 - 7,840/T (γ) |
| VC (fcc) | 0.4154 | 5.83 | 10.81 | 6.72 - 9,500/T (γ); 8.05 - 12,265/T (α) |
| γ-Fe (fcc, solid phase) | 0.357 | 8.15 | 6.85 | — |
| α-Fe (bcc, solid phase) | 0.286 | 7.85 | 7.11 | — |

*M = microalloying element (Nb/Ti/V); X = interstitial solute (C/N).[1,6,34, 35, 36] fcc = face-centered cubic; bcc = body-centered cubic.
### 4.5 Prediction of Precipitate Size Distribution
The sizes of oxide and sulfide inclusions and TiN particles have been predicted in earlier studies considering the microsegregation of alloying elements during solidification.[37, 38, 39] The nucleation and growth rates remain constant only in a homogeneously supersaturated metal, which is not the case in the presence of microsegregation. Hence, the nucleation and growth models have to be coupled with the microsegregation model to predict the size distribution of the different precipitates in the interdendritic and dendrite center regions.
#### 4.5.1 Supersaturation and nucleation of microalloy precipitates
The time-dependent homogeneous nucleation rate of spherical particles can be expressed as follows[40, 41, 42, 43]:
$$I = N_{\text{V}} Z\beta^{*} \exp \left( { - \frac{{\Updelta G^{*} }}{kT}} \right)\exp \left( { - \frac{\tau }{t}} \right)$$
(9)
where NV is the number of nucleation sites per unit volume, ΔG* is the energy required to form a nucleus of critical size (r*), k is the Boltzmann constant, T is the absolute temperature in K, and t represents time. Expressions for calculating the Zeldovich factor Z, the frequency factor β*, and the incubation time τ are given in References 40, 41, 42, 43. Ignoring the strain energy, the critical nucleus radius r* can be expressed as follows:
$$r^{*} = - \frac{2\sigma}{\Updelta G_{v}}$$
(10)
where σ is the interfacial energy of the nucleus and ΔGv is the volume free energy change during nucleation. Previous studies[32,38,39] discussed in detail the modification of Eq. [9] for predicting the TiN precipitation in liquid steel. ΔGv can be obtained from the supersaturation ratio η using the following equation:
$$\eta = \frac{{[{\text{wt \,pct \,Ti}}][{\text{wt \,pct \,N}}]}}{{L_{\text{TiN}} }}$$
(11)
where LTiN is the solubility product of TiN in liquid iron as given in Table VI. Interfacial energy σ ~0.8 J/m2 can be used for TiN precipitation in the liquid.[32,39,40]
From the previous equations, it is evident that a different level of supersaturation—resulting from microsegregation—in different regions of solidifying and solidified steel can result in different chemical driving forces (ΔGv) for precipitation between those regions. In the interdendritic region, higher ΔGv will increase the nucleation rate I and reduce the critical nucleus size (r*). According to the present study, TiN precipitation in the interdendritic liquid starts at η = ~5 to 6, which agrees with previous reports.[32,39,40] A continuous increase in η for TiN in the solute-rich and solute-depleted regions in the QT location of Slab 2, with the decrease in temperature, is shown in Figure 14. The η values have been calculated using the compositions determined by the Clyne and Kurz model[23] for interdendritic (TiN-interdendritic) and dendrite-center (TiN-Dendrite-Center) regions. The influence of [O] and [S] on the interfacial energy σ and, hence, on the nucleation rate I[42] has not been considered here.
Nearly complete homogenization of C and N and incomplete homogenization of Ti and Nb during slab cooling (Figure 11) may reduce the local difference in η values between the interdendritic (TiN-Int. Den.-Homogesd.) and the dendrite center (TiN-Den. Cen.-Homogesd.) regions, as shown by the red line in Figure 14(a). The effect of solid-state homogenization on precipitation has not been considered in the existing precipitation models.[32, 33, 34, 37, 38, 39, 40, 41, 42, 43] Using the predicted η values, the critical nucleus size (r*) for TiN precipitation has been calculated (Figure 14(b)). According to the literature,[2,34] the homogeneous nucleation of microalloy precipitates requires r* ≤ 1 nm. The TiN precipitation start temperatures obtained from the r* criterion (~1770 K [~1497 °C] in the interdendritic region and ~1720 K [~1447 °C] in the dendrite center region), therefore, closely match those obtained from the Thermo-Calc prediction (Figure 12(a)). A minimum separation of ~2 to 4 μm between consecutive precipitates in precipitate "rows" (Figure 3(a), (Nb,Ti)(C,N) precipitates, and Figure 6(a), TiN particles) can be explained by the diffusion field surrounding a particular nucleus,[32,39] which reduces the supersaturation and does not allow another nucleation event within that field. As the temperature dropped in the γ-phase field, η reached a high value (>100), resulting in a nearly homogeneous distribution of the fine TiN particles throughout the microstructure.
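To make the orders of magnitude concrete, the following back-of-the-envelope sketch (an illustration added here, not part of the original analysis) evaluates the supersaturation ratio η of Eq. [11] and the critical radius r* of Eq. [10] for TiN in liquid steel, using the liquid-phase solubility product from Table VI and σ = 0.8 J/m². The driving force is approximated by the standard dilute-solution estimate ΔGv = −(RT/Vm) ln η, and the enriched interdendritic composition used below is a placeholder, not a value from this study.

```python
import math

R = 8.314        # gas constant, J/(mol K)
V_m = 11.44e-6   # molar volume of TiN (Table VI), m^3/mol
sigma = 0.8      # TiN/liquid interfacial energy, J/m^2 (Refs. 32, 39, 40)

def eta_TiN_liquid(wt_Ti, wt_N, T):
    """Supersaturation ratio, Eq. [11], with log10 L = 6.40 - 17040/T (Table VI, liquid)."""
    L = 10.0 ** (6.40 - 17040.0 / T)
    return wt_Ti * wt_N / L

def r_star(eta, T):
    """Critical nucleus radius, Eq. [10], with dG_v = -(R*T/V_m)*ln(eta)."""
    if eta <= 1.0:
        return float("inf")              # no driving force below saturation
    dG_v = -(R * T / V_m) * math.log(eta)
    return -2.0 * sigma / dG_v           # metres

# Placeholder interdendritic enrichment: 0.15 wt pct Ti, 0.02 wt pct N
for T in (1800.0, 1770.0, 1720.0):
    eta = eta_TiN_liquid(0.15, 0.02, T)
    print(f"T = {T:.0f} K: eta = {eta:5.1f}, r* = {r_star(eta, T) * 1e9:.2f} nm")
```

With these illustrative inputs, η passes the ~5 to 6 threshold near 1770 K and r* falls below the ~1 nm homogeneous-nucleation criterion, consistent with the trend described above.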
Similar calculations have been carried out for the solid-state precipitation (in γ) of NbC and VC in Slab 1. The solubility products of the microalloy precipitates (Table VI) and their precipitation kinetics during continuous cooling (without deformation) have been collected from published work.[34, 35, 36] Heterogeneous precipitation on dislocations,[39] which may be generated during the bending and straightening operation, has, however, not been considered here.
#### 4.5.2 Growth of microalloy precipitates
Diffusion-controlled growth of a single (spherical) particle of radius r over an isothermal holding time t can be obtained from the following equation[43]:
$$\frac{dr}{dt} = \frac{{D_{s} }}{r\alpha }\left( {\frac{{X_{0} - X_{I} }}{{X_{p} - X_{I} }}} \right)$$
(12)
where Ds is the diffusion coefficient of the slowest diffusing solute (such as Nb and Ti), α is the ratio of matrix to precipitate atomic volumes, X0 is the initial concentration (mole fraction) of the solute (i.e., the microalloying elements), and Xp is the concentration of solute in the precipitate. XI is the concentration of solute in the matrix at the particle–matrix interface, which can be obtained from the equilibrium concentration of solute (Xe) in the matrix following the Gibbs–Thomson equation.[44] The concentrations of microalloying elements at the interdendritic and dendrite center regions predicted earlier have been considered as the initial concentrations X0 at those regions (Table V). Xp and Xe at any temperature below the precipitate dissolution temperature can be obtained from Thermo-Calc. Following the additivity rule,[33] Eq. [12] can be used in a continuous-cooling condition, considering the average cooling rate of the slab (at any location). Hence, it is possible to predict the precipitate growth rate and the final precipitate size at different regions of the investigated slabs. The evolution of precipitate size predicted from the proposed model with respect to the precipitation temperature in the interdendritic and dendrite center regions of the investigated slabs is shown in Figure 15. The precipitates that nucleate at higher temperatures are expected to grow larger in size (Figure 15), as more time is available for diffusional growth and the diffusion is faster at a higher temperature. The temperature scale along the abscissa in Figure 15 can also be represented as the time from the onset of precipitation, considering the average cooling rate of the slabs. Compared with the dendrite center, larger precipitates should always form in the interdendritic regions, where the precipitation starts at a higher temperature. Among the microalloy precipitates, TiN is expected to be the largest in size, and the maximum TiN particle sizes predicted to form in the interdendritic regions of Slab 1 (~1.2 μm) and Slab 2 (~8.0 μm, Figure 15) are close to the experimentally measured values (Figure 8). The rapid growth of TiN particles above 1753 K (1480 °C) (i.e., before the formation of γ, Figure 15) can be attributed to the higher diffusivity of Ti and N in liquid steel and in δ ferrite. As the diffusivity drops with the decrease in temperature, the precipitates formed at lower temperatures, such as VC in Slab 1 and TiC in Slab 2, could only reach a maximum size of 10 to 20 nm (Figure 15). At precipitation temperatures above 1700 K (1427 °C), the difference in TiN particle sizes formed in the interdendritic and dendrite center regions is more than 500 nm, which drops below 20 nm at 1400 K (1127 °C) (Figure 15(a)). This behavior demonstrates the effect of solid-state homogenization on the evolution of precipitate size in a dendritic structure. Nb(C, N) precipitates in Slab 1 are predicted to reach a size of ~350 nm, which is slightly lower than the measured value (600 nm). This could be a result of complex (Nb,Ti)(C,N) precipitation or heterogeneous nucleation of Nb(C, N) on top of preexisting TiN, which have not been considered in this model.
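As an illustration of stepping Eq. [12] along a continuous-cooling path via the additivity rule (each small time step treated as isothermal at the current temperature), here is a minimal sketch. The diffusivity law, equilibrium fractions, and cooling rate below are placeholder assumptions for demonstration; the study's actual inputs come from Thermo-Calc and Table IV.

```python
import math

def grow_precipitate(r0, T0, cool_rate, t_end, dt, D0, Q, X0, Xp, Xe_of_T, alpha=1.0):
    """Integrate dr/dt = (D/(r*alpha)) * (X0 - XI)/(Xp - XI)  (Eq. [12])
    along T(t) = T0 - cool_rate*t, using the additivity rule."""
    r, t = r0, 0.0
    while t < t_end:
        T = T0 - cool_rate * t
        if T < 300.0:
            break
        D = D0 * math.exp(-Q / (8.314 * T))  # Arrhenius diffusivity of the slowest solute
        XI = Xe_of_T(T)                      # interface value ~ equilibrium concentration
                                             # (Gibbs-Thomson correction omitted for brevity)
        if X0 > XI:
            r += dt * (D / (r * alpha)) * (X0 - XI) / (Xp - XI)
        t += dt
    return r

# Placeholder inputs: MX-type precipitate growing during slab cooling at 0.2 K/s
r_final = grow_precipitate(
    r0=1e-9, T0=1770.0, cool_rate=0.2, t_end=3600.0, dt=0.1,
    D0=1e-5, Q=250e3, X0=5e-4, Xp=0.5,
    Xe_of_T=lambda T: 1e-4 * math.exp((T - 1770.0) / 200.0))
print(f"final radius ~ {r_final * 1e9:.0f} nm")
```

Because dr/dt scales as 1/r, growth is fast for small nuclei and self-limiting as the particle coarsens, which is why the high-temperature (liquid and δ ferrite) portion of the cooling path dominates the final size.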
#### 4.5.3 Prediction on the effect of microsegregation on precipitate size distribution
Combining the nucleation rate and growth rate of the precipitates, the size distributions of the precipitates have been determined for the solute-rich (i.e., interdendritic) and solute-depleted (i.e., dendrite center) regions at the QT location of the as-cast slabs (Figure 16). Higher densities and larger sizes of TiN and Nb(C, N) precipitates in the solute-rich regions of the slabs are evident from Figure 16. The predicted distributions closely followed the experimentally measured values.
The TiN particles are expected to be the largest of all the microalloy precipitates. A maximum predicted TiN particle size of ~1.6 μm in Slab 1 (Figure 16(a)) and of ~8 μm in Slab 2 are close to the experimentally measured values. The maximum size of Nb(C, N) is predicted to be ~600 nm in Slab 1 (Figure 16(b)), which is smaller than the measured value (~1.5 μm). Heterogeneous precipitation of NbC on top of preexisting TiN may be the cause of the deviation. Precipitates that formed at lower temperatures, such as VC in Slab 1 and Ti(C, N) in Slab 2, could only reach a maximum size of 10 to 20 nm (Figure 16(c)) as verified by the TEM study.
Continued improvement of the prediction requires the consideration of factors such as stereological correction factors in precipitate quantification, the precipitation kinetics of complex precipitates, the effect of segregation of [S] and [O] on microalloy precipitation, the actual cooling curves of the slabs, and the solidification mode at different locations of the slab. In this context, it is necessary to mention that the partition coefficients (kp) and diffusivity (Ds) of any individual alloying element (e.g., i) as used in the present calculations (Table IV) are valid for a binary solution of element (i) and Fe. Similarly, a binary microsegregation model proposed by Clyne and Kurz[23] has been used here for the back-diffusion calculation to predict the segregation level of individual alloying elements. However, in multicomponent systems, such as the investigated steels, the presence of other solute elements (e.g., j, k, and l) can influence the partitioning and diffusion of element (i). To avoid complex mathematical calculations, these interaction effects have not been considered here, although they may introduce a certain error in the final prediction. Future studies need to consider this aspect for more accurate prediction.
To understand the sensitivity of the prediction to the choice of microsegregation model, the maximum precipitate sizes in Slab 1 have been predicted separately, considering the Scheil equation, the Clyne and Kurz[23] back-diffusion model, and the Lever rule. Figure 17 shows the different Nb levels predicted by these models in the interdendritic liquid of Slab 1 with the increase in solid fraction. Compared with the other models, the Scheil model predicts a significantly higher Nb level in the last solidifying liquid (0.79 wt pct), which is predicted to form Nb(C,N) precipitates as large as 5 μm in liquid steel. The largest Nb(C,N) precipitate size measured in the experimental study (600 nm) is far less than the predicted value. Similarly, the maximum TiN particle size predicted in the interdendritic liquid (25 μm) considering the Scheil model is much higher than the measured size (1.8 μm). The Scheil model, therefore, seriously overpredicts the extent of microsegregation (Table III) and the corresponding precipitate size in the interdendritic regions. The microsegregation levels predicted by the Clyne and Kurz model lie between those of the Scheil model and the Lever rule, and the satisfactory prediction of precipitate sizes from the Clyne and Kurz model is evident in Figure 16. The maximum precipitate sizes predicted in Slab 1 from the Lever rule (260 nm for Nb(C,N) and 960 nm for TiN) were not as good as those of the Clyne and Kurz model, but were certainly better than those of the Scheil model. The prediction of precipitate size, therefore, depends on the microsegregation model. The present findings are in line with the observations of Won and Thomas[31] and Choudhary and Ghosh[29] regarding the prediction of microsegregation in low-carbon steels, although future studies will compare the different models in greater detail.
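For readers who want to reproduce the flavor of the sensitivity comparison in Figure 17, the sketch below evaluates the interdendritic liquid concentration from the Lever rule, the Scheil equation, and the Clyne and Kurz[23] back-diffusion correction (the Brody-Flemings form with the standard Ω(α) modification). The Nb content, partition coefficient, diffusivity, local solidification time, and arm spacing are illustrative assumptions, not the exact inputs of this study.

```python
import math

def liquid_conc(C0, kp, fs, model="clyne-kurz", alpha=0.0018):
    """Interdendritic liquid concentration at solid fraction fs (0 <= fs < 1)."""
    if model == "lever":
        return C0 / (1.0 - (1.0 - kp) * fs)
    if model == "scheil":
        return C0 * (1.0 - fs) ** (kp - 1.0)
    # Clyne-Kurz: Scheil-type form with back-diffusion parameter Omega(alpha)
    om = alpha * (1.0 - math.exp(-1.0 / alpha)) - 0.5 * math.exp(-1.0 / (2.0 * alpha))
    g = 1.0 - 2.0 * om * kp
    return C0 * (1.0 - g * fs) ** ((kp - 1.0) / g)

C0, kp = 0.05, 0.25                 # illustrative Nb content (wt pct), partition coefficient
Ds, tf, sdas = 1e-13, 100.0, 150e-6 # placeholder diffusivity, local solidification time, SDAS
alpha = 4.0 * Ds * tf / sdas ** 2   # dimensionless back-diffusion parameter

for fs in (0.5, 0.9, 0.99):
    print(fs, {m: round(liquid_conc(C0, kp, fs, m, alpha), 3)
               for m in ("lever", "scheil", "clyne-kurz")})
```

As in Figure 17, the Scheil curve diverges in the last liquid while the Lever rule saturates, with the Clyne and Kurz prediction in between (approaching Scheil for slow-diffusing substitutional solutes, for which α is small).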
## 5 Summary And Concluding Remarks
The spatial distributions in size and frequency of microalloy precipitates have been characterized using high-resolution SEM and TEM in two continuous-cast HSLA steel slabs, one containing Nb, Ti, and V and the other containing only Ti. Microsegregation during casting resulted in an inhomogeneous distribution of Nb and Ti precipitates in the as-cast slabs, and precipitate-rich regions were separated by a distance similar to the SDAS. Large networks (several microns in size) of Nb- and Ti-rich phases were found at the segregated regions in the MT location, indicating strong microalloy segregation during solidification. Such segregation can reduce the effective microalloy level of the steel required for fine-scale precipitation during and after rolling for grain refinement and precipitation strengthening.
Considering the microsegregation during solidification, the homogenization of the alloying elements during slab cooling, the thermodynamics of precipitation (using the Thermo-Calc software), and the kinetics of precipitation (calculating the nucleation and growth rates of the precipitates), a model has been proposed here for predicting the precipitate size distribution and the amount of precipitates in the interdendritic and dendrite center regions of the segregated slabs. A comparison of the predicted results with the experimental precipitate characterization data showed satisfactory agreement.
The accurate prediction and control over the precipitate size and fractions may help (1) in avoiding the hot-cracking problem and, hence, improve the slab quality, (2) in selecting the soaking time and temperature and predicting the γ grain size during soaking, and (3) in designing the rolling schedule for achieving the maximum benefit from the microalloy precipitates.
Footnotes
1. JEOL is a trademark of Japan Electron Optics Ltd., Tokyo.
## Acknowledgments
The authors would like to thank the Indian Institute of Technology Kharagpur for the provision of the ISIRD project research grant and the research facilities at the Department of Metallurgical and Materials Engineering, the Steel Technology Centre, and the Central Research Facility. They would also like to acknowledge the help provided by Mr. Sukata Mandal in the characterization using SEM, and Tata Steel, Jamshedpur, for providing the research materials. The authors would also like to sincerely thank Dr. G.K. Dey and Dr. D. Srivastava from the Materials Science Division of Bhabha Atomic Research Centre, Mumbai, for their constant support and encouragement in this work.
© The Minerals, Metals & Materials Society and ASM International 2012
## Authors and Affiliations

- Suparna Roy (1)
- Sudipta Patra (1)
- S. Neogy (2)
- A. Laik (2)
- S. K. Choudhary (3)
- Debalay Chakrabarti (1)

1. Department of Metallurgical and Materials Engineering, Indian Institute of Technology (I.I.T.), Kharagpur, India
2. Materials Science Division, Bhabha Atomic Research Centre, Mumbai, India
3. Research and Development, Tata Steel, Jamshedpur, India
http://lesprobabilitesdedemain.fr/edition2017/programme.html | ## Programme de la journée 2017
- 8h30 -- 9h00: Welcome coffee
- 9h00 -- 9h20: Presentation of the day by the organizers
- 9h20 -- 10h20: Dmitry Chelkak, "2D Ising model: combinatorics, CFT/CLE description at criticality and beyond"
- 10h20 -- 10h40: Coffee break
- 10h40 -- 11h00: Paul Melotti, "Spatial recurrences, associated models and their limit shapes" (Slides)
- 11h00 -- 11h20: Thomas Budzinski, "Flips on triangulations of the sphere: a lower bound on the mixing time" (Slides)
- 11h20 -- 11h40: Gabriela Ciolek, "Sharp Bernstein and Hoeffding type inequalities for regenerative Markov chains" (Slides)
- 11h40 -- 12h00: Simon Coste, "Spectral gap of Markov matrices on graphs" (Slides)
- 12h00 -- 12h20: Alkéos Michaïl, "Perturbations of a large matrix by random matrices" (Slides)
- 12h40 -- 13h40: Lunch break
- 13h40 -- 14h00: Léo Miolane, "Fundamental limits of low-rank matrix estimation" (Slides)
- 14h00 -- 14h20: Perla El Kettani, "A stochastic mass conserved reaction-diffusion equation with nonlinear diffusion" (Slides)
- 14h20 -- 14h40: Julie Fournier, "Identification and isotropy characterization of deformed random fields through excursion sets" (Slides)
- 14h40 -- 15h00: Henri Elad Altman, "Bismut-Elworthy-Li formulae for Bessel processes" (Slides)
- 15h00 -- 15h20: Mohamed Ndaoud, "Constructing the fractional Brownian motion" (Slides)
- 15h20 -- 15h40: Marion Sciauveau, "Cost functionals for large random trees" (Slides)
- 15h40 -- 16h00: Coffee break
- 16h00 -- 16h20: Raphael Forien, "Gene flow across a geographical barrier" (Slides)
- 16h20 -- 16h40: Veronica Miro Pina, "Chromosome painting" (Slides)
- 16h40 -- 17h40: Remco van der Hofstad, "Hypercube percolation"
## Abstracts of the day's talks
### 9h20--10h20 Dmitry Chelkak (Russian Academy of Science and ENS)
#### 2D Ising model: combinatorics, CFT/CLE description at criticality and beyond
We begin this expository talk with a discussion of the combinatorics of the nearest-neighbor Ising model in 2D - an archetypical example of a statistical physics system that admits an order-disorder phase transition - and the underlying fermionic structure, which makes it accessible for the rigorous mathematical analysis. We then survey recent results on convergence of correlation functions at the critical temperature to conformally covariant scaling limits given by Conformal Field Theory, as well as the convergence of interfaces (domain walls) to the relevant Conformal Loop Ensemble. Is the case closed? Not at all: there are still many things to understand and to prove, especially for the non-critical and/or non-homogeneous model.
### 10h40--11h00 Paul Melotti (UPMC)
#### Spatial recurrences, associated models and their limit shapes
Certain polynomial relations, such as the relations satisfied by the minors of a matrix, can be interpreted as recurrence relations on Z^3. In some cases, the solutions of these recurrences exhibit an unexpected property: they are Laurent polynomials in the initial conditions. Can we give a combinatorial interpretation of this fact? We will see that once a combinatorial object hidden behind these relations is identified, it exhibits limit-shape phenomena that can be computed explicitly, the best known being the "arctic circle" of tilings of the Aztec diamond. We will discuss the so-called octahedron and cube recurrences, and a recurrence due to Kashaev.
### 11h00--11h20 Thomas Budzinski (Universite Paris-Saclay and ENS)
#### Flips on triangulations of the sphere: a lower bound on the mixing time
One of the simplest ways to sample a uniform triangulation of the sphere with a fixed number n of faces is a Monte-Carlo method: we start from an arbitrary triangulation and flip repeatedly a uniformly chosen edge, i.e. we delete it and replace it with the other diagonal of the quadrilateral that appears. We will prove a lower bound of order n^{5/4} on the mixing time of this Markov chain.
### 11h20--11h40 Gabriela Ciolek (Telecom ParisTech)
#### Sharp Bernstein and Hoeffding type inequalities for regenerative Markov chains
The purpose of this talk is to present Bernstein and Hoeffding type functional inequalities for regenerative Markov chains. Furthermore, we generalize these results and show exponential bounds for suprema of empirical processes over a class of functions F whose size is controlled by its uniform entropy number. All constants involved in the bounds of the considered inequalities are given in an explicit form, which can be advantageous in practical considerations. We present the theory for regenerative Markov chains; however, the inequalities are also valid in the Harris recurrent case.
### 11h40--12h00 Simon Coste (Universite Paris-Diderot and Universite Paul Sabatier)
#### Spectral gap of Markov matrices on graphs
The Alon-Friedman theorem says that the second eigenvalue of a random d-regular graph converges to 2sqrt(d-1) as the size of the graph tends to infinity. This difficult theorem is related to essential properties of the graph G, such as its expansion constant or the speed of convergence of the simple random walk on G. In this talk, we will present these links between the second eigenvalue and the properties of regular graphs, and then generalize these results to more general graph models, in particular directed graphs.
### 12h00--12h20 Alkéos Michaïl (Universite Paris Descartes)
#### Perturbations of a large matrix by random matrices
We provide a perturbative expansion for the empirical spectral distribution of a Hermitian matrix with large size perturbed by a random matrix with small operator norm whose entries in the eigenvector basis of the first one are independent with a variance profile. We prove that, depending on the order of magnitude of the perturbation, several regimes can appear (called perturbative and semi-perturbative regimes): the leading terms of the expansion are either related to free probability theory or to the one-dimensional Gaussian free field.
### 13h40--14h00 Léo Miolane (INRIA et ENS)
#### Phase transitions in low-rank matrix estimation. (joint work with Marc Lelarge)
We consider the estimation of noisy low-rank matrices. Our goal is to compute the minimal mean square error (MMSE) for this statistical problem. We will observe a phase transition: there exists a critical value of the signal-to-noise ratio above which it is possible to make a non-trivial guess about the signal, whereas this is impossible below this critical value.
### 14h00--14h20 Perla El Kettani (Universite Paris-Sud)
#### A stochastic mass conserved reaction-diffusion equation with nonlinear diffusion
In this talk, we study a stochastic mass conserved reaction-diffusion equation with a linear or nonlinear diffusion term and an additive noise corresponding to a Q-Brownian motion. We prove the existence and the uniqueness of the weak solution. The proof is based upon the monotonicity method. This is joint work with D.Hilhorst and K.Lee.
### 14h20--14h40 Julie Fournier (Universite Paris-Descartes & UPMC)
#### Identification and isotropy characterization of deformed random fields through excursion sets
A deterministic map θ : R² → R² deforms the plane bijectively and regularly, and allows one to build a deformed random field X ◦ θ : R² → R from a regular, stationary and isotropic random field X : R² → R. The deformed field X ◦ θ is in general not isotropic; however, we give an explicit characterization of the deformations θ that preserve isotropy. Further assuming that X is Gaussian, we introduce a weak form of isotropy of the field X ◦ θ, defined by an invariance property of the mean Euler characteristic of some of its excursion sets. Deformed fields satisfying this property are proved to be strictly isotropic. Besides, assuming that the mean Euler characteristic of excursion sets of X ◦ θ over some basic domains is known, we are able to identify θ.
Reference: hal-01495157.
### 14h40 -- 15h00 Henri Elad Altman (UPMC)
#### Bismut-Elworthy-Li formulae for Bessel processes
Bessel processes are a one-parameter family of nonnegative diffusion processes with a singular drift. When the parameter (called dimension) is smaller than one, the drift is non-dissipative, and deriving regularity properties for the transition semigroup in such a regime is a very difficult problem in general.
In my talk I will show that, nevertheless, the transition semigroups of Bessel processes of dimension between 0 and 1 satisfy a Bismut-Elworthy-Li formula, with the particularity that the martingale term is only in L^{p} for some p > 1, rather than L^{2} as in the dissipative case. As a consequence some interesting strong Feller bounds can be obtained.
### 15h00--15h20 Mohamed Ndaoud (X-CREST)
#### Constructing the fractional Brownian motion
In this talk, we give a new series expansion for simulating a fractional Brownian motion B, based on harmonic analysis of the auto-covariance function. The construction proposed here reveals a link between the Karhunen-Loève theorem and harmonic analysis for Gaussian processes with stationarity conditions. We also show some results on the convergence. In our case, the convergence holds in L2 and uniformly, with a rate-optimal decay of the norm of the remainder of the series in both senses.
### 15h20--15h40 Marion Sciauveau (Ecole des Ponts)
#### Cost functionals for large random trees
Trees appear naturally in many fields, such as computer science for data storage or biology for classifying species in phylogenetic trees. In this talk, we will be interested in the limits of additive functionals of large random trees. We will study the case of binary trees under the Catalan model (random trees chosen uniformly among complete ordered rooted binary trees with a given number of nodes). We will obtain an invariance principle for these functionals, as well as the associated fluctuations.
The proof relies on the link between binary trees and the normalized Brownian excursion.
### 16h00--16h20 Raphael Forien (Ecole Polytechnique)
#### Gene Flow across a geographical barrier
Consider a species scattered along a linear habitat. Physical obstacles can locally reduce migration and genetic exchanges between different parts of space. Tracing the position of an individual's ancestor(s) back in time allows one to compute the expected genetic composition of such a population. These ancestral lineages behave as simple random walks on the integers outside of a bounded set around the origin. We present a continuous real-valued process which is obtained as a scaling limit of these random walks, and we give several other constructions of this process.
### 16h20 -- 16h40 Veronica Miro Pina (UPMC)
#### Chromosome painting
We consider a simple population genetics model with recombination. We assume that at time 0, all individuals of a haploid population have their unique chromosome painted in a distinct color. At rare birth events, due to recombination (modeled as a single crossing-over), the chromosome of the newborn is a mosaic of its two parental chromosomes. The partitioning process is then defined as the color partition of a sampled chromosome at time t. When t is large, all individuals end up having the same chromosome.
I will discuss some results on the partitioning process at stationarity, concerning the number of colours and the description of a typical color cluster.
### 16h40--17h40 Remco van der Hofstad (Technische Universiteit Eindhoven)
#### Hypercube percolation
Consider bond percolation on the hypercube {0,1}^n at the critical probability p_c defined such that the expected cluster size equals 2^{n/3}, where 2^{n/3} acts as the cube root of the number of vertices of the n-cube. Percolation on the Hamming cube was proposed by Erdös and Spencer (1979), and has proved to be substantially harder than percolation on the complete graph. In this talk, I will describe the percolation phase transition on the hypercube, and show that it shares many features with that on the complete graph.
In previous work with Borgs, Chayes, Slade and Spencer, and with Heydenreich, we have identified the subcritical and critical regimes of percolation on the hypercube. In particular, we know that for p=p_c(1+O(2^{-n/3})), the largest connected component has size roughly 2^{2n/3} and that this quantity is non-concentrated. In work with Asaf Nachmias, we identify the supercritical behavior of percolation on the hypercube by showing that, for any sequence \epsilon_n tending to zero, but with \epsilon_n much larger than 2^{-n/3}, percolation at p_c(1+\epsilon_n) has, with high probability, a unique giant component of size (2+o(1))\epsilon_n 2^n. This also confirms the validity of the proposed critical value. Finally, we 'unlace' the proof by identifying the scaling of component sizes in the supercritical and critical regimes without relying on the percolation lace expansion. The lace expansion is a beautiful technique that is the major technical tool for high-dimensional percolation, but it is also quite involved and can have a disheartening effect on some.
https://chem.libretexts.org/Under_Construction/Purgatory/Core_Construction/Chemistry_30/Electrochemistry/2.6_Batteries | # 2.6 Batteries
Electrochemical cells used for power generation are called batteries. Although batteries come in many different shapes and sizes, there are a few basic types. You won't be required to remember the details of the batteries, but some general information and features of each type are presented here.
## 1. Primary batteries (dry cell batteries)
• non-rechargeable
• electrolytes are present as a paste rather than as a liquid
• general purpose battery used for flashlights, transistor radios, toys, etc.
The basic dry cell battery consists of: a zinc case as the anode (oxidation); a graphite rod as the cathode (reduction), surrounded by a moist paste of either MnO2, NH4Cl, and ZnCl2 or, in alkaline dry cells, a KOH electrolytic paste.
• General reactions for the battery - manganese(IV) oxide-zinc cell (different batteries have different reactions - you don't need to remember any of these reactions)
• cathode: $$\ce{2MnO2(s) + 2NH4+ + 2e- -> Mn2O3(s) + H2O(l) + 2NH3(aq)}$$
• anode: $$\ce{Zn(s) -> Zn^{2+}(aq) + 2e-}$$
Maximum voltage is 1.5 V. By connecting several cells in series, 90 V can be achieved (90 V / 1.5 V = 60 cells).
• Advantages of alkaline batteries - consistent voltage, increased capacity, longer shelf-life, and reliable operation at temperatures as low as -40°C
## 2. Secondary Batteries (storage batteries)
• rechargeable
an example is the lead-acid battery used in cars. The anode is a grid of lead-antimony or lead-calcium alloy packed with spongy lead; the cathode is lead(IV) oxide. The electrolyte is aqueous sulfuric acid. The battery consists of numerous small cells connected in parallel (anode to anode; cathode to cathode).
• General reaction:
• cathode: $$\ce{PbO2(s) + 4H+ + SO4^{2-}(aq) + 2e- -> PbSO4(s) + 2H2O(l)}$$
• anode: $$\ce{Pb(s) + SO4^{2-}(aq) -> PbSO4(s) + 2e-}$$
• Secondary batteries are recharged by passing a current through the battery in the opposite direction. In a car battery this occurs when the engine is running.
Other examples include the nickel-iron alkaline battery, nickel-zinc battery, nickel-cadmium alkaline battery, silver-zinc battery, and silver-cadmium battery.
## 3. Fuel Cells
Fuel cells are electrochemical cells that convert the energy of a redox combustion reaction directly into electrical energy. They require a continuous supply of reactants and constant removal of products.
The cathode reactant is usually air or pure oxygen; the anode fuel is a gas such as hydrogen, methane, or propane. The carbon electrodes typically contain a catalyst, and the electrolyte is typically KOH.
• General reaction:
• cathode: $$\ce{O2(g) + 2H2O(l) + 4e- -> 4OH-(aq)}$$
• anode: $$\ce{2H2(g) + 4OH-(aq) -> 4H2O(l) + 4e-}$$
• net: $$\ce{2H2(g) + O2(g) -> 2H2O(l)}$$
• Advantages - no toxic waste products (water is the only product); very efficient energy conversion (70-80% efficient)
• Disadvantage - too expensive for large-scale use. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5015158653259277, "perplexity": 8903.715742676404}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987834649.58/warc/CC-MAIN-20191023150047-20191023173547-00437.warc.gz"} |
https://classes.areteem.org/mod/forum/discuss.php?d=299&parent=642 | ## Online Course Discussion Forum
### MC2A Help 1
Re: Thanks
Hey David. The hints I put in there had the wrong numbers, but were for the problems you were asking (it said 16 instead of 15 and 17 instead of 16, but it is fixed now). | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9634132385253906, "perplexity": 1588.5881218285408}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999964.8/warc/CC-MAIN-20190625233231-20190626015231-00018.warc.gz"} |
https://www.physicsforums.com/threads/kinetic-theory.559123/ | # Kinetic Theory
1. Dec 11, 2011
### luigihs
Can someone explain this theory to me, and how to use the equation, please?
I have this in my notes but I don't understand :(
Average (translational) kinetic energy per molecule is
E = (3/2)kT
The same, per mole, is U = (3/2)RT
2. Dec 11, 2011
### sophiecentaur
Hi
The basics of deriving this involve quite a long string of steps and come under the heading of 'bookwork'. I think you should just sit down with the book and follow it through. Otherwise you can just accept it.
If you don't have 'a book', then Wiki would be a way forward. Start with the Boltzmann distribution.
3. Dec 11, 2011
### technician
Do you recognise the experimental equation for the gas laws in the form
PV = nRT, where n = number of moles?
So for 1 mole the experimental law is PV = RT.
The kinetic theory leads to the expression PV = (N/3) x mc^2, where N is the number of molecules and c^2 is the mean square speed.
If this equation is written as 2(N/3) x 0.5mc^2 it makes no difference, but it does highlight the combination 0.5mc^2, which is the average KE of the molecules.
Putting the experimental equation and the theoretical equation together leads to
RT = 2(N/3) x 0.5mc^2, or 0.5mc^2 = (3/2)T(R/N),
so average KE = (3/2)T(R/N).
R is the gas constant and N is the number of molecules in 1 mole (Avogadro's number).
The combination R/N of these constants is known as Boltzmann's constant, symbol k.
Therefore average KE = (3/2)kT.
Hope this helps
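As a quick numerical check (an addition, not from the original thread), plugging room temperature into both formulas with the standard constants gives:

```python
k = 1.380649e-23  # Boltzmann constant, J/K
R = 8.314         # gas constant, J/(mol K)
T = 300.0         # room temperature, K

E_molecule = 1.5 * k * T  # average translational KE per molecule
U_mole = 1.5 * R * T      # the same, per mole

print(f"per molecule: {E_molecule:.2e} J")  # ~6.21e-21 J
print(f"per mole:     {U_mole:.0f} J")      # ~3741 J
```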
https://codegolf.stackexchange.com/questions/86880/hofstadter-q-sequence/88644 | # Definition
1. a(1) = 1
2. a(2) = 1
3. a(n) = a(n-a(n-1)) + a(n-a(n-2)) for n > 2 where n is an integer
Given positive integer n, generate a(n).
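For reference (an addition, not part of the original challenge post), a straightforward memoized implementation reproduces the test cases below:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def a(n):
    # a(1) = a(2) = 1; otherwise apply the Hofstadter Q recurrence
    if n <= 2:
        return 1
    return a(n - a(n - 1)) + a(n - a(n - 2))

print([a(n) for n in range(1, 21)])
# [1, 1, 2, 3, 3, 4, 5, 5, 6, 6, 6, 8, 8, 8, 10, 9, 10, 11, 11, 12]
```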
# Testcases
n a(n)
1 1
2 1
3 2
4 3
5 3
6 4
7 5
8 5
9 6
10 6
11 6
12 8
13 8
14 8
15 10
16 9
17 10
18 11
19 11
20 12
# Reference
• Can we return True in languages where it can be used as 1? – Dennis Jul 29 '16 at 15:53
• @Dennis If in that language true is equivalent to 1 then yes. – Leaky Nun Jul 29 '16 at 15:54
• Apart from the OEIS link it might be good to reference GEB where the sequence first appeared. – Martin Ender Jul 29 '16 at 16:16
• Completing the list of GEB-related sequence challenges. – Martin Ender Jul 29 '16 at 17:39
# Racket, 63 bytes
(define(a n)(if(> n 2)(for/sum([m'(1 2)])(a(- n(a(- n m)))))1))
# ><>, 65+2 = 67 bytes
```
^n;
.+]{0$v1}\
v2} @2->1[
v3}>- /:::1-1[
>4}:2)?^~~1]{0$.
/0$1[
```

Input needs to be present on the stack at program start, so +2 bytes for the `-v` flag. Try it online!

More ridiculously slow recursive madness. The test case for 20 on TIO takes 20.5 seconds, so use larger inputs at your own risk.

# Clojure, 86 bytes

```
(defn a[n](cond(< 0 n 3)1 1(+(a(- n(a(dec n))))(a(- n(a(- n 2)))))))
```

Very literal.

```
(defn a [n]
  (cond
    (< 0 n 3) 1                        ; Return 1 if n is 1 or 2
    :else (+ (a (- n (a (dec n))))     ; Else, recurse 4 times and do some math
             (a (- n (a (- n 2)))))))

(doseq [n (range 1 21)]
  (println n (a n)))
```

Output:

```
1 1
2 1
3 2
4 3
5 3
6 4
7 5
8 5
9 6
10 6
11 6
12 8
13 8
14 8
15 10
16 9
17 10
18 11
19 11
20 12
```

- I think you can get rid of the 0 in the < statement, because the challenge specs state that the input is a positive integer. – clismique Feb 16 '17 at 9:50
- @Qwerp-Derp ohh, thanks. I'll fix that when I get on my laptop. – Carcigenicate Feb 16 '17 at 11:21

## Lithp, 70 bytes (non-competing)

```
(def a #N::((if(<= N 2)(1)((+(a(- N(a(- N 1))))(a(- N(a(- N 2)))))))))
```

Warning: incredibly slow. Very recursive. Implements the exact algorithm in the challenge. Non-competing because the language is newer than the challenge. Try it online!

An alternate solution that is much faster, using caching of results:

## Lithp, 166 bytes

```
((def a #N::((if(<= N 2)(1)((+(b(- N(b(- N 1))))(b(- N(b(- N 2)))))))))(var C(dict))
(def b(scope #N::((if(!(dict-present C N))((dict-set C N(a N))))(dict-get C N)))))
```

Try it online!

- Just curious, why did you make functions like (def a #N::(+ N 1)), where a is a successor? – clismique Feb 16 '17 at 9:27
- I'm sorry, I don't quite understand you. What do you mean by successor? – Andrakis Feb 16 '17 at 9:39
- A successor function is a function that increments a number, but that's besides the point. I'm just curious about the way to define functions in Lithp - why did you choose to do #arg:: when defining functions? I haven't really seen that done in a Lisp-like before. – clismique Feb 16 '17 at 9:40
- Ah, thank you. Firstly, lowercase names are atoms. Names beginning with an uppercase are variables. It follows Erlang's design in this way. Next, I struggled to read most Lisp code, though I loved the elegance of it. The way I've designed my syntax is to be easy to read, and an anonymous function (format: #[Args,...] :: ( calls .. )) makes it easy to see what arguments are being passed. It's only sort-of Lisp-like really. I like the elegance of the braces, and it's easy to parse. – Andrakis Feb 16 '17 at 9:47
- Oh, that's how it works, thanks! I was looking at the arguments and it seemed a bit weird, but now that you've explained it I can understand why it's like that. – clismique Feb 16 '17 at 9:49

# PHP, 56 bytes

```
function q($n){return$n<3?:q($n-q($n-1))+q($n-q($n-2));}
```

recursive function; requires PHP 5.6 or later (or replace ?: with ?1:).
http://math.stackexchange.com/questions/390393/finding-the-base-of-a-triangle?answertab=active | # finding the base of a triangle
In a triangle ABC, AB = AC. D is a point inside the triangle such that AD = DC. The median on AC from D meets the median on BC from A at the centroid of the triangle. If the area of triangle ABC equals $4\sqrt 3$, find the base, i.e., BC. The method that I have used to solve this problem works by ending with two answers for $\frac12$ of BC and then having to check which one works by trial and error. It is more or less efficient, but I am assuming there's a better one.
If $AD=DC$ like you said, then $\Delta ADC$ is isosceles. We also have that triangle $\Delta ABC$ is isosceles, since $AB=AC$. A centroid is the intersection of the three medians of a triangle, each joining a vertex to the midpoint of the opposite side; thus, if the median on $AC$ from $D$ meets the median on $BC$ from $A$ at the centroid, both medians must pass through it. Then we can conclude that the median from $D$ on $AC$ lies on the same line as the median from $B$ on $AC$. This median forms an angle of $90°$ with side $AC$ since $\Delta ADC$ is isosceles. Now that we know triangle $\Delta ABC$ has two medians that are also perpendicular bisectors, it is easy to see that triangle $\Delta ABC$ is equilateral and $AB = BC = CA.$
$$[\Delta ABC]= \cfrac 12 BC^2\sin 60° = \cfrac{\sqrt 3}{4}BC^2=4\sqrt 3 \implies BC=4$$
This is my own solution. First, we know that triangle ABC is isosceles. We also know that triangle ADC is isosceles, so the median of that triangle bisects AC at a right angle. Note that the side opposite angle B is AC, the same as for angle D. The median from angle B, like that from angle D, will also pass through the centroid. So we can draw the inference that the medians from angles B and D are actually the same straight line. But then the median from angle B bisects AC perpendicularly. So AB = BC = AC (since the median of an isosceles triangle bisects the base perpendicularly). This proves that ABC is an equilateral triangle. Now let the median on BC bisect BC at M. Then, by the congruency criterion, triangles ABM and AMC are congruent, and so the area of each of them is $2\sqrt{3}$.
We know that $2\sqrt{3} = \frac12(2\times 2\times \sqrt{3})$, so MC equals either $2\sqrt{3}$ or simply $2$. Now let us use the Pythagorean theorem to see which fits. It turns out that $2$ is the only solution for MC, so $BC = 4$. That is how I solved it.
I do not get it. Why are we having different answers? – user77646 May 13 '13 at 14:25
And it is high time I started telling people to use elementary geometry to solve my problems. – user77646 May 13 '13 at 14:27
I made a mistake, forgot the $\frac 12$ in the area of the triangle – user31280 May 13 '13 at 15:09
http://quant.stackexchange.com/questions/8478/close-form-for-stochastic-integral?answertab=active | # close form for stochastic integral
I am new to stochastic calculus. Can I ask how to compute the closed-form solution for $$\int_0^t \exp(\alpha s - \sigma W_s) \; ds$$ and $$\int_0^t \exp(\alpha s - \sigma W_s) \; dW_s?$$ I encountered these when trying to solve the following SDE: $$dX_t = \theta(\mu - X_t)\; dt + \sigma X_t \; dW_t$$
If the SDE is written correctly, that is not an Ornstein-Uhlenbeck process and your integrals don't seem to match it either. An O-U process has additive noise (i.e., diffusion function is not a function of the state variable) while the SDE as written has multiplicative noise. Also, an O-U process definitely does have a known analytical solution (see Doob, Ann. Math. 43, 1942). – horchler Jul 16 '13 at 18:29
@n.c. Your comment isn't accurate, unfortunately. As "horchler" pointed out, the Ornstein-Uhlenbeck process does NOT have multiplicative noise, unlike the process posted in this question. To appropriately solve this SDE, consider applying Itô's lemma to $Y_t = \ln(X_t)$ – Mariam Aug 21 '13 at 16:23
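For context, here is a sketch of the variation-of-constants route these integrals come from (an addition for the reader, not part of the original thread). Applying Itô's lemma to the homogeneous equation $dX_t = -\theta X_t\,dt + \sigma X_t\,dW_t$ gives the fundamental solution, and matching the integrals in the question requires the assumption $\alpha = \theta + \sigma^2/2$: $$\Phi_t = \exp\left(-\left(\theta + \tfrac{\sigma^2}{2}\right)t + \sigma W_t\right), \qquad X_t = \Phi_t\left(X_0 + \theta\mu\int_0^t e^{(\theta + \sigma^2/2)s - \sigma W_s}\, ds\right).$$ One can verify with Itô's product rule that this $X_t$ solves $dX_t = \theta(\mu - X_t)\,dt + \sigma X_t\,dW_t$. The remaining integral, $\int_0^t e^{\alpha s - \sigma W_s}\, ds$, is an exponential functional of Brownian motion and has no elementary closed form, so the solution is normally left in this integral representation.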
http://bitfunnel.org/strangeloop/ | BitFunnel performance estimation · BitFunnel
# BitFunnel performance estimation
Hi! I’m going to talk about two things today.
First, I’m going to talk about one way to think about performance. That is, one way you can reason about performance.
Second, I'm going to talk about search. We're going to look at search as a case study because, when talking about performance, it's often useful to have something concrete to reason about. We could use any problem domain. However, I think that the algorithm we're going to discuss today is particularly interesting because we use it in Bing, despite the fact that it's in a class of algorithms that's been considered obsolete for almost 20 years (at least as core search engine technology).
In case it’s not obvious, this is a psuedo-transcript of a talk given at StrangeLoop 2016. See this link if you’d rather watch the video. I wrote this up before watching my talk, so the text probably doesn’t match the video exactly.
BTW, when I say performance, I don’t just mean speed (latency), or speed (throughput). We could also be talking about other aspects of performance like power. Although our example is going to be throughput oriented, the same style of reasoning works for other types of performance.
Why do we care about performance? One answer is that we usually don't care, because most applications are fast enough. That's true! Most applications are fast enough. Spending unnecessary time thinking about performance is often an error.
However, when applications get larger, most applications become performance sensitive! This happens both because making a large application faster reduces its cost, and also because making a large application faster can increase its revenue. The second part isn’t intuitive to many people, but we’ll talk more about that later.
How do we think about performance? It turns out that we can often reason about performance with simple arithmetic. For many applications, even applications that take years to build, it's possible to estimate the performance before building the system with simple back-of-the-envelope calculations.
Here’s a popular tweet. It has 500 retweets! “Working code attracts people who want to code. Design documents attract people who want to talk.”
I get it. Coding feels like real work. Meetings, writing docs, creating slide decks, and giving talks don’t feel like work.
But when I look at outcomes, well, I often see two applications designed to do the same thing that were implemented with similar resources, where one application is 10x or 100x faster than the other. And when I ask around and find out why, I almost inevitably find that the team that wrote the faster application spent a lot of time on design. I tend to work on applications that take a year or two to build, so let's say we're talking about something that took a year and a half. For a project of that duration, it's not uncommon to spend months in the design phase before anyone writes any code that's intended to be production code. And when I look at the slower application, the team that created the slower application usually had the idea that "meetings and whiteboarding aren't real work" and jumped straight into coding.
The problem is that if you have something that takes a year and a half to build, if you build it, measure the performance, and then decide to iterate, your iteration time is a year and a half, whereas on the whiteboard, it can be hours or days. Moreover, if you build a system without reasoning about what the performance should be, when you build the system and measure its performance, you’ll only know how fast it runs, not how fast it should run, so you won’t even know that you should iterate.
It's common to hear advice like "don't optimize early, just profile and then optimize the important parts after it works". That's fine advice for non-performance-critical systems, but it's very bad advice for performance-critical systems, where you may find that you have to re-do the entire architecture to get as much performance out of the system as your machine can give you.
Before we talk about performance, let’s talk about scale. Because people often mean different things when they talk about scale, I’m going to be very concrete here.
Since we’re talking about search, let’s imagine a few representative corpus sizes we might want to search: ten thousand, ten million, and ten billion documents.
And let’s assume that each document is 5kB. If we’re talking about the web, that’s a bit too small, and if we’re talking about email, that’s a bit too big, but you can scale this number to whatever corpus size you have.
BTW, the specific problem we're going to look at is: we have a corpus of documents that we want to be able to search, and we're going to handle AND queries.
That is, queries of the form, I want this word, and this word, and this word. For example, I want the words large AND yellow AND dog. The systems we’ll look at today can handle ORs and NOTs, but those aren’t fundamentally different and talking about them will add complexity, so we’ll only look at AND queries.
First, let’s consider searching ten thousand documents at 5kB per doc.
If you want to get an idea of how big this is, you can think of this as email search (for one person) or forum search (for one forum) in a typical case.
a k times a k is a million, and five times ten is fifty, so 5kB times ten thousand is 50MB.
50MB is really small!
Today, for $50, you can buy a phone off amazon that has 1GB of RAM. 50MB will easily fit in RAM, even on a low-end phone. If our data set fits in RAM and we have 50MB, we can try the most naive thing possible and basically just grep through our data. If you want something more concrete, you can think of this as looping over all documents, and for each document, looping over all terms. Since we only need to handle AND queries, we can keep track of all the terms we want, and if a document has all of the terms we want, we can add that to our list of matches.

Ok. So, for ten thousand documents, the most naive thing we can think of works. What about ten million documents?

If you want to get a feel for how big ten million documents is, you can think of it as roughly wikipedia-sized. Today, English language wikipedia has about five million documents. 5kB times ten million is 50GB. This is really close to wikipedia's size – today, wikipedia is a bit over 50GB (uncompressed articles in XML, no talk, no history).

We can't fit that in RAM on a phone, and we'd need a pretty weird laptop to fit that in RAM on a laptop, but we can easily fit that in RAM on a low-budget server. Today, we can buy a $2000 server that has 128GB of RAM.
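Here's what that naive matcher might look like as code (a sketch with made-up documents, added for illustration; the talk doesn't prescribe an implementation):

```python
def and_match(documents, query_terms):
    """Naive 'grep' matcher: scan every document and keep the ones that
    contain every query term. documents maps doc_id -> text."""
    wanted = {t.lower() for t in query_terms}
    matches = []
    for doc_id, text in documents.items():
        terms = set(text.lower().split())
        if wanted <= terms:  # all query terms present in this document
            matches.append(doc_id)
    return matches

docs = {1: "large yellow dog", 2: "small yellow cat", 3: "a large loud yellow dog"}
print(and_match(docs, ["large", "yellow", "dog"]))  # [1, 3]
```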
What happens when we try to run our naive grep-like algorithm? Well, our cheap server can get 25GB/s of bandwidth…
… and we have 50GB of data. That means that it takes two seconds to do one search query!
And while we’re doing a query, we’re using all the bandwidth on the machine, so we can’t expect to do anything else on the machine while queries are running, including other queries. This implies that it takes two seconds to do a query, or that we get one-half a query per second, i.e., 1/2 QPS.
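The arithmetic is simple enough to check in a few lines (the inputs are the assumed numbers from above, not measurements):

```python
corpus_bytes = 50e9            # 50GB of documents
bandwidth_bytes = 25e9         # 25GB/s of memory bandwidth
seconds_per_query = corpus_bytes / bandwidth_bytes
print(seconds_per_query)       # 2.0 seconds per query
print(1 / seconds_per_query)   # 0.5 queries per second, i.e., 1/2 QPS
```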
Is that ok? Is two seconds of latency ok? It depends.
For many applications, that’s totally fine! I know a lot of devs who have an internal search tool (often over things like logs) that takes a second or two to return results. They’d like to get results back faster, but given the cost/benefit tradeoff, it’s not worth optimizing things more.
How about 1/2 QPS? It depends.
As with latency, a lot of devs I know have a search service that’s only used internally. If you have 10 or 20 devs typing in queries at keyboards, it’s pretty unlikely that they’ll exceed 1/2 QPS with manual queries for any sustained period, so there’s no point in creating a system that can handle more throughput.
Our naive grep-like algorithm is totally fine for many search problems!
However, as services get larger, two seconds of latency can be a problem.
If we look at studies on latency and revenue, we can see a roughly linear relationship between latency and revenue over a pretty wide range of latencies.
Amazon found that every 100ms of latency cost them more than 1% of revenue. Google once found that adding 500ms of latency, or half a second, cost them 20% of their users.
This isn’t only true of large companies – when Mobify looked at this, they also found that 100ms of latency cost them more than 1% of revenue. For them, 1% was only \$300k or so. But even though I say “only”, that’s enough to pay a junior developer for a year. Latency can really matter!
Here’s a query from some search engine. The result came back in a little over half a second. That includes the time it takes to register input on the local computer, figure out what to do with the input, send it across the internet, go into some set of servers somewhere, do some stuff, go back across the internet, come back into the local computer, do some more stuff, and then render the results.
That’s a lot of stuff! If you do budgeting for a service like this and you want queries to have a half-second end-user round-trip budget, you’ll probably only leave tens of milliseconds to handle document matching on the machines that receive queries and tell you which documents matched the queries. Two seconds of latency is definitely not ok in that case.
Furthermore, for a service like Bing or Google, provisioning for 1/2 QPS is somewhat insufficient.
What we can do? Maybe we can try using an index instead of grepping through all documents.
If we use an index, we can get widely varying performance characteristics. Asking what the performance is like if we “use an index” is like asking what the performance is like if we “use an algorithm”. It depends on the algorithm!
Today, we’ll talk about how to get performance in the range of thousands to tens of thousands of queries per second, but first…
… let’s finish our discussion about scale and talk about how to handle ten billion documents.
We’ve said that we can, using some kind of index, serve ten million documents from one machine with performance that we find to be acceptable. So how about ten billion?
With ten billion documents at 5kB apiece, we’re looking at 50TB. While it’s possible to get a single machine with 50TB of RAM, this approach isn’t cost-effective for most problems, so we’ll look at using multiple cheap commodity machines instead of one big machine.
Search is a relatively easy problem to scale horizontally; that is, it’s relatively easy to split a search index across multiple machines. One way to do this (and this isn’t the only possible way) is to put different documents on different machines. Queries then go to all machines, and the result is just the union of all queries.
Since we have ten billion documents, and we’re assuming that we can serve ten million documents on a machine, if we split up the index we’ll have a thousand machines.
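As a sketch of that fan-out, reusing the naive match_and matcher from earlier purely for illustration (a real system would fan out over the network and run the shards in parallel):

```python
def distributed_query(shard_corpora, query_terms):
    """Query every shard and take the union of the results."""
    results = set()
    for shard_id, corpus in enumerate(shard_corpora):
        # (shard, doc) pairs keep document ids unique across shards
        for doc_id in match_and(corpus, query_terms):
            results.add((shard_id, doc_id))
    return results
```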
That’s ok, but if we have a cluster of a thousand machines and the cluster is in Redmond, and we have a customer in Europe, that could easily add 300ms of latency to the query. We’ve gone through all the effort of designing an index that can return a query in 10ms, and then we have customers that lose 300ms from having their queries go back and forth over the internet.
Instead of having a single cluster, we can use multiple clusters all over the world to reduce that problem.
Say we use ten clusters. Then we have ten thousand machines.
With ten thousand machines (or even with a thousand machines), we have another problem: given the failure rate of commodity hardware, with ten thousand machines, machines will be failing all the time. At any given time, in any given cluster, some machines will be down. If, for example, the machine that’s indexing cnn.com goes down and users who want to query that cluster can’t get results from CNN, that’s bad.
In order to avoid the loss of sites from failures, we might triple the number of machines for redundancy, which puts us at thirty thousand machines.
With thirty thousand machines, one problem we have is that we now have a distributed system. That’s a super interesting set of problems, but it’s beyond the scope of this talk.
Another problem we have is that we have a service that cost a non-trivial amount of money to run. If a machine costs a thousand dollars per year (amortized cost, including the cost of building out datacenters, buying machines, and running the machines), that puts us at thirty-million dollars a year. By the way, a thousand dollars a year is considered to be a relatively low total amortized cost. Even if we can hit that low number, we’re still looking at thirty-million dollars a year.
At thirty-million a year, if we can double the performance and halve the number of machines we need, that saves us fifteen-million a year. In fact, if we can even shave off one percent on the running time of a query, that would save three-hundred thousand dollars a year, saving enough money to pay a junior developer for an entire year.
Conventional wisdom often says that “machine time is cheaper than developer time, which means that you should use the most productive tools possible and not worry about performance”. That’s absolutely true for many applications. For example, that’s almost certainly true for any single-server rails app. But once you get to the point where you have thousands of machines per service, that logic is flipped on its head because machine time is more expensive than developer time.
Now that we’ve framed the discussion by talking about scale, let’s talk about search algorithms.
The problem we’re looking at is, given a bunch of documents, how can we handle AND queries.
The standard algorithm that people use for search indices is a posting list.
A posting list is basically what a layperson would call an index.
Here’s an index from the 1600s. If you look at the back of a book today, you’ll see the same thing: there’s a list of terms, and next to each term there’s a list pages that term appears on.
Computers don’t have pages in the same sense; if you want to imagine a simple version of a posting list, you can think of…
…a hash map from terms to linked lists of document ids. That is, a hash map where the key is a term and the value is a list.
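A sketch of that, with AND queries handled by intersecting the per-term document lists; Python sets stand in for the sorted lists or skip lists a real posting list implementation would use:

```python
from collections import defaultdict

def build_index(corpus):
    index = defaultdict(set)                # term -> set of document ids
    for doc_id, text in enumerate(corpus):
        for term in text.split():
            index[term].add(doc_id)
    return index

def query_and(index, terms):
    postings = [index.get(t, set()) for t in terms]
    return set.intersection(*postings) if postings else set()

index = build_index(["large yellow dog", "large cat", "yellow dog"])
print(query_and(index, ["large", "dog"]))   # {0}
```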
That’s one way to do it, and it’s standard. Another thing we could try to do is use Bloom Filters.
We do this in Bing in a system called BitFunnel. But before we can describe BitFunnel, we need to talk about how bloom filters work.
And before we talk about how bloom filters work, let’s consider a more naive solution we might construct.
One thing we might try would be to use something called an incidence matrix, that is, a 2d matrix where one dimension of the matrix is every single term we know about, and the other dimension is every single document we know about. Each entry in the matrix is a 1 if the term is in the document, and it’s a 0 if the term isn’t in the document.
What will the performance of that be?
Well, first, how many terms are there? How many terms do you think are on the internet? Let’s say we shard the internet a zillion ways and serve tens of millions of documents per server. How many unique terms do we have per server?
pause
someone shouts ten million
Turns out, when we do this, we can see tens of billions of terms per shard. This is often surprising to people. I’ve asked a lot of people this question, and people often guess that there are millions or billions of unique terms on the entire internet. But if you pick a random number under ten billion and search for it, you’re pretty likely to find it on the internet! So, there are probably more than ten billion terms on the internet!
In fact, if you limit the search to just github, you can find a single document with about fifty-million primes! And if you look at the whole internet, you can find a site with all primes under one trillion, which is over thirty-billion primes! If that site lands in a single shard, that shard is going to have at least thirty-billion unique terms. Turns out, a lot of people put long mathematical sequences online.
And in addition to numbers, there’s stuff that’s often designed to be unique, like catalog numbers, ID numbers, error codes, and GUIDs. Plus DNA! Really, DNA. Ok, DNA isn’t designed to be unique, but if you split it up into chunks of arbitrary numbers of characters, there’s a high probability that any N character chunk for N > 16 is unique.
There’s a lot of this stuff! One question you might ask is, do you need to index that stuff? Does anyone really search for GTGACCTTGGGCAAGTTACTTAACCTCTCTGTGCCTCAGTTTCCTCATCTGTAAAATGGGGATAATA?
It turns out, that when you ask people to evaluate a search engine, many of them will try to imagine the weirdest queries they can think of, try those, and then choose the search engine that handles those queries better. It doesn’t matter that they never do those queries normally. Some real people actually evaluate search engines that way. As a result, we have to index all of this weird stuff if we want people to use our search engine.
If we have tens of billions of terms, say we have thirty billion terms, how large is our incidence matrix? Even if we use a bit vector, one single document will take up thirty billion bits, which is thirty billion divided by eight bytes, or 3.75GB. And that’s just one document!
How can we shrink that? Well, since most documents don’t contain most terms, we can hash terms down to a smaller space. Instead of reserving one slot for each unique term, we only need as many slots as we have terms in a document (times a constant factor which is necessary for bloom filter operation).
That’s basically what a bloom filter is! For the purposes of this talk, we can think of a bloom filter as a data structure that represents a set using a bit vector and a set of independent hash functions.
Here, we have the term “large” and we apply three independent hash functions, which hashes the term to locations five, seven, and twelve. Having three hash functions is arbitrary and we’ll talk about that tradeoff later.
To insert “large” into the document, we’ll set bits five, seven, and twelve. To query for “large”, we’ll do the bitwise AND of those locations. That is, we’ll check to see if all three locations are 1. If any location is a 0, the result will be 0 (false) otherwise the result will be 1 (true). For any term we’ve inserted, the query will be 1 (true), because we’ve just set those bits.
In this series of diagrams, any bit that’s colored is a 1 and any bit that’s white is a 0. The red bits are associated with the term “large”.
We can insert another term: “dog”. To do so, we’ll set those bits, one, seven, and ten. Seven was already set by “large” (red), but it’s fine to set it again with “dog”; all bits that are yellow are associated with the term “dog”. If we query for the term, as before, we’ll get a 1 (true) because we’ve just set all the bits associated with the query.
We can also try querying a term that we didn’t insert into the document. Let’s say we query for “cat”, which happens to hash to three, ten, and twelve.
When we do the bitwise AND, we first look at bit three. Since bit three is a zero, we already know that the result will be 0 (false) before we look at the other bits and don’t have to look at bits ten and twelve.
Let’s try querying another term, “box”, and let’s say that term hashes to one, five, and ten.
Even if we don’t insert this term into the document, the query shows that the term is in the document because those bits were set by other terms. We have a false positive!
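Here’s a sketch of the single-document bloom filter we just walked through. The vector size, the three hash functions (simulated here by salting Python’s built-in hash), and the example terms are all illustrative assumptions:

```python
NUM_BITS, NUM_HASHES = 16, 3
bits = [0] * NUM_BITS

def locations(term):
    # simulate independent hash functions by salting a single hash
    return [hash((term, i)) % NUM_BITS for i in range(NUM_HASHES)]

def insert(term):
    for loc in locations(term):
        bits[loc] = 1

def query(term):
    # the bitwise AND of the term's locations: any 0 means "absent"
    return all(bits[loc] for loc in locations(term))

insert("large"); insert("dog")
print(query("large"), query("dog"))  # True True -- no false negatives
print(query("cat"))                  # usually False, but can be a false positive
```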
How bad is this problem? Well, what’s the probability that any query will return a false positive?
Let’s assume we have ten percent bit density. This is something we can control – for example, if we have a bit vector of length 100, and we have ten terms, each of which is hashed to one location, we expect the bit density to be slightly less than 10%. It would be 10% if no terms hashed to the same location, but it’s possible that some terms might collide and hash to the same location.
What’s the probability of a false positive if we hash to one location instead of three locations?
If the term is actually in the document, then we’ll set the bit, and if we do a query, since the bit was set, we’ll definitely return true, so there’s no probability of a false negative.
If the term isn’t in the document and we haven’t set the associated bit because of this term, what’s the probability the bit is set? Because our bit density is .1, or 10%, the probability is 10%.
What if we hash to two locations instead of one location? Since we’re assuming we have uniform 10% bit density, we can multiply the probabilities: we get .1 * .1 = .01 = 1%.
For three locations, the math is the same as before: .1 * .1 * .1 = .001 = 0.1%.
As we hash to more locations, if we don’t increase the size of the bit vector, the bit density will go up. Same amount of space, set more bits, higher bit density. So we have to increase the number of bits, and we have to increase the number of bits linearly. As we increase the number of bits linearly, we get an exponential decrease in the probability of a false positive.
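Spelled out numerically, holding the bit density fixed at 10% (which is what costs us linearly more bits as we add hash locations), the false positive probability is the density raised to the number of locations:

```python
density = 0.10
for k in range(1, 5):
    # k hash locations -> false positive probability of density**k
    print(k, density ** k)   # roughly 0.1, 0.01, 0.001, 0.0001
```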
One intuition as to why bloom filters work is that we pay a linear cost and get an exponential benefit.
Ok. We’ve talked about how to use a bloom filter to represent one document. Since our index needs to represent multiple documents, we’ll use multiple bloom filters.
In this diagram, each of the ten columns represents a document. That is, we have documents A through J.
One thing we could do is have ten independent bloom filters. We know that we can have one bloom filter represent one document, so why not use ten bloom filters for ten documents?
If we’re going to do that, we might as well maintain the same mapping from terms to rows; that is, use the same hash functions for each column, so that when we do a query, we can do the query in parallel.
In the single-document example, when we did a query, we did the bitwise AND of some bits. Now, to do a query, we’ll do the bitwise AND of rows of bits.
Now we’re going to query for all documents that have both “large” AND “dog”. As before, bits that are red are associated with the term “large” and bits that are yellow are associated with the “dog”. Additionally, bits that are grey are associated with other terms.
After we do the bitwise AND of all of the rows, the result will be a row vector with some bits set – those bits will be the documents that have both the terms “large” AND “dog”. We’re going to AND together rows one, five, seven, ten, and twelve and then look at the result.
In this diagram, on the right, the part that’s highlighted is the fraction of the query that we’ve done so far. On the left, the part that’s highlighted is the result of the computation so far.
When we start, we have row one.
When we AND rows one and five together, we can see that bit F is cleared to zero.
After we AND row seven into our result, nothing changes. Even though row seven has bit F set, an AND of a one and a zero is a zero, so the result in column F is still zero.
When we AND row ten in, bit I is cleared.
And then when we AND in the last row, nothing changes. The result of the query is that bit J is set. In other words, the query concludes that document J contains both the terms “large” AND “dog”, and no other document in this block contains both terms.
In our previous example, we queried a block of documents where at least one document contained both of the terms we cared about. We can also query a block of documents where none of the documents contain both of the terms.
As before, we want to take the bitwise AND of rows one, five, seven, ten, and twelve.
After we AND in row five, all of the bits are zero! When that happened in the “cat” example we did on a single document, we could stop because we knew that the document couldn’t possibly contain the term cat because we can’t set a bit by doing an AND. This same thing is true here, and we can stop and return that the result is all zeros.
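Here’s a sketch of that row-oriented query with early termination, using Python integers as bit vectors with one bit per document in the block. The row contents are made up for illustration:

```python
def query_rows(rows):
    result = ~0                     # start with all ones
    for rows_read, row in enumerate(rows, start=1):
        result &= row
        if result == 0:             # no document can match: stop early
            return 0, rows_read
    return result, len(rows)

# five made-up rows for a block of ten documents (bit i = document i)
rows = [0b1100101011, 0b1000111001, 0b1001101011, 0b1100011010, 0b1010001011]
matches, rows_read = query_rows(rows)
print(bin(matches), rows_read)      # surviving bits are the matching documents
```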
I said, earlier, that we’d try to estimate the performance of a system. How do we do that?
We’ll want to have a cost model for operations and then figure out what operations we need to do. For us, we’re doing bitwise ANDs and reading data from memory. Reading data from memory is so much more expensive than a bitwise AND that we can ignore the cost of the ANDs and only consider the cost of memory accesses. If we had any disk accesses, those would be even slower, but since we’re operating in memory, we’ll assume that a memory access is the most expensive thing that we do.
One bit of background is that on the machines that we run on, we do memory accesses in 512-bit blocks. So far, we’ve talked about doing operations on blocks of ten documents, but on the actual machine we can think of doing operations on 512 document blocks.
In that case, to get a performance estimate, we’ll need to know how many blocks we have, how many memory accesses (rows) we have per block, and how many memory accesses our machine can do per unit time.
To figure out how many memory accesses per block we want, we could work through the math…
…which is a series of probability calculations that will give us some number. I’m not going to do that here today, but it’s possible to do.
Another thing we can do is to run a simulation. Here’s the result of a simulation that was maybe thirty lines of code. This graph is a histogram of how many memory accesses we have to do per block, assuming we have 20% bit density, and a query that’s 14 rows.
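A sketch of that kind of simulation (not the actual BitFunnel code): generate random rows at 20% bit density for a 512-document block, AND them together, and count how many rows get touched before the accumulator hits zero. This only models blocks where no document matches; a matching block always reads all 14 rows:

```python
import random

BLOCK, DENSITY, ROWS = 512, 0.20, 14

def accesses_per_block():
    """Rows read before the AND of random rows reaches zero."""
    result = (1 << BLOCK) - 1
    for i in range(1, ROWS + 1):
        row = sum(1 << b for b in range(BLOCK) if random.random() < DENSITY)
        result &= row
        if result == 0:
            return i
    return ROWS

trials = [accesses_per_block() for _ in range(1_000)]
print(sum(trials) / len(trials))    # roughly 4 to 5 accesses on average
```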
If 14 rows sounds like a lot, well, we often do queries on 20 to 100 rows. That might sound weird, since we looked at an example where each term mapped to three rows. For one thing, terms can and sometimes do map to more than three rows. Additionally, we do query re-writing that makes queries more complicated (and hopefully better).
For example, let’s say we query for “large” AND “yellow” AND “dog”.
Maybe the user was actually searching for or trying to remember the name of some breed of large yellow dog, so we could re-write the query to be something like
(large AND yellow AND dog) OR (golden AND retriever)
as well as other breeds of dogs that can be large and yellow.
But the user might also be searching for some particular large yellow dog, so we could re-write the query to something like
(large AND yellow AND dog) OR (golden AND retriever) OR (old AND yeller)
and in fact we might want to query for the phrase “old yeller” and not just the AND of the terms, and so on and so forth.
When you do this kind of thing and add in personalization based on location and query history, simple-seeming queries can end up being relatively complicated, which is how we can get queries of 100 rows.
Coming back to the histogram of the number of memory accesses per block, we can see that it’s bimodal.
There’s the mode on the right, where we do 14 accesses. That mode corresponds to our first multi-document example, where at least one document in the block contained the terms. Because at least one document contained all of the terms, we don’t get all zeros in the result and do all 14 accesses.
The mode on the left, which is smeared out from 3 on to the right, is associated with blocks like our second example, where no document contained all of the terms in the query. In that case we’ll get a result of all zeros at some point with very high probability, and we can terminate the query early.
If we look at the average of the number of accesses we need for the left mode, it’s something like 4.6. On the right, it’s exactly 14. If we average these together, let’s say we get something like 5 accesses per query (just to get a nice, round, number).
Now we have what we need to do a first-order performance estimate!
If we go back to our roughly wikipedia-sized example, we had ten million documents. Since we’re on a machine where memory accesses are 512 bits wide, that’s ten million divided by 512, which is twenty-thousand blocks, with a bit of rounding.
We said that we have roughly five memory accesses per query. If we have twenty-thousand blocks, that means that a query needs to do twenty-thousand times five memory accesses, or one hundred-thousand memory transfers.
We said that we can get 25GB/s of bandwidth out of our cheap server. If we do 512-bit transfers, that’s three-hundred and ninety-million transfers per second.
If we divide a hundred thousand transfers per query into three hundred and ninety million transfers per second, we get thirty-nine hundred QPS (with rounding from the previous calculations).
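The whole first-order estimate fits in a few lines (every input is one of the assumed numbers from above):

```python
docs = 10_000_000
blocks = docs / 512                        # ~20,000 512-document blocks
accesses_per_query = blocks * 5            # ~100,000 memory accesses
bandwidth_bytes = 25e9                     # bytes per second
transfers_per_sec = bandwidth_bytes / 64   # 512-bit (64-byte) transfers
print(transfers_per_sec / accesses_per_query)   # ~3,900-4,000 QPS
```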
When I do a calculation like this, if I’m just looking at the largest factors that affect performance, like we did here, I’m happy if we get within a factor of two.
If you adjust for a lot of smaller factors, it’s possible to get a more accurate estimate…
…but in the interest of time, we’re not going to look at all the smaller factors that add or remove 5% or 10% in performance.
However, there are a few major factors that affect performance a lot that I’ll briefly mention.
One thing is that our machines don’t only do document matching. So far, we’ve discussed an algorithm that, given a set of documents and a query will return a subset of those documents. We haven’t done any ranking, meaning that queries will come back unordered.
There are some domains where that’s fine, but in web search, we spend a significant fraction of CPU time ranking the documents that match the query.
Additionally, we also ingest new documents all the time. When news happens and people search for the news, they want to see it right away, so we can’t do batch updates.
This is something BitFunnel can actually do faster than querying. If we think about how queries worked, they’re global, in the sense that each query looked at information for each document. But when we’re ingesting new documents, since each document is a column, that’s possible to do without having to touch everything in the index. In fact, since our data structure is, in some sense, just an array that we want to set some bits in, it’s pretty straightforward to ingest documents with multiple threads while allowing queries with multiple threads.
It’s possible to work through the math for this the way we did for querying, but again, in the interest of time, I’ll just mention that this is possible.
Between ranking and ingestion, in the configuration we’re running today, that uses about half the machine, leaving half for matching, which reduces our performance by a factor of two.
However, we also have an optimization that drastically increases performance, which is using hierarchical bloom filters.
In our example, we had one bloom filter per document, which meant that if we had a query that only matched a single document, we’d have to examine at least one bit per document. In fact, we said that we’d end up looking at about five bits per document. If we use hierarchical bloom filters, it’s possible to look at a log number of bits per document instead of a linear number of bits per document.
The real production system we use has a number of not necessarily obvious changes in order to run at the speed that it does. Most of them aren’t required for the system to work correctly without taking up an unreasonable amount of memory, but one is.
If you take the algorithm I described it today and try to use it, when you look at sixteen rows in a block of ten documents, you might see something like this.
Notice that some columns (B and D) have most or all bits set, and some columns (A and C) have few or no bits set. This is because different documents have a different number of terms.
Let’s say we sized the number of rows so that we can efficiently store tweets. Let’s say, hypothetically, that means we need fifty rows. And then a weird document with ten million terms comes along and it wants to hash into the rows, say, thirty million times. That’s going to set every bit in its column, which means that every query will return true. Many weird documents like this contain terms that are almost never queried, so the query should almost never return true, but our system will always return true!
Say we size up the number of rows so that these weird ten million term documents are ok. Let’s say that means we need to have a hundred million rows. Ok, our queries will work fine, but we still have things like tweets that might want to set, say, sixteen bits. We said that we wanted to use bloom filters instead of arrays to save space by hashing to reduce the size of our array, but now we have all of these really sparse columns that have something like sixteen out of a hundred million bits set.
To get around this problem, we shard (split up the index) by the number of terms per document. Unlike many systems, which only run in a sharded configuration when they need to spill over onto another machine, we always run in a sharded configuration, even when we’re running on a single machine.
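As a sketch, routing by term count might look like the following; the shard boundaries here are made-up numbers, and picking them well is part of the real tuning work:

```python
SHARD_MAX_TERMS = [64, 512, 4096, 65536]   # assumed per-shard caps

def shard_for(doc_terms):
    """Route a document to an index sized for its term count."""
    n = len(set(doc_terms))
    for shard_id, cap in enumerate(SHARD_MAX_TERMS):
        if n <= cap:
            return shard_id
    return len(SHARD_MAX_TERMS)            # catch-all for huge documents

print(shard_for("a short tweet sized document".split()))   # 0
```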
Although there are other low-level details that you’d want to know to run an efficient system, this is the only change that you absolutely have to take into account when compared to the algorithm I’ve described today.
Let’s sum up what we’ve looked at today.
Before we talk about the real conclusions, let’s discuss a few false impressions this talk could give.
“Search is simple”.
You’ve seen me describe an algorithm that’s used in production for web search. The algorithm is simple enough that it could be described in a thirty-minute talk with no background. However, to run this algorithm at the speed we’ve estimated today, there’s a fair amount of low-level implementation work. For example, to reduce the (otherwise substantial) control flow overhead of querying and ranking, we compile both our queries and our query ranking.
Additionally, even if this system were simple, this is less than 1% of the code in Bing. Search has a lot of moving parts and this is just one of them.
“Bloom filters are better than posting lists”.
I went into some detail about bloom filters and didn’t talk about posting lists much, except to say that they’re standard. This might give the impression that bloom filters are categorically better than posting lists. That’s not true! I only didn’t describe posting lists in detail and do a comparison because state-of-the-art posting list implementations are tremendously complicated and I couldn’t describe them to a non-specialist audience in thirty minutes, let alone do the comparison.
If you do the comparison, you’ll find that when one is better than the other depends on your workload. For an argument that posting lists are superior to bloom filters, see Zobel et al., “Inverted files versus signature files for text indexing”.
“You can easily reason about all performance”.
Today, we looked at how an algorithm works and estimated the performance of a system that took years to build. This was relatively straightforward because we were trying to calculate the average throughput of a system, which is something that’s amenable to back-of-the-envelope math. Something else that’s possible, but slightly more difficult, is to estimate the latency of a query on an unloaded system.
Something that’s substantially harder is estimating the latency on a system as load varies, and estimating the latency distribution.
Ok, now for an actual conclusion.
You can often reason about performance…
…and you can do so with simple arithmetic. Today, all we did was multiply and divide. Sometimes you might have to add, but you can often guess what the performance of a system should be with simple calculations.
Thanks to all of these people for help with this talk! Also, I seem to have forgotten to put Bill Barnes on the list, but he gave me some great feedback!
Post original talk: also, thanks to Laura Lindzey, Larry Marbuger, and someone’s name who I can’t remember for giving me great post-talk feedback that changed how I’m giving the next talk.
If you want to read more about the index we talked about today, BitFunnel, you can get more information at bitfunnel.org. We also have some code up at github.com/bitfunnel/bitfunnel.
Oh, yeah, I’m told you have to introduce yourself at these things. I’m Dan Luu, and I have a blog at danluu.com where I blog about the kind of thing I talked about here today. That is, I often write about performance, algorithms and data structures, and tradeoffs between different techniques.
Thanks for your time. Oh, also, I’m not going to take questions from the stage because I don’t know how people who aren’t particularly interested in the questions often feel obligated to stay for the question period. However, I really enjoy talking about this stuff and I’d be happy to take questions in the hallway or anytime later.
#### Some comments on the talk
Phew! I survived my first conference talk.
Considering how early the talk was (10am, the first non-keynote slot), I was surprised that the room was packed and people were standing. Here’s a photo Jessica Kerr took (and annotated) while we were chatting, maybe five or ten minutes before the talk started, before the room really filled up:
During the conference, I got a lot of positive comments on the talk, which is great, but what I’d really love to hear about is where you were confused. If you felt lost at any point, you’d be doing me a favor by letting me know what you found to be confusing. Before I run this talk again, I’m probably going to flip the order of some slides in the Array/Bloom Filter/BitFunnel discussion, add another slide where I explicitly talk about bit density, and add diagrams for a HashMap (in the posting list section) and an Array (in the lead-up to bloom filters). There are probably more changes I could make to make things clearer, though!
Dan Luu
Prior to working on BitFunnel, Dan worked on network virtualization hardware at Microsoft (SmartNIC), deep learning hardware at Google (TPU), and x86/ARM processors at Centaur. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5330168008804321, "perplexity": 799.7817426905062}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463612283.85/warc/CC-MAIN-20170529111205-20170529131205-00150.warc.gz"} |
Convergence of algorithms for reconstructing convex bodies Richard Gardner, Western Washington University4:07-4:52 PM, RH304 Thursday, March 11, 2004: Colloquium The algebraic approach to integral geometry Joseph Fu, University of Georgia4:07-4:52 PM, RH304 Thursday, February 26, 2004: Colloquium TBA Alexander Koldobsky, University of Missouri-Columbia4:07-4:52 PM, RH304 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22205263376235962, "perplexity": 21631.228204397867}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999645327/warc/CC-MAIN-20140305060725-00089-ip-10-183-142-35.ec2.internal.warc.gz"} |
https://wiki.melvoridle.com/index.php?title=Combat&oldid=60633 | # Combat
[Screenshots: the Combat page; the Farmlands combat area selected; fighting a Chicken; the death screen.]
Combat involves fighting various monsters throughout the combat areas, slayer areas, and dungeons of Melvor. Food and Equipment cannot be changed while in a dungeon unless an upgrade has been purchased from the shop.
Combat stats — viewable from the combat page — are affected by the player's combat skills and any Equipment worn. The combat skills are Attack, Strength, Defence, Hitpoints, Ranged, Magic, Prayer, and Slayer. These skills are factored in when calculating the player's combat level, which is an approximation of the player's power. Players start at combat level 3 and can reach a maximum of combat level 126, while monsters range from combat level 1 to 1300.
## Combat Info
• Max Hit shows the maximum potential damage a successful attack can do. The Max Hit displayed for players and enemies does not factor in damage reduction.
• Chance to Hit shows the player's percent chance to hit the enemy with their attack.
• Accuracy Rating affects the likelihood an attack will be successful during combat. The player's Accuracy Rating is compared to the enemy's Evasion Rating when calculating the chance an attack will succeed.
• Damage Reduction (also referred to as DR) reduces any damage received by the percent shown.
• Evasion Rating affects the likelihood an attack will be avoided during combat. There are three types of Evasion Rating: Melee, Ranged, and Magic. The Attack Type being defended against determines which Evasion Rating will be used. The enemy's Accuracy Rating is compared to the player's Evasion Rating when calculating the chance an attack will be avoided.
• Prayer Points are gained by burying bones and are used to activate prayers during combat.
• Active Prayers provide bonuses to the player during combat and use Prayer Points to remain active. Up to two prayers can be active simultaneously, so long as the required Prayer Points are available.
## Basic Combat
To fight monsters, navigate to the combat page by selecting any combat skill from the left menu. The combat page contains three tabs containing the locations where monsters can be fought, above a section for equipment and combat stats. Browse combat areas, slayer areas, or dungeons to reveal a menu containing all the relevant locations. Selecting a location will display the enemies which can be found there, along with their Hitpoints, Combat Level, and Attack Type. Selecting "Fight" will begin combat. Combat will continue until the player runs, dies, or defeats the dungeon. Enemies are fought automatically. Combat skills cannot be trained in offline mode.
Combat skills are leveled by damaging enemies with weapons and magic. To train a specific combat skill, match the Attack Style icon with the relevant combat skill. The available Attack Styles will change depending on the type of weapon equipped. Most combat skills have a passive effect even when the relevant style is not selected. Weapons, armour, and food can be equipped for use by selecting them in the Bank. Weapons and armour provide bonuses and penalties to combat stats, while food can be used for healing in combat.
After each kill, it will take 3 seconds for the next monster to spawn.
## Loot and Rewards
All monsters killed in combat and slayer areas will drop bones upon death, with a chance for additional loot based on the monster's type. The "Drops" option will display the possible loot drops a monster can give.
Any items enemies drop upon death must be looted manually by selecting "Loot All" or the item itself, unless the Amulet of Looting is equipped. If there are more than 100 stacks of loot, the oldest drop disappears. Bones stack and do not time out like other drops. Other drops do not stack even if they are of the same type.
Monsters fought in dungeons will not drop loot when killed and do not count towards Tasks. The God Dungeons are an exception to this rule: the shards are rewarded after each killed monster. When the last monster in a dungeon is killed, the player will be given a reward that is sent to the Bank automatically.
## Combat Triangle
- See Combat Triangle
## Special Attacks
- Main article: Special Attacks
Special attacks can be used in combat to gain a slight advantage over the monster being fought. Equipping a weapon with a special attack will give your attacks a chance to perform a special attack. Only a select few high level weapons have special attacks. Several monsters can also use special attacks against the player, including certain dungeon monsters and all of the monsters from the four god dungeons. The attack bar will change to yellow when a special attack is being used.
Special Attacks cannot be used alongside Ancient Magicks.
## Gaining Experience
Players can gain experience in nine different skills while training combat:
Hitpoints experience is gained at a rate of 0.133 experience per damage dealt regardless of the combat style being used.
Attack, Strength, Defence, Ranged, or Magic experience is gained depending on the active combat style. If the selected combat style only gives experience towards a single skill, 0.4 experience will be gained per damage dealt. If the selected combat style is a hybrid combat style, 0.2 experience will be gained by each of the two relevant skills for each damage dealt.
No additional experience is directly generated for using Curses or Auroras, although they may indirectly generate additional experience by increasing the amount of damage the player deals.
Players gain $\displaystyle{ \frac{1}{30} }$ or $\displaystyle{ 0.0\bar{3} }$ experience per damage dealt per prayer point spent on prayers that affected the attack. For example, if a prayer with a 3 prayer point cost per attack is active, each attack made will generate 0.1 extra experience per damage dealt. Only prayers that say 'Provides extra Prayer XP based on damage dealt to the enemy' in their description will generate experience this way - prayers that say 'Provides no extra Prayer XP' will not contribute towards this experience bonus.
All of the above experience will be multiplied by 1.04 while a Gold Emerald Ring is equipped.
Slayer experience is gained through two sources: killing monsters on a Slayer Task and killing monsters in a Slayer area. Killing the monster that is the player's current Slayer Task will generate experience equal to 10% of the monster's hitpoints. Killing a monster in a Slayer area will generate experience equal to 5% of the monster's hitpoints. If the player's Slayer Task is for a monster in a Slayer area, these bonuses will stack for a total of 15% of the monster's hitpoints in experience per kill.
Slayer experience cannot be increased by the Gold Emerald Ring; however, the various pieces of Slayer Equipment that can be purchased from the Shop using Slayer Coins provide an experience bonus when worn.
Summoning experience can be gained if the player has equipped tablets for at least one combat familiar. Summoning experience will be granted whenever the familiar attacks the enemy, consuming a tablet in the process. Being a non-combat skill, Summoning experience gained cannot be increased by the Gold Emerald Ring.
## Death
Receiving damage that would bring the player's Hitpoints to 0 or below will cause the player to die. Upon death a random equipment slot is selected, and any Equipment within the selected slot will be lost forever (unless the Protect Item prayer is active). If either ammunition or Summoning tablets are lost, the entire stack - regardless of size - will be forfeited. Only non-food items that have been equipped in an active slot can be lost upon death.
All items equipped have an equal chance of being lost. When the death penalty rolls to determine which item is lost, it starts by rolling a random equipment slot. If the player has nothing equipped in that slot (For example, if they are using Melee and do not have any Ammo equipped) nothing will be lost, and a message saying "Luck was on your side today. You lost nothing." will appear.
Hardcore characters that die are deleted permanently.
## Combat Mechanics
### Combat Level
The first formula is used to calculate the player's Base Combat Level.
\displaystyle{ \small{ \begin{aligned} \text{Base Combat Level} = 0.25 \times (\text{Defence Skill Level} + \text{Hitpoints Skill Level} + \lfloor 0.5 \times \text{Prayer Skill Level} \rfloor) \end{aligned}} }
The second set of formulas are used to calculate the player's Offensive Combat Levels, only the highest result is used.
\displaystyle{ \small{\begin{aligned} \text{Melee Combat Level} &= \text{Attack Skill Level} + \text{Strength Skill Level} \\ \text{Ranged Combat Level} &= \lfloor 1.5 \times \text{Ranged Skill Level} \rfloor \\ \text{Magic Combat Level} &= \lfloor 1.5 \times \text{Magic Skill Level} \rfloor \end{aligned}} }
The third formula is used to calculate the player's Combat Level, rounded down to the nearest whole number.
\displaystyle{ \small{\begin{aligned}\text{Combat Level} = \lfloor \text{Base Combat Level} + 0.325 \times \text{Highest Offensive Combat Level} \rfloor\end{aligned}} }
Where: $\displaystyle{ \left \lfloor x \right \rfloor }$ is the floor function.
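Taken together, the three formulas compose as in the following minimal Python sketch (the function and variable names are my own, not the game's):

```python
import math

def combat_level(attack, strength, defence, hitpoints, ranged, magic, prayer):
    """Approximate combat level from the player's skill levels."""
    base = 0.25 * (defence + hitpoints + math.floor(0.5 * prayer))
    offensive = max(attack + strength,          # melee
                    math.floor(1.5 * ranged),   # ranged
                    math.floor(1.5 * magic))    # magic
    return math.floor(base + 0.325 * offensive)

# A fresh character (level 1 combat skills, Hitpoints 10) is combat level 3,
# and a maxed character (all skills 99) is combat level 126:
print(combat_level(1, 1, 1, 10, 1, 1, 1))        # 3
print(combat_level(99, 99, 99, 99, 99, 99, 99))  # 126
```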
### Stun (Freeze)
- Main article: Combat Debuff
Stun, also known as freeze, is a status effect that can be inflicted by both players and monsters. Stun can be applied for a number of turns by a special attack, or by a player's normal attack if an item with a stun-on-hit effect is equipped. When an attack applies stun, the current attack of the target is interrupted, including special attacks that are in progress. If the target is already stunned, it will not be applied again. When stunned, a stun timer counts down with a period equivalent to the target's attack speed. On completion of the timer the number of stun turns is decreased by one, and the status is removed if the resulting amount is zero. In addition to not being able to attack while stunned, a character cannot evade any attacks.
While a player or monster is stunned, they take 30% more damage.
While a player or monster is asleep, they take 20% more damage.
### Accuracy Rating
The calculation used to determine the player's accuracy rating is broadly the same for all combat styles, with some parameters varying depending on the player's current attack type and style:
The first formula is used to determine the player's effective skill level:
$\displaystyle{ \small{\text{Effective Skill Level} = \text{Standard Skill Level} + \text{Hidden Skill Level}} }$
Where 'Standard Skill Level' is the player's skill level as seen in the left-hand navigation bar (up to a maximum of 99), while 'Hidden Skill Level' is the sum of any hidden skill level bonuses, such as those granted by some Pets or certain items.
Next, the base accuracy bonus should be calculated. This is the sum of the relevant attack bonus statistic provided by all currently equipped equipment (as seen in the Equipment Stats interface), plus:
• +15 if fighting a monster while one specific Summoning synergy is active
• +15 if fighting a monster while another specific Summoning synergy is active
• The additional ranged attack bonus provided by a certain bow if equipped; the formula for this is included on the bow's page
In addition, the player's accuracy modifier needs to be known. This is the sum of all global accuracy rating increases and the relevant attack type's accuracy rating increases, provided from sources such as Potions and other modifiers. If using Surge spells from the standard magic spellbook, this modifier is increased by a further 6%.
Finally, the player's accuracy rating can then be calculated as:
$\displaystyle{ \text{Accuracy Rating} = \left \lfloor \left (\text{Effective Skill Level} + 9 \right ) \times \left (\text{Base Accuracy Bonus} + 64 \right ) \times \left (1 + \frac{\text{Accuracy Modifier}}{100} \right ) \right \rfloor }$
Where: $\displaystyle{ \left \lfloor x \right \rfloor }$ is the floor function.
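As a worked sketch in Python (the inputs are made-up example values, not game data):

```python
import math

def accuracy_rating(standard_level, hidden_level, base_accuracy_bonus, accuracy_modifier_pct):
    """Player accuracy rating per the formula above.

    base_accuracy_bonus: summed equipment attack bonus plus any flat
    bonuses (e.g. the +15 synergy effects); accuracy_modifier_pct is
    the summed percentage accuracy increase.
    """
    effective = standard_level + hidden_level
    return math.floor((effective + 9) * (base_accuracy_bonus + 64)
                      * (1 + accuracy_modifier_pct / 100))

print(accuracy_rating(99, 0, 120, 10))  # (108)*(184)*1.1 -> 21859
```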
### Chance to Hit
To calculate the chance an attack will hit, the Accuracy Rating of the attacker is compared to the relevant Evasion Rating of the target. If the attacker's Accuracy Rating is lower than the defender's evasion rating the formula is as follows:
\displaystyle{ \small{ \begin{aligned} \text{Percentage Hit Chance} = \frac{\text{Attacker Accuracy Rating}}{2 \times \text{Target Evasion Rating}} \times 100 \end{aligned}} }
Otherwise, the attacker's Accuracy Rating is higher than or equal to the target's Evasion Rating, and the formula is instead:
\displaystyle{ \small{ \begin{aligned} \text{Percentage Hit Chance} = \left ( 1 - \frac{\text{Target Evasion Rating}}{2 \times \text{Attacker Accuracy Rating}} \right ) \times 100 \end{aligned}} }
When the attacker's Accuracy Rating and the target's Evasion Rating are the same, the chance to hit is 50%. The higher the attacker's Accuracy Rating is above the targets Evasion Rating, the less valuable each point will be. At double the target's Evasion Rating, the attacker will hit 75% of the time, at triple, the attacker will hit 83.3% of the time.
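The two cases fold into one small function (an illustrative Python sketch):

```python
def hit_chance_pct(accuracy, evasion):
    """Percentage chance to hit, per the two formulas above."""
    if accuracy < evasion:
        return accuracy / (2 * evasion) * 100
    return (1 - evasion / (2 * accuracy)) * 100

print(hit_chance_pct(100, 100))  # 50.0   (equal ratings)
print(hit_chance_pct(200, 100))  # 75.0   (double the evasion)
print(hit_chance_pct(300, 100))  # 83.33... (triple the evasion)
```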
### Max Hit
#### Melee and Ranged Max Hit
First, calculate the player's effective level for the melee max hit, or the effective level for the ranged max hit:
$\displaystyle{ \small{\text{Effective Skill Level} = \text{Standard Skill Level} + \text{Hidden Skill Level}} }$
Where 'Standard Skill Level' is the player's skill level as seen in the left-hand navigation bar (up to a maximum of 99), while 'Hidden Skill Level' is the sum of any hidden skill level bonuses, such as those granted by certain Pets or items.
Next, the strength bonus should be calculated. This is the sum of the relevant statistic provided by all currently equipped equipment (as seen in the Equipment Stats interface), where the relevant statistic is:
• Melee strength bonus for melee attacks
• Ranged strength bonus for ranged attacks
Given these figures, the base max hit (i.e. max hit before modifiers) is then calculated as:
$\displaystyle{ \text{Base Max Hit} = \left \lfloor M \times \left ( 2.2 + \frac{\text{Effective Skill Level}}{10} + \frac{\text{Effective Skill Level} + 17}{640} \times \text{Strength Bonus} \right ) \right \rfloor }$
Where $\displaystyle{ M }$ varies based on the Game Mode being played, and is equal to:
• 10 if playing Standard mode or most other game modes
• 100 if playing one particular alternative game mode
This base max hit is then adjusted by the percentage and flat max hit modifiers to arrive at the final max hit figure, where the modifiers include both global max hit increases as well as increases specific to the relevant attack type:
$\displaystyle{ \text{Max Hit} = \left \lfloor \text{Base Max Hit} \times \left ( 1 + \frac{\text{Percentage Max Hit Modifier}}{100} \right ) \right \rfloor + \text{Flat Max Hit Modifier} }$
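For concreteness, a small Python sketch of the two steps (illustrative inputs only):

```python
import math

def base_max_hit(effective_level, strength_bonus, m=10):
    """Base max hit before modifiers; m is the game-mode multiplier."""
    return math.floor(m * (2.2 + effective_level / 10
                           + (effective_level + 17) / 640 * strength_bonus))

def final_max_hit(base, pct_modifier=0, flat_modifier=0):
    return math.floor(base * (1 + pct_modifier / 100)) + flat_modifier

base = base_max_hit(99, 100)       # level 99 with +100 strength bonus -> 302
print(final_max_hit(base, 10, 2))  # +10% and +2 flat -> 334
```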
#### Magic Max Hit
The player's Max Hit with spells changes depending on the spell being used. A spell's max hit is listed in the spell's description and is different for every spell. Magic Damage Bonus can be found in the Equipment Stats interface.
For Ancient Magick spells the max hit is simply as stated in the spell's description, and cannot be increased by the Magic Damage Bonus stat or any max hit modifiers - only the Combat Triangle damage bonus/penalty applies.
For standard spells, first determine the max hit stated in the spell's description along with the player's effective level by using the same effective level formula as Melee and Ranged max hit does. The base max hit is then:
$\displaystyle{ \text{Base Max Hit} = \left \lfloor \text{Spell Max Hit} \times \left ( 1 + \frac{\text{Magic Damage Bonus}}{100} \right ) \times \left ( 1 + \frac{\text{Effective Magic Level} + 1}{200} \right ) \right \rfloor }$
This base max hit is then adjusted by the percentage and flat max hit modifiers to arrive at the final max hit figure, where the modifiers include both global and magic max hit increases. Any damage increases with the same element as the spell being used (such as those provided by certain elemental items) are also added to the flat max hit modifier:
$\displaystyle{ \text{Max Hit} = \left \lfloor \text{Base Max Hit} \times \left ( 1 + \frac{\text{Percentage Max Hit Modifier}}{100} \right ) \right \rfloor + \text{Flat Max Hit Modifier} }$
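The same two-step shape applies, with the spell's listed max hit as the starting point (a sketch; the example numbers are invented):

```python
import math

def magic_base_max_hit(spell_max_hit, magic_damage_bonus_pct, effective_magic_level):
    return math.floor(spell_max_hit
                      * (1 + magic_damage_bonus_pct / 100)
                      * (1 + (effective_magic_level + 1) / 200))

# e.g. a 240 max-hit spell, +25% magic damage bonus, effective level 99:
print(magic_base_max_hit(240, 25, 99))  # 450, before max hit modifiers
```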
### Minimum Hit
For Special Attacks the minimum hit varies depending on the attack. Otherwise, for normal attacks the base minimum hit starts at 1. To this base value, both modifiers for "+X% of Maximum Hit added to Minimum Hit" as well as increased flat minimum hit damage are applied. In addition, if a spell from the standard or Archaic spellbook is being used then element specific modifiers granting flat minimum hit damage for the same element as the spell being used are also included. The minimum hit is at least 1, and no more than the max hit.
The minimum hit calculation for normal attacks is therefore:
\displaystyle{ \begin{aligned}\text{Min Hit} = \min(\max(&\left \lfloor 1 + \text{Max Hit} \times \text{Percentage Max Hit added to Min Hit Modifier} \right \rfloor \\ &+ \text{Flat Min Hit Modifier}, 1), \text{Max Hit})\end{aligned} }
### Calculating Damage Dealt
When the player hits with a normal attack, the game calculates how much damage the player actually does by rolling a number between their minimum hit and maximum hit for the chosen combat style and then applying post-roll modifiers.
#### Post-Roll Modifiers
Some effects are applied after the damage roll is done. The main two are enemy Damage Reduction and Player Modifiers that affect damage to all monsters or to a subset of monsters such as Slayer Monsters or Boss Monsters. These modifiers can increase the player's damage beyond their listed maximum hit.
Ancient Magick spells are not affected by post-roll modifiers aside from enemy Damage Reduction, although weapon special attacks that do a fixed amount of damage are.
#### Damage Reduction
- Main article: Damage Reduction
Damage Reduction (often abbreviated as 'DR') is a stat that does exactly what it suggests: it reduces the amount of damage taken by a player or monster. Unlike other combat numbers that are complex calculations, a player's damage reduction is determined by simply adding together all damage reduction bonuses the player has, whether from equipment or from other sources.
Many of the monsters in the game have natural damage reduction, with boss monsters typically having higher damage reduction than other monsters around the same combat level. Some monsters also have special attacks or passive effects which temporarily grant additional damage reduction, such as the Stone Wall special attack used by certain monsters.
The player's damage reduction while engaged in combat can be calculated as their damage reduction % outside of combat ('Base Damage Reduction'), less any penalties that apply when in combat (such as the Intimidation passive effect present on many monsters), all multiplied by the Combat Triangle damage reduction modifier. Damage reduction has a maximum value of 95% and a minimum of 0%, where the in-combat damage reduction can be calculated as:
$\displaystyle{ \text{Damage Reduction %} = \min \Big( \max \Big( \Big\lfloor \left ( \text{Base Damage Reduction %} - \text{In-Combat Penalties} \right ) \times \text{Combat Triangle Modifier} \Big\rfloor , 0 \Big), 95 \Big) }$
Where: $\displaystyle{ \left \lfloor x \right \rfloor }$ is the floor function.
Whenever a player or monster with Damage Reduction greater than 0% would receive damage, the damage taken is reduced. This reduced damage can be calculated as follows:
$\displaystyle{ \text{Damage Taken} = \left \lfloor \text{Base Damage} \times \left ( 1 - \frac{\text{Damage Reduction %}}{100} \right ) \right \rfloor }$
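In code form (a sketch of the two formulas; the example values are arbitrary):

```python
import math

def in_combat_dr_pct(base_dr_pct, in_combat_penalties_pct, triangle_modifier):
    """Effective damage reduction, clamped to the 0-95% range."""
    dr = math.floor((base_dr_pct - in_combat_penalties_pct) * triangle_modifier)
    return min(max(dr, 0), 95)

def damage_taken(base_damage, dr_pct):
    return math.floor(base_damage * (1 - dr_pct / 100))

dr = in_combat_dr_pct(30, 10, 1.25)  # 30% base, -10% penalty, x1.25 triangle -> 25
print(damage_taken(500, dr))         # 500 damage reduced to 375
```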
#### Critical Hit
With specific pieces of equipment, there is a chance that hits dealt by the player may become critical. When a critical hit occurs, any damage dealt by that hit will be increased by 50% - this increase is multiplicative with any other damage increasing bonuses currently in effect.
The player's critical chance starts at 0% (at which point critical hits do not occur), and can be increased by equipping specific pieces of equipment.
#### Damage Per Hit, Average Hit, and Damage Per Second
If a player or monster successfully connects with an attack, the damage dealt will be a random integer value between their minimum and maximum hit. This random value is uniformly distributed, or in other words any value between and including the minimum and maximum hit is equally likely to be rolled.
Damage Reduction is applied after this initial damage per hit is calculated, which means it is possible for a player to deal less than their minimum hit against a monster that has Damage Reduction.
The player's average hit is thus:
$\displaystyle{ \text{Average Hit} = \frac{\text{Max Hit} + \text{Min Hit}}{2} \times \left ( 1 - \frac{\text{Enemy DR \%}}{100} \right ) }$
And their damage per second can be calculated as:
$\displaystyle{ \text{DPS} = \frac{\text{Average Hit}}{\text{Attack Speed (Seconds)}} \times \text{Hit Chance} }$
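Or, as a Python sketch (hit chance passed as a percentage):

```python
def average_hit(max_hit, min_hit, enemy_dr_pct):
    return (max_hit + min_hit) / 2 * (1 - enemy_dr_pct / 100)

def dps(max_hit, min_hit, enemy_dr_pct, attack_speed_s, hit_chance_pct):
    return (average_hit(max_hit, min_hit, enemy_dr_pct)
            / attack_speed_s * hit_chance_pct / 100)

print(dps(334, 1, 10, 2.4, 75.0))  # roughly 47.1 damage per second
```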
### Evasion Rating
The calculation used to determine the player's evasion rating varies depending on the attack type the player is defending against. Each requires the calculation of effective skill levels, the formula for which is as follows:
$\displaystyle{ \small{\text{Effective Skill Level} = \text{Standard Skill Level} + \text{Hidden Skill Level}} }$
In addition, the player's evasion modifier needs to be known. This is the sum of all global evasion rating increases and the relevant attack type's evasion rating increases, provided from sources such as Potions and other modifiers.
#### Melee and Ranged Evasion Rating
First determine the relevant defence bonus, this will be the melee defence bonus for the evasion rating, or the ranged defence bonus for the evasion rating. These stats can be found in the Equipment Stats interface.
Given this, the evasion rating is then:
$\displaystyle{ \text{Evasion Rating} = \left \lfloor \left ( \text{Effective Defence Level} + 9 \right ) \times \left ( \text{Defence Bonus} + 64 \right ) \times \left ( 1 + \frac{\text{Evasion Modifier}}{100} \right ) \right \rfloor }$
#### Magic Evasion Rating
The calculation for Magic evasion is slightly different from that for Melee and Ranged. The defence bonus used should naturally be the magic defence bonus. In addition, the effective defence level becomes a calculation that factors in both the player's effective Defence and Magic levels:
$\displaystyle{ \small{\text{Effective Level} = \left \lfloor 0.3 \times \text{Effective Defence Level} + 0.7 \times \text{Effective Magic Level} \right \rfloor} }$
Then the magic evasion rating becomes:
$\displaystyle{ \text{Evasion Rating} = \left \lfloor \left ( \text{Effective Level} + 9 \right ) \times \left ( \text{Magic Defence Bonus} + 64 \right ) \times \left ( 1 + \frac{\text{Evasion Modifier}}{100} \right ) \right \rfloor }$
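Both evasion variants share the accuracy-style formula; only the effective level differs (an illustrative Python sketch):

```python
import math

def evasion_rating(effective_level, defence_bonus, evasion_modifier_pct):
    return math.floor((effective_level + 9) * (defence_bonus + 64)
                      * (1 + evasion_modifier_pct / 100))

def magic_effective_level(effective_defence, effective_magic):
    return math.floor(0.3 * effective_defence + 0.7 * effective_magic)

print(evasion_rating(99, 80, 0))                             # melee/ranged
print(evasion_rating(magic_effective_level(99, 99), 60, 0))  # magic
```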
https://www.elastic.co/blog/index-type-parent-child-join-now-future-in-elasticsearch | Tech Topics
# Indices, types, and parent / child: current status and upcoming changes in Elasticsearch
A year and a few months ago, we blogged about the differences of types and indices, including "when to pick which." If you don't wish to read the full history, I'll give you the TL;DR: the conclusion was something like:
Multiple types in the same index really shouldn't be used all that often and one of the few use cases for types is parent child relationships.
Sadly, not everyone has stumbled on all of our blogs (shameless plug!), which means a lot of our users have gone on using types for what they were never really intended for. This raised a very good engineering question: should we continue this confusing "type" construct? Should we just (continue) recommending against it? Should we give the real use case of parent-child a proper first-class citizen in the Elasticsearch hierarchy? The good news is we have a much less confusing future that we're moving forward on. The bad news is eventually this will be a completely breaking change which means you will need to make application-side modifications as soon as you can, though we're really, really trying extra hard to make it as easy for you as possible. First, let's talk about why the change has come about.
## Why
Elastic (the company behind Elasticsearch) has seen thousands of support cases opened by our users and customers about the problems they've run into. Because of this, it puts us in a unique situation to see what sort of common problems people run into with Elasticsearch. With respect to types, this has come down to 3 real key elements:
• User expectations: This is probably the most prominent issue and Elastic is unfortunately at least partially to blame: once a bad analogy is out there, it is nearly impossible to kill. At some points in the past, we have stated that Elasticsearch indices were like traditional RDBMS databases while Elasticsearch types were similar to tables. However this was really an oversimplification of reality and as a result many people have the mental model that types are equivalent to tables in the relational world. In reality, in Elasticsearch, the underlying data structures are the same for the entire index, not per type. Due to the misconceptions, one of the most common pitfalls we've seen is that users expect fields to be independent across types. However, they must be of the same field type. So /my-index/type-a/my-field must use the same data type as /my-index/type-b/my-field.
• Sparsity: Sparsity should be avoided! While Lucene 7 (which has recently merged into master) improves the handling of sparsity, it should still be avoided where possible. Types almost always increase the sparsity of your data because different types have different fields. So by removing types there is one less pitfall out there waiting to get you at some point.
• Scoring: Documents are scored by index and not by type so storing different entities in the same index can interfere with the relevance calculation for each entity type. Again, this is a bit counter-intuitive and many users miss this, which is another potential pitfall.
## What
As the one main use case derived from types vs indices is parent-child relationships, we have decided to supersede the "type" with a special field that stores the relationship between documents. We feel this represents a much better feel of the data. However, doing so is complicated, which gets us to the "when" element...
## When
We want the transition away from "types" to be as smooth as can be, so we're targeting a long, multi-phase deprecation process to get us there. At the time of this writing, the current engineering targets for these are something like the following:
• In 5.x, add a new feature (index.mapping.single_type set to true/false), which will allow you to preview what the type removal will start to look like. This will be great for anybody that has a separate test environment and/or wants to start testing early.
• We plan to introduce a new breaking change currently targeted for 6.0 to make it so new indices will only allow a single type to be created to help you get better prepared for 7.x. Don't worry though -- the multi-type indices you created in 5.x will continue to work as before in 6.x. This phased roll-in is intended to give you some phase-in time as you upgrade without client-side breaks. In addition, we plan to:
• Make _uid consistent with this change by removing the type from it.
• Add a new feature for "typeless parent/child fields" called "join fields".
• Provide typeless URLs in preparation for the migration to 7.0.
• We currently plan to make the final breaking change related to this in 7.0, by removing types entirely from the Elasticsearch APIs.
## What does this mean for me?
The answer to this question depends on what use case you're using the Elastic Stack for:
• Most logging & security analytics users will find the transition completely seamless: Beats and Logstash don't generally use types and where they do, there's common alignment on the teams to try make the transition work without you thinking about it.
• If you're using Elasticsearch as a search and/or document datastore/database, you'll want to review your type usage, especially your parent/child usage.
Of course, we recommend people get ahead of whatever they can by adopting things like this early. If you're looking to do so, you can use Kibana's Console / dev-tools to reindex your data, moving the value of the "_type" metafield into an ordinary "type" field.
```
POST _reindex
{
  "source": {
    "index": "old"
  },
  "dest": {
    "index": "new"
  },
  "script": {
    "inline": """
      ctx._id = ctx._type + "-" + ctx._id;
      ctx._source.type = ctx._type;
      ctx._type = "doc";
    """
  }
}
```
This moves the special "_type" field over to "type" which you can then use in subsequent filtering, aggregations, etc.
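For example, a later query against the migrated index can filter on that field (a sketch using the official Python client; the index name "new" and the type value "user" are illustrative, not from this post):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch()

# Filter on the ordinary "type" field rather than the old _type metafield:
results = es.search(index="new", body={
    "query": {"term": {"type": "user"}}
})
for hit in results["hits"]["hits"]:
    print(hit["_id"], hit["_source"])
```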
## I have more questions! Tell me more!
Step 1: don't panic :)
Step 2: please let us know on our forums
We're actively looking for any particular problems this may cause with your use case, so if you think you may have some, please talk to us!
https://www.physicsforums.com/threads/2x2-matrix-a-has-only-one-eigenvalue-l-with-eigenvector-v.374163/ | # 2x2 matrix A has only one eigenvalue λ with eigenvector v
1. Jan 31, 2010
### nlews
This is a revision problem I have come across,
I have completed the first few parts of it, but this is the last section and it seems entirely unrelated to the rest of the problem, and I can't get my head around it!
Suppose that the 2x2 matrix A has only one eigenvalue λ with eigenvector v, and that w is a non-zero vector which is not an eigenvector. Show that:
a) v and w are linearly independent
b) the matrix of A with respect to the basis {v, w} is
(λ c)
(0 λ)
for some c ≠ 0
c) for a suitable choice of w, c = 1
I am stuck.
I know how to show that eigenvectors corresponding to distinct eigenvalues are linearly independent, but how do I show that these two vectors are linearly independent of each other?
Last edited: Jan 31, 2010
2. Jan 31, 2010
### tiny-tim
If v and w are linearly dependent, then w is a multiple of v, so obviously w is also an eigenvector.
Get some sleep! :zzz:
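Written out as a sketch (in the thread's notation):

```latex
% If v and w were linearly dependent, then w = a v for some scalar a \neq 0
% (since w is non-zero), and then
\[
  Aw = A(a v) = a (A v) = a (\lambda v) = \lambda w,
\]
% so w would be an eigenvector of A, contradicting the choice of w.
```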
3. Jan 31, 2010
### nlews
Re: Eigenvalues/vectors
ahh ok..so I can prove by contradiction! thank you that helps massively for part a!
http://math.stackexchange.com/questions/11601/proof-that-a-combination-is-an-integer | # Proof that a Combination is an integer
From its definition, a combination $\binom{n}{k}$ is the number of distinct subsets of size $k$ from a set of $n$ elements.
This is clearly an integer, however I was curious as to why the equation
$\frac{n!}{k!(n-k)!}$ always evaluates to an integer.
So far I figured:
$n!$ is clearly divisible by $k!$ and by $(n-k)!$ individually, but I could not seem to make the jump to a proof that $n!$ is divisible by their product.
-
You answered it in your first sentence. One way to show that something is an integer is to show that it counts something. So I guess you want a non-counting proof. – Jonas Meyer Nov 23 '10 at 23:03
@Jonas the fact that $nCr$ relates to Pascal's Triangle is another answer. I wouldn't call it a proof though. – Cole Johnson Jan 7 at 19:37
See my post here for a simple purely arithmetical proof that every binomial coefficient is an integer. The proof shows how to rewrite any binomial coefficient fraction as a product of fractions whose denominators are all coprime to any given prime $\rm\:p.\,$ This implies that no primes divide the denominator (when written in lowest terms), therefore the fraction is an integer.
The key property that lies at the heart of this proof is that, among all products of $\rm\, n\,$ consecutive integers, $\rm\ n!\$ has the least possible power of $\rm\,p\,$ dividing it - for every prime $\rm\,p.\,$ Thus $\rm\ n!\$ divides every product of $\rm\:n\:$ consecutive integers, since it has a smaller power of every prime divisor. Therefore $$\rm\displaystyle\quad\quad {m \choose n}\ =\ \frac{m!/(m-n)!}{n!}\ =\ \frac{m\:(m-1)\:\cdots\:(m-n+1)}{\!\!n\:(n-1)\ \cdots\:\phantom{m-n}1\phantom{+1}}\ \in\ \mathbb Z$$
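As a quick computational illustration of that key property (my own sketch, using Legendre's formula $v_p(n!) = \sum_{k \ge 1} \lfloor n/p^k \rfloor$ for the exponent of a prime $p$ in $n!$):

```python
def vp_factorial(n, p):
    """Exponent of the prime p in n!, by Legendre's formula."""
    e, q = 0, p
    while q <= n:
        e += n // q
        q *= p
    return e

def no_prime_divides_denominator(m, n, primes=(2, 3, 5, 7, 11, 13)):
    """Check v_p(m!) - v_p(n!) - v_p((m-n)!) >= 0 for each listed prime."""
    return all(vp_factorial(m, p) - vp_factorial(n, p) - vp_factorial(m - n, p) >= 0
               for p in primes)

print(no_prime_divides_denominator(20, 7))  # True
```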
-
Thanks, and sorry for the late reply/upvote :) – Akusete Sep 3 '12 at 7:45
Well, one noncombinatorial way is to induct on $n$ using Pascal's triangle; that is, using the fact that ${n \choose k} = {n-1 \choose k - 1} + {n-1 \choose k}$ (easy to verify directly) and that each ${n - 1 \choose 0}$ is just $1$.
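That recurrence translates directly into integer-only code (an illustrative sketch, not from the answer):

```python
def choose(n, k):
    """Binomial coefficient via Pascal's rule, mirroring the induction above."""
    if k < 0 or k > n:
        return 0
    if k == 0 or k == n:
        return 1
    return choose(n - 1, k - 1) + choose(n - 1, k)

print(choose(10, 4))  # 210, computed without ever dividing
```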
-
As Jonas mentioned, it counts something so it has to be a natural number.
Another way is to notice that the product of $m$ consecutive natural numbers is divisible by $m!$. (Prove this!)
So if we write $n! = n(n-1)(n-2) \cdots (k+1) \times (k!)$, the $k!$ factor cancels the $k!$ in the denominator, and
$n(n-1)(n-2) \cdots (k+1)$ is a product of $(n-k)$ consecutive natural numbers, hence $(n-k)!$ divides it.
-
I haven't thought too hard about this, but does there exist a direct proof (that does not rely on induction) of the fact that the product of $m$ consecutive numbers is divisible by $m!$? – Vladimir Sotirov Nov 23 '10 at 23:38
@Vladimir: You could prove that the prime factors of the numerator is of higher power than that of the denominator. You can look at Bill's proof which he has posted in the next post. – user17762 Nov 24 '10 at 0:07
@Vladimir: Generally, any proof (in Peano arithmetic) that some property is true for all integers must use induction. It may not explicitly invoke induction, e.g. the induction might be hidden way down some chain of lemmas. So it's not clear what it means for such a proof to "not rely on induction". – Bill Dubuque Nov 24 '10 at 0:16
http://debraborkovitz.com/author/dborkovitz/page/3/ | Trigonometry Yoga
posted in: Trigonometry
Which is bigger, the sine of 40$$^{\circ}$$ or the sine of 50$$^{\circ}$$? This is a great trigonometry assessment question. Unfortunately, virtually none of my college students who haven't used trig for a while can answer it (without a calculator). For …
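(A quick aside, not from the post itself: sine is increasing on $$[0^{\circ}, 90^{\circ}]$$, since the height of a point on the unit circle keeps rising over that range, so $$\sin 50^{\circ} > \sin 40^{\circ}$$.)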
The Handshake Problem
The handshake problem is an old chestnut — if everyone in the room shook hands with everyone else, how many handshakes would there be? Then generalize. What I have to add to teaching the problem is a handout (doc version, …
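(For reference, the standard count: each of the $$n$$ people shakes hands with $$n-1$$ others, and every handshake gets counted twice that way, giving $$\binom{n}{2} = \frac{n(n-1)}{2}$$ handshakes.)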
Quotes I Like
posted in: Miscellaneous
I thought it would be nice to have a place to put quotes I run across and old favorites that I need to reread now and then. If I get a lot of them, I'll categorize, right now, it's a …
http://cms.math.ca/cmb/kw/Polarized%20manifold | On the nonemptiness of the adjoint linear system of polarized manifold Let $(X,L)$ be a polarized manifold over the complex number field with $\dim X=n$. In this paper, we consider a conjecture of M.~C.~Beltrametti and A.~J.~Sommese and we obtain that this conjecture is true if $n=3$ and $h^{0}(L)\geq 2$, or $\dim \Bs |L|\leq 0$ for any $n\geq 3$. Moreover we can generalize the result of Sommese. Keywords:Polarized manifold, adjoint bundleCategories:14C20, 14J99 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9892733693122864, "perplexity": 209.3208640481377}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394021542591/warc/CC-MAIN-20140305121222-00056-ip-10-183-142-35.ec2.internal.warc.gz"} |
https://scattered-thoughts.net/blog/2017/12/12/notes-on-psycgd02-principles-of-cognition/ | ERROR: type should be string, got "https://www.ucl.ac.uk/lifesciences-faculty-php/courses/viewcourse.php?coursecode=PSYCGD02\n\nThis module outlines general theoretical principles that underlie cognitive processes across many domains, ranging from perception to language, to reasoning and decision making. The focus will be on general, quantitative regularities, and the degree to which theories focusing on specific cognitive scientific topics can be constrained by such principles. There will be an introduction on general methods and approaches in cognitive science and some of the problems related to them. Later in the course, some computational approaches in cognitive science will be discussed. There will be particular emphasis on understanding cognitive principles that are relevant to theories of decision making.\n\n## What is Cognitive Science?\n\nBrief history.\n\nNotable that the narrative revolves around several key conferences where prominent figures from different fields became aligned.\n\n## Bridging Levels of Analysis for Probabilistic Models of Cognition\n\nLevels of models:\n\n• Computational - problem and declarative solution eg Bayesian inference\n• Algorithmic - representation and constructive solution eg message passing\n• Implementation - physical processes eg neurons\n\nPopular research method is to look at where people diverge from ideal solutions, to figure out what algorithms their mind is using to approximate the solution. But.. .\n\n• Vulnerable to misidentifying the computational problem being solved.\n• eg strategies for iterated PD look irrational in single PD\n• Requires understanding how levels constrain each other\n• eg are probalistic models fundamentally incompatible with connectionist models or can we implement one on top of the other?\n\nRational process models - identify algorithm for approximating probabilistic inference under time/space limits, compare to what we know about mind and behavior.\n\n• Bridges computational and algorithmic levels.\n• Constrains possible algorithms to those that produce ideal behavior in limit.\n• Explains many cases where individuals deviate but average behavior is close to ideal.\n\nExample - Monte Carlo with small number of samples is tractable. Consistent with:\n\n• Averaging multiple guesses from one person increases accuracy (ie contains some independent error)\n• Recall similar events ~= importance sampling. Predicts availability bias? 
Incorrect re-weighting?
• Order effects (order of information incorrectly affects results of update) ~= particle filter.
• Perceptual bistability ~= random walk.

Some progress in bridging to implementation level eg neural models of importance sampling.

## Lecture 1

Cognitive science as reverse engineering - understand how the mind works by trying to build one and see what differs.

Brief history:

• Structuralism
  • Building blocks are qualia
  • Learning via systematic introspection
  • Controlled, replicable experiments
  • But different labs struggled to replicate each other's results
  • Difficult to relate conscious experiences which don't match qualia (eg non-visual mental models)
  • Vulnerable to observer effects, confirmation, priming, retroactive justification
  • Introspection actually = retrospection
    • eg visual illusions, choice blindness
• Behaviorism
  • Only talk about observable stimulus and response
  • Mostly experiments with animal learning
    • eg classical conditioning (event -> event -> response => event -> … -> response)
    • eg operant conditioning (action -> +/- => +/- action)
  • Reinforcement machines, not reasoning machines
  • Doesn't allow internal state/structure
  • Doesn't explain how stimulus/response are categorized - theoryless learning
  • But language has infinite structure => can't be learned from stimulus/response without hyperpriors
  • Rats choose shortest route available, rather than most reinforced route
• Cognitive science
  • Thought as computation / information processing - data + algorithms
  • We needed to invent computation first to be able to have this idea!

Methods:

• Behavioral studies
• Lesion studies
• Single-cell recordings
• fMRI
  • Neural activity -> blood de-oxygenation -> magnetic interaction changes -> measure with big magnets
  • Spatial resolution ~1mm
  • Temporal resolution ~seconds
• EEG
  • Neural activity -> electromagnetic field -> measure with electrodes on scalp
  • Can only measure large fields
  • Spatial resolution ~poor
  • Temporal resolution ~1ms
• MEG
  • Neural activity -> electromagnetic field -> measure with ?
  • Spatial resolution ~better
  • Temporal resolution ~1ms
• tDCS

## Automaticity of Social Behavior: Direct Effects of Trait Construct and Stereotype Activation on Action

(Paired with the more recent failed replication.)

Arguing that non-conscious priming can strongly affect behavior.

Experiment 1:

• Use scrambled sentence test with words that prime rude/polite/neutral
• All experimenters blinded
• Sent to another room for next test, where waiting confederate is asking experimenter questions
• Time how long it takes them to interrupt
• Huge effect sizes: almost 2x mean time, <20% vs >60% interruptions within 10min cutoff
• No significant differences in reported perceptions of experimenter's politeness
  • Should we trust reports of politeness? It's a bad idea to call your professor rude!
• Effect sizes are enormous. If a few words can double impatience, what could listening to angry music on the journey do? If we're so strongly susceptible to small influences, how is there room for personality? How do we have any resistance to marketing?

Experiment 2:

• Two successful iterations!
• Same setup, but priming elderly/neutral (without priming slow)
• Timed how long subjects took to walk to the next room
• Much smaller effect size - mean 7.30s -> 8.28s
• Near identical results in both iterations!
• Elderly -> slow? I get thinking about rudeness making me rude, but thinking about elderly making me slow seems a much bigger stretch. Thinking about predators makes me want to eat meat? Being chased by a tiger and stopping for a steak sandwich?
• Followup experiment with 19 undergrads found only 1 noticed the elderly priming

• Do elderly priming, then Affect-Arousal Scale
• Primed group were in slightly more positive mood, but not significantly
• Uses this to defend against the idea that they walked slower because sad, but seems bizarre that they are affected so much that they move differently but not so much that they feel differently.

Experiment 3:

• Flash either African-American or Caucasian face before each trial.
• On the 130th trial, claim error and say they have to start again. Experimenter explains error, but is blinded.
• Facial expression caught by camera and rated by blinded experimenter.
• Only two subjects reported seeing the faces when asked, and couldn't identify which they saw
• Both experimenter in room and raters of pictures gave near-identical results!
• But no difference in self-reported racial prejudice.

Argues that this works where subliminal adverts for Pepsi don't because they directly activate traits which contain behavior, whereas Pepsi just activates the Pepsi representation. So elderly -> walk slow but pepsi -/> drink pepsi? Also because there is some activation energy to get up and buy coke, whereas they set up situations where the action was already required and the only difference was in accessibility. So priming for hostility will make people more likely to react to an annoying trigger, but not to be randomly hostile.

Note that results for behavior here are stronger than their previous results for judgments, but one would assume that judgments mediate behavior. But in ex1 there was no effect on perception of the experimenter. And little evidence so far for judgment mediating behavior.

## Behavioral Priming: It's all in the Mind, but Whose Mind?

Failed replication of previous paper.

Reasons to doubt original:

• Only two indirect replications.
• Small sample sizes.
• Evidence from neuroscience suggests that top-down attention and bottom-up saliency are both required for the spreading activations that are used to explain priming.
• Experimenter who administered the task was not blinded enough - authors found that it was easy to accidentally glimpse the task sheet (original describes them as being in a closed envelope?)
• Measuring time with a stopwatch is susceptible to bias
• Not clear exactly what participants were asked afterwards - aware of stimulus vs aware of response vs aware of link.

Experiment 1:

• Task sheets in a closed envelope, opened by subjects
• Experimenters assigned to subjects at random
• Experimenters follow a strict script
• Walking speed recorded by infrared beam
• No significant difference in walking times
• Four students reported being aware of the elderly-ness
• Primed group chose pictures of old people significantly more often in forced choice test
• No experimenters reported having any specific expectations about subject behavior

Experiment 2:

• 50 subjects, 10 experimenters
• Half of experimenters told that primed participants will walk slower, other half told faster
• Experimenters were unblinded
• First subject for each experimenter was a confederate who behaved to confirm this expectation
• Experimenters measured with stopwatch
• For stopwatch times, fast+prime went faster and slow+prime went slower.
• For infrared times, slow+prime went slightly slower and fast+prime was same as fast+control.

Most subjects were aware of the prime (but it said 6%…) and are in psych course so might be expected to be suspicious.

Priming via social cues is way more believable to me than priming via word choice. Clear selective pressure for understanding and reacting to social cues.

## Lecture 2

Scientific reasoning.
Psi hypothesis as running example.

Base-rate fallacy vs significance testing.

Successful replication could just mean replicating the mistakes of the original.

In a replication aim to improve on original methods or test some new factor - more likely to be received in good faith and more likely to generate new insight beyond back-and-forth.

A good successful replication can falsify a hypothesis by more accurately identifying the mechanism behind the effect eg previous paper replicated slow walking, but showed that the effect disappeared under proper blinding.

Defenses of priming:

• Hidden moderators
• Experienced researchers

But:

• Then the original effect is less powerful/robust than claimed
• Post-hoc reasoning - just a hypothesis until tested
• Administering questionnaires is not that hard
• Most of the legwork is done by grad students anyway

Try to structure experiments with multiple competing hypotheses where any given result would support some hypothesis and weaken the others.

## The Cognitive Neuroscience of Human Memory Since H.M.

Intro:

• Current categories used in memory took time to establish - non-obvious.
• Specific impairments from lesions rather than general degradation show that the brain is structured and specialized.

Hippocampus:

• Hippocampal volume reduction of ~40% is common in memory-impaired patients - may be maximum cell loss ie 60% remaining is just dead tissue.
• Damage to other regions can also impair memory.

HM:

• Learned a motor skill => memory not one single unit
• Reasoning and perception intact => memory not required for reasoning/perception
• Could sustain attention and had short-term recall => damaged areas not required for working memory
• Had memories from before surgery => long-term storage not in damaged areas

Other patients:

• Perceptual priming still works
• Can learn in Bayesian fashion, but not explicit memorization
• Learned skills are rigid, fail if task is modified

Declarative: facts, representations, conscious recall, compare/contrast memories
Non-declarative memory: unconscious performance, black box

Visual perception:

• Initially thought to require memory in some cases, but…
  • Tests accidentally benefit from memory
  • Often damage to adjacent vision-processing areas
• Requires better imaging/locating of lesions to clear up confusion

Immediate and working memory:

• HM limited to 6 digit recall, but could maintain memory for 15 mins
• => Immediate memory not time-limited, but maintenance-limited
• Demonstrated in other patients - they do fine on tasks where distractions impair healthy subjects (working memory) but fail on tasks where distractions are fine for healthy subjects (long-term memory)
• Open question - are there tasks that can be handled by working memory but are still impaired by hippocampus damage
• Debate around path integration - unclear whether subjects are each using same process and representation

Remote memory:

• HM initially had autobiographical memory
  • (Later in life was limited to factual recall, but later MRIs also showed changes since initial event)
• Many other patients also have autobiographical memory.
• In patients without, often unclear how far damage extends and whether it might affect other areas

Working theory of long-term memory:

• Medial temporal lobes deal with creating and maintaining declarative memories
• Sensory memories stored in same area that initially processed them
  • Supported by many individual patients eg 'colorblind painter' - after damage that removed color perception, could no longer remember colors except declaratively
• Recall consists of tying all of these together
  • Supported by various fMRI studies
• Initially requires hippocampus, but over years memories reorganized, stored more permanently by changes across neocortex that tie these areas together

Structure:

• Working theory - organized by semantic categories
  • eg JBR lost memory of things identified by attributes but not things identified by function
• Recollection = what was in specific memory?
• Familiarity = was prompt in any memory?
• Hippocampus-damaged patients are impaired on both old/new task (familiarity + some recollection) and free recall (recollection)
• Combine old/new with recall of which source - patients have fewer instances of familiarity without recall => damage is not recall-only

Group studies average out individual variation - allows studying less obvious effects

## Finding the engram

Engram def:

• Persistence - persistent physical change in brain resulting from specific experience
• Ecphory - automatic retrieval in presence of cue
• Content - reflects what happened and what can be retrieved
• Dormancy - exists (but dormant) even when encoding and retrieval not active

The hunt:

• Moving target eg reconsolidation
• Many learning-related changes observed in brain eg synaptic, chemical, epigenetic.
  • Different persistence periods.
  • Not clear if related to engrams.
  • Often don't predict retrieval success.
• Dominant theory - stronger connections between neurons that are active during encoding - neuronal ensemble

Sharp-wave ripple events in hippocampus:

• Multi-unit recordings in rodents, fMRI in humans
• Replay observed during tasks, resting and sleeping
• Strength of replay correlates with later retrieval performance
• Disrupting waves impairs subsequent expression
• Some progress on correlating content
• Related sensory cues may trigger replay
• Hard to observe dormancy

Tracking:

• Non-specific lesions only caused retrieval failure when wide areas damaged => memories are distributed
  • But overtrained rats => resilient memories
  • But may have accidentally damaged hippocampus with large lesions
• Would like to lesion specific ensembles
• Tagging shows that some same neurons active during both encoding and retrieval (~10%, >chance, possibly collateral tagging during encoding)
• Neurons with higher levels of CREB are more often recruited into ensemble
• Neurons with virally over-expressed CREB are more often recruited into ensemble
  • More CREB -> more excitable
  • Increasing excitability via various other methods also has same effect
• Allocate-and-erase - ablating (killing?) artificially excitable neurons reduces retrieval performance without affecting future learning
  • Even if only one brain region is targeted => some parts of ensemble have key roles
• Tag-and-erase - tag active neurons, apply inhibitors (how are these targeted?), same effect
• Worries about collateral tagging resolved:
  • Tag 1st experience
  • Silence during 2nd
  • 2nd still learned but 1st is gone => not enough collateral tagging to interfere with 2nd task

Activating:

• Uncontrolled experiments with focal electrical stimulation during surgery
• Tag-and-manipulate / allocate-and-manipulate - re-triggers learned behavior even in unrelated contexts
  • In both cases, activation seems to spread from initial site to entire ensemble
• Can create false associations:
  • Tag ensemble in context 1
  • Activate in context 2 and shock mice
  • Learned fear response in context 1
  • No fear response in context 2
• Even indirectly:
  • Tag ensemble in context 1
  • Tag ensemble during shock
  • Repeatedly activate both in context 2
  • Learned fear response in context 2
  • No fear response in context 1
• Artificial activation paired with chemical that inhibits reconsolidation removes association
• So far stimuli limited to fear/reward and response limited to freeze/approach/avoid - need more complex tasks to test episodic memory

## Memory, navigation and theta rhythm in the hippocampal-entorhinal system

Having a lot of trouble with this paper. Needs much more time and depth.

• Allocentric / map-based navigation - static representation, navigate by external landmarks
• Egocentric navigation / path integration - track motion, estimate path from origin
• Hippocampus and entorhinal cortex support both declarative memory and navigation
  • Semantic memory (data independent of temporal context) ~ allocentric navigation
  • Episodic memory (first-person experiences in context) ~ egocentric navigation
  • Semantic memory abstracts repeated patterns in episodic memory ~ allocentric maps abstract repeated paths and observations

Implementation possibilities:

• Place cells in hippocampus - fire at specific locations in space - possibly encode position or distance?
• Grid cells in medial entorhinal cortex - fire in repeating hexagonal pattern in space - different scales - possibly coordinate system?
• Head direction cells - ?
• Border cells - ?
• This is too complicated to skim
• Firing patterns are not simple - small changes in environment can result in large change in firing patterns - provides high-dimensional code for storing many different envs?

• Insects manage to navigate with much simpler circuits / less storage.
• Massive excess capacity in mammals might be related to reuse for different kinds of memory.
• Might also enable 'maps' of semantic knowledge
  • cf spatial metaphors in language
  • Recognition and recall associated with unique firing patterns in that area for each object/event

• If episodic memories are stored similarly to paths through environment, might explain time-asymmetry and temporal contiguity (recalling one event makes it easier to recall other events that are nearby in time)

• Neuronal assembly sequences:
  • Patterns of activation in time?
  • Generated continuously even when environment and body signals are kept constant
  • Can predict correct/incorrect moves in maze seconds before motor event
  • Maybe used to organize episodic memory
  • Are chunked, just like paths and memory
    • Limits error in long sequences
    • Is chunking like a hash tree?
Some complex ideas about implementation in theta waves that I can't follow, but apparently explains:

• Fine resolution near recalled event/location, coarse structure elsewhere
• Limited number of concurrently recalled events/locations
• Long-distance jumps between events/locations (related to chunking?)
• Compressed recall eg episodic recall tends to focus around highlights/lowlights rather than being linear in time
• Why episodic recall plays out in real-time - tied to same mechanism that implements subjective time tracking

Maybe this explains why word2vec works? Are we just reverse-engineering the mind's spatial relationships?

Questions:

• Encoding/meaning of firing patterns
• Other animals have similar cells but that are not theta modulated - do they have some substitute system?
• What does the representation space look like (size, layout)?
• How does the cell layout vary between rodents and primates? Do some areas grow out of proportion?
• ?
• Does awareness of recollections require only the prefrontal cortex, or also interaction with the rest of the cerebral cortex?

## The role of the hippocampus in navigation is memory

Place cells, grid cells etc seem to imply that the hippocampus provides navigation. Paper argues that the evidence actually shows that it provides general cognitive maps and that navigation is just one use case.

• Search
  • No active goal orientation
  • Just movement and goal recognition
• Target approaching
  • Orienting towards observable goal
• Guidance
  • Towards pre-calculated goal location
  • eg defined by relationship between multiple landmarks
  • Requires some spatial computation, and thereafter is just target approaching
• Wayfinding
  • Recognizing and approaching landmarks
  • Joining landmarks into route
  • Joining routes together into topological map
  • Embed known routes/maps into common frame of reference
  • Supports novel routes, detours, shortcuts

Rats with hippocampal lesions:

• Can handle route navigation (eg turn left at T) - presumably recognition-triggered
• Can handle alternating routes - again presumably recognition-triggered - but not if delays are inserted
• Can handle guidance navigation with single route (eg water maze task - memorizing location of invisible platform relative to objects on wall - same starting point)
• Can't handle guidance navigation with multiple routes (eg water maze task with different starting points)
• Can't handle survey navigation (eg maze rotated after learning)
• May or may not be able to handle path integration
  • (and both rats and humans suck at it anyway)
  • In one experiment, humans could but rats couldn't
  • In another, rats were impaired even when visual cues existed => maybe the problem is forgetting where the goal is
  • Recording studies haven't found compelling evidence of hippocampal neurons involved in path integration
  • Grid cell firing patterns degrade in the dark => they don't work well with path integration alone

Humans with hippocampal lesions:

• Can navigate by reading a map
• Can handle guidance navigation and path integration, so long as it fits in working memory
• Can describe routes in areas they knew before damage

Working theory:

• Hippocampus is required for survey navigation.
• But survey navigation is sometimes used even when lower-level strategies would suffice, explaining failures on simpler tasks
  • eg when foraging for food in open field, see firing patterns in grid cells et al, see place cells fire in sequence when navigating to regular food drops, see map updates when goal locations change
  • eg when disoriented animals reorient, they use local geometry even if prominent landmark is available
• Hippocampus probably not required for path integration, except to remember starting point and goal

Evidence that different spatial mappings are used for different tasks within the same environment.

Hippocampus maps abstract spaces:

• Rats with lesions can learn direct SR but not transitive
• Humans with lesions have higher deficits for order of events than for direct recall
• Rats with lesions can recognize odors but not recall order in which they were presented
• Interesting signals in human brains when presented with social or associative problems
• Similarly to spatial tasks, some memory tasks engage hippocampal relational processing even when not required (this paragraph seems to contradict itself?)

Imaging suggests that hippocampus is not continuously involved when using cognitive maps in navigation, but only when learning or when planning/altering routes.

Speculation that hippocampus originally evolved for navigation but was co-opted for abstract relationships. (How does hippocampus size vary across species?)

## Lecture 3

Divide into declarative vs non-declarative memory no longer seems to be carving at the joints:

• HM couldn't learn maze routes but could learn mirror drawing.
• House task - recall vs recognition of complex spatial arrangements (front doors and porches). Suddenly recall tanks for patients.
• Patients impaired at statistical learning of relationships and associations.
• Mountain task - normal when matching color/time-of-day but impaired when matching arrangement/rotation.
• Lesioned rats can detect novel objects and novel placements but can't pair placement with background context.

Pattern separator vs pattern completer:

• Old people struggle at pattern separation (old vs similar).
• CA1 responds to any difference, CA3/DG responds to degree of difference.

Patients learn facts at school, have high IQ and get good grades.

Use fMRI to detect 60° (sixfold) periodicity in humans when navigating => grid cells. Periodicity correlates with success on spatial memory task.

Experiment suggesting that periodicity can be observed even for abstract spaces, by pairing a coordinate system with bird pictures of varying neck and leg length.

Something analogous to space cells for time observed in rats.

## Uniting the Tribes of Fluency to Form a Metacognitive Nation

Theory: the difficulty of a cognitive task (from fluent to non-fluent) is used as a meta-cognitive cue that feeds into other judgments via 'naive theories' aka heuristics.

Fluency:

• Perceptual
  • Physical eg illegible text, varying contrast
  • Temporal eg briefly flashed images
• Memory
  • Retrieval eg availability heuristic
  • Encoding eg memorization techniques
• Embodied (not connected to judgments by the references here)
  • Facial expressions eg smiling in math class
  • Body feedback eg mirror writing
• Linguistic
  • Phonological eg pronounceable vs unpronounceable letter strings
  • Lexical eg familiar vs unfamiliar synonyms
  • Syntactic eg sentence tree structure
  • Orthographic eg using other alphabets, 12% vs twelve percent (reading latex?)
• Conceptual eg priming with structurally similar explanations, semantic coherence
• Spatial reasoning eg rotating shapes (not connected to judgments by the references here)
• Imagery eg imagining hypothetical scenarios
• Decision eg jam choices

Judgments:

• Truth
• Liking
• Confidence

Discounting - if fluency is recognized, subject corrects and may even over-correct.

Seems like discounting provides a lot of adjustment room in this theory. How to falsify? Could try varying eg legibility over a wide scale and looking for a discounting effect.

## Lecture 4

Fluency can induce:

• familiarity
• likability
• dis-likability (but not replicated)
• perception of lighter or darker image (but not replicated)
• judgments of fame (abolished by eating popcorn)
• judgments of danger (abolished by eating popcorn)
• volume of background noise

Familiarity seems like a reasonable heuristic - exposure => fluency, so assume fluency => exposure.

Explanation for the popcorn is that it prevents subvocalisation so can't judge pronunciation fluency of words.

Others make less sense to me.

Notable that the class was typically split when asked to predict outcome of experiments ie proposed mechanism is so vague that either outcome is plausible.

Other 'constructs':

• Subjects reconstruct past to create useful narratives
• Subjects claim even under strong pressure to remember seeing events that only their partner saw
• Subjects remember seeing words when only related words were present

Not worth reviewing, not confident in results.

## Understanding face recognition

Broad view of facial recognition, including processes like retrieving information about the face's owner.

What information might components of facial recognition produce?

• Pictorial - when viewing static photo, reconstruct some 3d representation after correcting for lighting, grain etc
• Structural - angle/lighting/expression-invariant model of face shape/structure usable for recognition
  • Identifiable from low-res photos and caricatures
  • Pictorial vs structural - recognition of photos of strangers' faces is impaired by changing angle/lighting => structural representation takes time to build up.
  • Recognition of familiar faces is less impaired by changes to external features => over long term, representation picks up on more unchangeable details eg feature arrangement vs hair color
  • Recognition from restricted (eg just eyes) and occluded (eg wearing sunglasses) views => heavy redundancy in structural code
• Visually-derived semantic eg age, gender, similar faces
• Identity-specific semantic eg occupation, friends
  • Slower than recognition alone
• Name
  • Separated from identity-specific because it is sometimes uniquely affected by injury
  • Often get familiarity without identity, or identity without name. But name without identity would be surprising.
  • Usually try to get name by searching for further identity details, suggests it's attached to identity rather than directly to structural info.
  • Slower than identity-specific semantic alone
• Expression
• Facial speech - everyone lip-reads a little.
  • Separated from recognition by injury in both directions

Open questions:

• Finer-grained breakdown of cognitive processes involved.
• Do we decide that something is a face and then apply facial recognition or vice versa?
• How is contextual information included? eg not recognizing someone because you didn't expect to see them in that place

## Are faces special?

Are there dedicated cognitive processes for facial processing, or do we just reuse generic object recognition?

Main arguments:

• Face-directed activity in infants => innate
• Holistic recognition only occurs for faces, not other objects
• There are face-specific neural representations

Main challenges:

• Most experiments test within-class discrimination for faces vs between-class discrimination for objects - may be different processes
• Expertise hypothesis - maybe similar results for any class that is well practiced eg dog judge recognizing different dogs

Innate:

• Newborn babies can distinguish similar faces even after changing hair and viewpoint
• Same for young monkeys with no previous exposure to faces
• But only for upright faces
• Perceptual narrowing to faces of familiar races occurs

Holistic/configural processing vs within-class discrimination:

• Inversion effects much stronger for faces than within other classes
• Inversion effects occur for ambiguous patterns that are primed as faces, but not if primed as characters
• Part-whole effect - much better recognition for face parts when presented in a face vs alone, not for objects
• Composite effect - much worse recognition for top half with non-matching bottom half than top half alone, not for objects
• Inversion effects for objects disappear with repeated trials, but not for faces.

Neural:

• Monkeys and humans show face-selective cells in large clusters
• Can be disrupted with TMS
• Face and object discrimination can be separated by injury
• FFA is strongly activated by face tasks but (usually) not by object tasks

Expertise:

• No holistic effects found in object experts (eg radiologists, ornithologists)

Argument that too many studies rely on significant vs not-significant, rather than testing interactions.

## Lecture 5

Are faces special?

• Functional specificity - specialized mechanisms
• Neural specificity - implemented in face-selective areas/neurons/cells
• Holistic - face is not represented as collection of parts, but as single object. (Tricky to pin down - makes more sense relative to later experiments.)
• Configural - face representation depends on spatial configuration of features, not just features alone

Face recognition could be:

• Domain-general object recognition (item-level hypothesis)
• Domain-specific object recognition (eg expertise hypothesis)
• Face-specific (face-specificity hypothesis)
• Some mixture of the above

Behavioral experiments:

• Have to separate 'face' from 'low-level details that happen to occur in faces' - inverted faces are good control
• Face inversion effect - face recognition impaired much more by inversion than other expert objects
  • But much more expert in faces than anything else
  • Experiments testing correlation between degree of expertise and inversion effect have mixed results - still unsettled
• Face-composite effect - easier to tell if top halves of faces are different when bottom halves are misaligned
• Part-whole effect - easier to discriminate features in context of whole face, rather than alone
  • (Face-composite and part-whole seem directly opposed?)
• Both effects much stronger for faces vs objects of expertise
• Measures of degree of holistic processing? Comparing strengths of effects within subjects:
  • Inversion ~ part-whole = 0.28
  • Inversion ~ composite = -0.03
  • Part-whole ~ composite = 0.05
  • Inversion ~ face recognition = 0.42
  • Part-whole ~ face recognition = 0.25
  • Composite ~ face recognition = 0.04
  • Would expect strong correlations all round

Neural experiments:

• In fMRI, FFA reacts more strongly to faces vs objects
  • Low-level features? Faces vs scrambled faces.
  • Item-level recognition? Faces vs houses/porches.
  • Animate objects? Faces vs hands.
• But stronger response for inverted faces.
  • More processing for triggered-but-failed recognition?
• Similar results for other object categories in other areas - indicates other specificities?
  • Places
  • Visual words
  • Bodies
  • Other people's thoughts
• Similar results for single-cell recordings in monkeys
• Can find cells which react linearly to continuous changes in several of many face features
• Deep brain stimulation results in mis-recognition
• Face space (Chang & Tsao 2017)
  • Use PCA to choose vectors in face space
  • Found face cells that react only to single vectors
  • Can reconstruct faces from cell responses

Medical cases:

• Prosopagnosia (developmental in ~2% of population)
  • Module defect or the tail of a bell curve?
  • Most visible symptom of general object agnosia? Some prosopagnosiacs have normal object recognition
  • Impairment of item-level recognition? Some prosopagnosiacs have normal item-level recognition
  • Impaired recognition of visually similar forms? Some prosopagnosiacs score normally on differentiation of morphed objects, as long as they are not faces
  • Impaired recognition of objects-of-expertise? WJ learned to recognize sheep at expert levels after injury.
• Some subjects with object agnosia can recognize faces made out of vegetables, but can't recognize the vegetables => independent mechanisms, not superset

Innate:

• Babies orient more towards face-like arrangements
• Subject with upside-down head shows normal recognition accuracy on inverted faces, and > inverted accuracy on normal faces
  • (Surprised by interpretation. Also, maybe vision is flipped upstream?)

## Lecture 6

Skipped the reading this week :S

Social cognition - 'the psychological processes that result from inferring the actual, imagined, or implied mental state of another'

Affect is creeping back into models of decision-making.

Moving away from 2-process model because of neuro evidence - clearly many systems involved.

What makes a process automatic? Not requiring:

• Intent
• Capacity
• Effort
• Awareness

Rare for any given process to hit all 4.

Illusion of agency - maybe intent does not exist.

Debate over value of heuristics vs rationality.

Mentalizing:

• inferring intentions, goals, desires of other mind (or own mind?)
• typically care about intent and capability (eg warmth, competence etc)

When do we attribute responsibility to an agent for an action?

• Jones says single behavior => specific intent when:
  • given choice
  • has capability
  • departs from behavior of other agents
  • behaves differently in other contexts / with other targets
• Kelley says behavior over time => disposition when:
  • departs from behavior of other agents
  • behaves differently in other contexts / with other targets
  • consistently behaves in this way in this context

John laughs at the comedian. No one else laughs at the comedian. John laughs at every comedian. John laughs at the comedian every time. => Behavior is attributable to John, not to the comedian.

Experimentally, seems to be less sensitive to consensus than the other two.

Attribute agency to objects similarly, but not moral status eg 'computer said no' but don't feel bad for throwing the computer away. How do we tell the difference?

Emotions hard to define:

• Facial expressions are interpreted in context - changing context changes perception
• No 1-1 mapping from face muscles to emotions - complex signal
• Much disagreement on mapping emotions to brain regions
• Anxious reappraisal
• Self-reported eg happiness easily influenced by context, but discounted if made aware
• Ability to mimic faces is innate, so universality of expressions could be from cultural transmission
• Subjects with amygdala lesions can be fear-conditioned but are not aware of being afraid
• Awareness of own heart rate predicts differing emotional reactions

Dominant theory - emotion as cognitive interpretation of physiological signals.

Behavior change:

• motivation + capacity
• very resistant eg anti-smoking ads
• changing environment almost always easier than changing the person

Default mode = social cognition applied to self?

## Lecture 7

Examples of theories that try to unify multiple phenomena:

• Scale invariance
• Decision by sampling
• A theory of magnitude

Scale invariance:

• $y \propto x^\alpha$
• Examples in cogsci:
  • Weber's law - smallest perceptible change : magnitude of stimulus
  • Fechner's law - subjective intensity : physical intensity
    • Exponent varies by sense
  • Fitts' law - time to hit target : log(target distance / target width)
  • Forgetting - recollection probability(?) : time
    • Surprising - exponential decay is a much more natural model
  • Practice - task reaction time : practice time
  • Recall - number of items recalled : time spent recalling
    • Seems not to depend at all on period covered by recall
  • Luce's choice rule and Herrnstein's matching law - probability of choosing item : attractiveness/payoff
• Most examples cover a few ranges of magnitude but fall down at extremes
• Causes?
  • Tends to be null hypothesis since it turns up so often
  • Violations, switching points are interesting

Decision by sampling:

• Need to be able to trade off between utility of different outcomes, subjective probability, time
• Well-calibrated
  • eg prospect theory matches up with empirical distribution of credits/debits into bank accounts, supermarket prices
  • eg temporal discounting matches up with number of google hits / newspaper entries for different durations
  • eg subjective risk evaluation matches up with probability judgments of probabilistic phrases + distribution of phrases in British National Corpus
• How to explain this calibration?
  • Could be caused in other direction - subjective curves => behavior - but hard to see why it would affect distribution in this way.
• Plausible algorithm - no numerical scale, just sample several similar elements and compare to get a rough ranking
• How does sampling work? How is the reference class decided?
  • From memory - choose a reference class - explains framing
  • From context - explains anchoring and effect of irrelevant options
  • From exploration
• How do we translate between reference scales eg trade off time vs money?
  • Poorly, usually.
  • CFAR's 'units of exchange' provides anchors / exchange rate?
• Picoeconomics claims willpower problems caused by hyperbolic discounting. Can we change the discounting curve by changing sampling process?

A theory of magnitude:

• Walsh 2003
• Proposes that time, space and number are represented by same mechanism
• Poorly supported, lecturer expects it to be wrong but useful as research direction
• Time and space usually need to be processed together eg for motor action, predicting movement
  • Plausible that number sense piggybacks on same system
• Number vs space (well supported):
  • Quicker to distinguish numbers that have larger differences (ie further apart on number line)
  • SNARC effect - quicker response to small numbers on left side of vision, large numbers on right side of vision
  • Attention bias effect - quicker to notice stimuli in left when fixated on small number, right when fixated on large number
  • Line bisection effect - left/right bias when picking middle of string depending on number word in string eg "twotwotwotwo"
  • Asymmetric deficits on number tasks in neglect patients / TMS subjects
  • Some subjects describe weird number lines and also deviate from these patterns
• Time vs number (poorly supported):
  • Number tasks and time estimation impair each other
• Time vs space (poorly supported):
  • Subjects imagining 30m activity in scale model take longer for larger models
  • Neglect patients show asymmetric deficits when estimating duration of stimulus in neglected side of field

## Scale-invariance as a unifying psychological principle

Scale invariance common in nature. Psych processes adapted to reflect this?

Clear examples in perception:

• Luminance between sunlight and shade can be 10000x but brightness and color of an object is perceived same in both - visual system processes ratios, not absolute magnitudes
• Similarly for hearing frequency - absolute pitch is rare but relative pitch is common
• Weber's law - difficulty of distinguishing perceptions proportional to ratio of magnitude, not absolute difference
  • But power varies across scale, so not totally clear
• Stevens' law - in >30 perceptual/motor dimensions mapping to numerical scale is power law
• When making judgments on numerical scale, does anchoring a point in the middle shift judgments in a scale-invariant fashion?

Can't be purely scale-invariant, because it is possible to judge magnitudes, but usually poorly.

Not true at all for eg color perception.

Perhaps reflects that the systems themselves are implemented physically.

## A theory of magnitude: common cortical metrics of time, space and quantity

Argues that:

• Hemispheric asymmetry is because numerical calculation tied to language
• Number-selective neurons located in same space as space-selective neurons, and some circumstantial evidence of temporal-sensitive neurons in same area

Explaining interference in terms of attention is way too unconstrained. Sounds like a single theory, but close reading of the literature shows a wide variety of proposed effects and causal mechanisms.

Predicts SNARC should work for any space/action-coded magnitude.

## Decision by sampling

Typical theories of decision-making take utility functions as given. How do we build/calibrate a utility function given basic psychological operations?

To relate this back to previous two papers, how do we get an absolute judgment of utility out of brain systems that are only good at relative, scale-invariant judgments?

Many examples of utility functions (in aggregate) matching cumulative distribution of events in the real world.

Proposes that we sample several items from memory and use these to estimate percentile on empirical distribution.

Many other examples of similar processes:

• Norm theory - judge normality by similarity to sampled events
• Decision field theory - compare alternatives by weighted sampling of advantages on random walk
• Support theory - subjective probability depends on alternative hypotheses sampled
• MINERVA-DM - subjective probability/plausibility based on similarity to sampled events
• Stochastic difference model - ?

Assumes that sampling from memory is a good approximation of sampling from reality. Some evidence for this eg Anderson & Schooler 1991.

Has anyone tested the predicted binomial noise?

Tweaks:

• Temporal discount rate decreases with magnitude of gain. Explained by assuming that time and magnitude are sampled together, not independently.
• Temporal discount rate is higher for gains than losses. Explained by curvature of gain/loss utility interacting with base discount rate - discount applies to utility, not gain/loss directly.
• Working-memory load increases discounting of delayed vs immediate gains. Explained by failing to sample enough large delays - biases score upwards.
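To make the sampling proposal concrete, a toy sketch (my own construction, with invented numbers - not the paper's implementation): the subjective value of a quantity is just its estimated percentile among a handful of comparable quantities sampled from memory.

```r
# Decision by sampling, minimal version: value = rank within a small sample.
dbs_value <- function(amount, memory, k = 5) {
  s <- sample(memory, k)   # a few comparable items retrieved from "memory"
  mean(s < amount)         # estimated percentile, in [0, 1]
}

set.seed(1)
memory <- rexp(1000, rate = 1 / 50)  # invented distribution of remembered amounts
dbs_value(20, memory)                # small relative to experience => low value
dbs_value(200, memory)               # large relative to experience => high value
```

This also makes the binomial-noise prediction mentioned above explicit: with k samples, the percentile estimate is distributed as Binomial(k, p)/k around the true percentile p.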
## Lecture 8

Language is hard to define:

• Clark & Clark 1977
  • Arbitrary - mapping from words to meanings
  • Structured - mapping from sentence to meaning
  • Generative - not limited to fixed set of meanings
  • Dynamic - words and structure change over time
• Hockett 1963 - 13 features, of which 10-13 are claimed to only exist in humans
  • Displacement - refer to things removed in time and space
  • Productivity - create novel utterances/meanings which are nevertheless understood by others
  • Cultural transmission
  • Duality of patterning - generative
  • (But many of these arguably displayed in animals eg Alex the parrot)

Levels of analysis:

• Phonology - phonemes, speech perception, spectrograms
• Semantics - words, semantic priming
• Grammar - hierarchical structure, formal grammars

Classical (Wernicke-Geschwind) model:

• Broca's area = speech production
• Wernicke's area = speech comprehension
• Connected by arcuate fasciculus
• Concentrated in left hemisphere:
  • Wada test - inject sodium amytal into artery to sedate one hemisphere
  • Anatomical asymmetry in related areas
  • Asymmetry in PET and fMRI on language tasks
  • Differences in neuron shape between hemispheres
  • But hugely confounded by motor control which is also asymmetric

Problems with model:

• No clear causal relation between lesions and defects (including patients recovering from defects over time)
• No consistent correlation established by functional imaging
• Voxel-based lesion-symptom mapping identifies different areas
• Evidence for multiple networks for language comprehension
• Right hemisphere dominant for many complex language tasks
• Word-specific activation distributed throughout brain, seemingly paralleling organization of sensory and motor systems eg action words in the motor system

Speech perception is ambiguous - requires top-down processing. Illusion of speech units.

• At phonology level:
  • Segmentation problem - cannot find word/syllable boundaries in spectrogram
  • 'Lack of invariance' problem - phonemes do not have consistent representation in spectrogram
  • Speaking rate eg careful pronunciation vs normal conversation produce different spectrograms
  • Huge variation between accents
• At word level:
  • Homonyms
  • Polysemy eg 'the door fell off its hinge' vs 'the child ran through the door'
• At syntax level:
  • Ambiguous binding
• Combined eg 'Mary made her dress correctly'
• Correct interpretation improved by access to mouth movements, body movements (co-speech), conversational context

Really no reason to continue teaching the Wernicke-Geschwind model.

## The free-energy principle: a unified brain theory?

Summary of Surfing Uncertainty

Summary of The Predictive Mind

Wikipedia on free-energy principle

Variational Bayes:

• Posterior $P(Z \vert X)$ is hard to calculate exactly, so instead we approximate it by some family of distributions $Q_\theta(Z)$
• Want to minimize $D_{\mathrm{KL}}(Q(Z) \Vert P(Z \vert X))$, because we have to minimize something and this is both reasonable and tractable.
• Related - $P_\mathrm{new}(\theta, X) = \mathrm{argmin}_Q D_{KL}(Q(\theta, X) \Vert P_\mathrm{old}(\theta, X)) \text{ subject to } \sum_\theta Q(\theta, X=x) = 1 \text{ and } \sum_\theta Q(\theta, X \neq x) = 0$. Is minimizing distance to posterior equivalent to minimizing distance to prior subject to constraints?
  • Implications for forward vs reverse KL
• Can rewrite as $D_{\mathrm{KL}}(Q \Vert P) = \mathrm{constant} - H(Q) - E_Q[\log{P(Z,X)}]$. Last term (last two terms?) is called 'variational free energy'. Because thermodynamics?
• If $Q$ has some factorization over $Z$ can use calculus of variations (somehow) to produce a set of recursive equations that describe the minimum and which converge under iteration.

Free energy principle:

• $P$ is joint distribution of world model ('causes') and sensory input. Bayesian update on this model predicts future sensory inputs from past sensory inputs, via inferring underlying causes.
• $Q$ is referred to as recognition density. (Why?)
• Express free energy $F$ wrt energy and entropy:
  • $F = -E_Q[\log{P(\text{sense}, \text{cause})}] - H(Q(\text{cause})) = \text{energy} - \text{entropy} = \text{expected surprise} - \text{complexity of model}$
  • Shows that free energy can be evaluated using information that the agent has
• Rewrite free energy $F$ wrt action:
  • IE how much we had to mess with the model vs how much predictive accuracy we gained for the recent sensation
  • The action that minimizes free energy is the one that minimizes surprise about the resulting sensations => act to confirm predictions
  • Hard to interpret. Eg changing point of view to disambiguate optical illusion?
  • Active inference
• Rewrite free energy $F$ wrt sensation:
  • As approximation -> model, $F$ -> surprise
  • Choosing actions and models to minimize $F$ places an upper bound on surprise
• Perceptions feed into online update of $Q$ to more accurately model causes and hence future perceptions.

But we like surprising things? Presumably this is to be explained. Or are actions chosen to minimize $F$ in general, rather than for this specific action?

Relation to infomax principle (maximizing mutual information between sense and model subject to constraints on complexity of model). Complexity term in 1st formulation penalizes more complex models - regularization/shrinking.

The fact that these models predict empirically observed receptive fields so well suggests that we are endowed with (or acquire) prior expectations that the causes of our sensations are largely independent and sparse.

Arranged hierarchically, so each model passes prediction error up and passes predictions down. Precision parameter models noise at each level. High noise => more trust in priors / predictions from above. Low noise => more trust in sensory data from below.

States 'value is inversely proportional to surprise'. (In a particular simple model) if we perform gradient ascent on value, then the long-term proportion of time spent in a state is proportional to value, so surprise is inversely proportional to value. Since we act to minimize free energy, priors can encode values. But does acting to minimize free energy lead to gradient ascent on value?
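(An aside from me, not the paper: the free-energy identities above are easy to sanity-check numerically. A minimal sketch with a made-up discrete model - a 2-state latent $Z$ and a 2-state observation $X$ - verifying that $F = D_{\mathrm{KL}}(Q \Vert P(Z \vert X)) - \log P(X)$, so that free energy upper-bounds surprise.)

```r
# Toy joint P(Z, X): rows = latent states, cols = observations. Made-up numbers.
P <- matrix(c(0.3, 0.1,
              0.2, 0.4), nrow = 2, byrow = TRUE)
x <- 1               # suppose we observe X = 1
Q <- c(0.7, 0.3)     # arbitrary approximate posterior Q(Z)

energy  <- -sum(Q * log(P[, x]))   # E_Q[-log P(Z, X = x)]
entropy <- -sum(Q * log(Q))        # H(Q)
fe <- energy - entropy             # variational free energy

posterior <- P[, x] / sum(P[, x])            # exact P(Z | X = x)
kl <- sum(Q * log(Q / posterior))            # D_KL(Q || posterior)
all.equal(fe, kl - log(sum(P[, x])))         # TRUE: F = KL + surprise
```

Since the KL term is non-negative, minimizing $F$ over $Q$ drives it down toward the surprise $-\log P(X)$, which is the sense in which modelling and acting to minimize $F$ bounds surprise.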
The value-as-inverse-surprise argument seems backwards.

Starting to get flashes of picoeconomics here - recursive relation between model of the future and model of own decision making.

Many references to more general connections between minimizing free energy and defying thermodynamics over the lifetime of the agent, which I don't follow at all.

## Active Inference, Curiosity and Insight

Various activities can be explained as acting to reduce uncertainty:

• Hidden states -> perceptual inference
• Future states -> information-seeking behavior, intrinsic motivation
• Future outcomes -> goal-seeking behavior, extrinsic motivation
• World model / parameters -> novelty-seeking behavior, curiosity

To infer expected free energy, we need priors on our own behavior.

• Minimizing free energy == avoiding surprise
• Minimizing expected free energy == acting to resolve uncertainty
• Need prior on our own behavior to calculate expected free energy. Active inference == prior that we will minimize free energy.

Using example of learning complex rules by active inference. Use prior beliefs about own behavior to encode rules of task, in a way that I don't understand.

Non-REM sleep. In absence of new sensory input, minimizing free energy => minimizing model complexity vs accuracy. Pruning as regularization.

REM sleep. After pruning parameters, need to reevaluate posterior. Can do this by re-simulating observed evidence.

Superstition as premature pruning.

Open confusions: choice of action vs expected free energy, encoding values as priors, explore vs exploit, precision. Suspect that many of these would be resolved by implementing one of the examples.

## Lecture 9

Value of actions can depend on order eg find food then eat vs eat then find food. So have to evaluate policies, not individual actions.

$\sigma$ is softmax.

Penalizes divergence between $Q$ and $P_\text{prior}$, can set prior on future state to encode value. Not clear how to encode non-bounded tasks.

Bear in mind that we are summing log-probabilities == multiplying probabilities. So states that have 0 on any of the decompositions are still worthless overall.

Depression, self-destructive behavior etc explained as malformed priors.

From discussion afterwards:

• Example models don't show precision. When used, it's often fixed to a constant unless they are trying to model dopamine.
• Policies are a pure function of Q - so not timeless but not directly depending on time either - allows controlling how much memory the model has by controlling what Q remembers of the past.
• In examples the path integral is trivial, but in more complex models use time slicing?

## Lecture 10

Embodied cognition - cognitive processes rooted in perception and action, knowledge not stored as abstract symbolic representation but derived on the fly from perception (past or present) and action.

Doesn't seem to pin down a clear hypothesis, which makes it difficult to figure out which experiments support which version of the theory.

Eg language:

• Some support for abstract representation
  • eg mix up phonemes sometimes => phonemes are a unit at some level of processing
• But how is abstract representation connected to the world?
• Embodied metaphors eg future is in front, past is behind.

Usually attempt to demonstrate embodiment by demonstrating interaction between cognition and perception/action.

Classic experiments which failed to replicate:

• Hold pen in mouth to create frown or smile, affects humor rating of cartoons
• Adverts which suggest an action matching the viewer's handedness are preferred, reversed when hand is already occupied
• Parsing action sentences as valid/invalid is quicker when the correct option is presented in a location that matches the action direction
• Hearing words associated with body parts activates same brain region as moving those body parts

Presented several other experiments which have yet to be replicated. Effect sizes are typically <1%.

Think of embodiment as a spectrum from purely symbolic/logical to fully embodied. Claim evidence does not strongly support either end of the spectrum.

Models of embodiment underspecified. Any effect of the body on thought taken as evidence for embodiment without understanding of how embodiment works. We should be able to explain the pattern of results, not just whether embodiment is there or not.
http://math.stackexchange.com/users/52694/mike-battaglia | # Mike Battaglia
Member for 1 year, 11 months; last seen Sep 3 at 4:03; 99 profile views.
# 19 Questions
• 14 Integer sequences which quickly become unimaginably large, then shrink down to “normal” size again?
• 14 Elements in $\hat{\mathbb{Z}}$, the profinite completion of the integers
• 12 Tensor products of p-adic integers
• 12 Is the theory of dual numbers strong enough to develop real analysis, and does it resemble Newton's historical method for doing calculus?
• 9 Is there an explicit embedding from the various fields of p-adic numbers $\mathbb{Q}_p$ into $\mathbb{C}$?
# 591 Reputation
• +5 Integer sequences which quickly become unimaginably large, then shrink down to “normal” size again?
• +10 Is the theory of dual numbers strong enough to develop real analysis, and does it resemble Newton's historical method for doing calculus?
• +5 Injective norm on tensor algebra of a finite-dimensional Banach space
• +5 Elements in $\hat{\mathbb{Z}}$, the profinite completion of the integers
• 2 Questions regarding p-adic expansion and numbers
# 47 Tags
• 2 p-adic-number-theory × 4
• 0 set-theory × 4
• 2 number-theory × 3
• 0 geometry × 3
• 2 prime-numbers
• 0 category-theory × 3
• 0 abstract-algebra × 5
• 0 combinatorics × 3
• 0 ring-theory × 4
• 0 order-theory × 3
# 8 Accounts
• Mathematics 591 rep
• MathOverflow 379 rep
• Bitcoin 171 rep
• Cryptography 124 rep
• Stack Overflow 121 rep
http://www.talkstats.com/threads/log-rank-or-other-test-for-difference-between-survival-curves.25008/ | # log-rank (or other) test for difference between survival curves
#### @nthRo
##### New Member
Hi, all.
I have co-opted some R code that estimates parameters for a Gompertz-Makeham mortality model. The data to which the model is fit is interval censored, with counts of individuals in each age-range category. For example:
Code:
naga <- structure(c(15,20,35,50,20,35,50,Inf,46,310,188,101),.Dim=as.integer(c(4,3)),.Dimnames=list(NULL,c("col1","col2","col3")))
The first and second columns are lower/upper limits for each age-range category; the third column is the number of individuals observed in each group. The Gompertz-Makeham parameters are estimated by the following function:
Code:
GM.naga <- function(x, deaths = naga)
{
  a2 <- x[1]      # Makeham (age-independent) hazard component
  a3 <- x[2]      # Gompertz baseline hazard at the shift age
  b3 <- x[3]      # Gompertz rate of ageing
  shift <- 15     # survival is measured from age 15
  nrow <- NROW(deaths)
  S.t <- function(t)
  {
    return(exp(-a2 * (t - shift) + a3 / b3 * (1 - exp(b3 * (t - shift)))))
  }
  # probability of dying within each age interval, under the model
  d <- S.t(deaths[1:nrow, 1]) - S.t(deaths[1:nrow, 2])
  obs <- deaths[, 3]
  # multinomial log-likelihood of the observed interval counts
  lnlk <- as.numeric(crossprod(obs, log(d)))
  return(lnlk)
}
# fnscale = -1 makes optim maximise the log-likelihood instead of minimising it
optim(c(0.001, 0.01, 0.1), GM.naga, control = list(fnscale = -1))
And to get the survival values at each age, from 15-70 years...
Code:
surv_GM.naga <- function(t)
{
  x <- c(5.227131e-09, 2.208408e-02, 4.577932e-02)  # fitted (a2, a3, b3) from optim above
  a2 <- x[1]
  a3 <- x[2]
  b3 <- x[3]
  shift <- 15
  S.t <- exp(-a2 * (t - shift) + a3 / b3 * (1 - exp(b3 * (t - shift))))
  return(S.t)  # was `return<-S.t`, which merely assigns to a variable named `return`
}
surv_GM.naga(15:70)
If I estimated hazard parameters for another sample, how could I compare the two survival distributions to see if they are significantly different? Thanks.
--Trey
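One possible route (a sketch from me, not a reply from the original thread): since both curves come from parametric maximum-likelihood fits, a likelihood-ratio test is more natural here than a log-rank test, which needs individual event times rather than binned counts. Fit each sample separately, fit both under one shared parameter set, and compare. The second table naga2 below is hypothetical, assumed to be in the same (lower, upper, count) format as naga.

Code:
# naga2 is hypothetical: a second interval-censored sample, same format as naga
fitA <- optim(c(0.001, 0.01, 0.1), GM.naga, deaths = naga,
              control = list(fnscale = -1))
fitB <- optim(c(0.001, 0.01, 0.1), GM.naga, deaths = naga2,
              control = list(fnscale = -1))
# Null model: one shared (a2, a3, b3) for both samples. The log-likelihood
# is a sum over rows, so stacking the two tables fits both samples with
# common parameters.
fitP <- optim(c(0.001, 0.01, 0.1), GM.naga, deaths = rbind(naga, naga2),
              control = list(fnscale = -1))
# Likelihood-ratio statistic, asymptotically chi-squared with 3 df
# (the three parameters constrained to be equal under the null):
lr <- 2 * ((fitA$value + fitB$value) - fitP$value)
pchisq(lr, df = 3, lower.tail = FALSE)  # approximate p-value

This tests whether all three Gompertz-Makeham parameters differ jointly between the samples; with only four age bins per sample the chi-squared approximation is rough, so treat the p-value with care.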
http://www2.macaulay2.com/Macaulay2/doc/Macaulay2-1.19/share/doc/Macaulay2/Polyhedra/html/_is__Pointed.html | # isPointed -- checks if a Cone or Fan is pointed
## Synopsis
• Usage:
b = isPointed C
b = isPointed F
• Inputs:
• C, an instance of the type Cone
• F, an instance of the type Fan
• Outputs:
• b, a Boolean value, true if the Cone or the Fan is pointed, false otherwise
## Description
Tests if a Cone is pointed, i.e. the lineality space is 0. A Fan is pointed if one of its Cones is pointed. This is equivalent to all Cones being pointed.
i1 : C = coneFromHData(matrix{{1,1,-1},{-1,-1,-1}})
o1 = C
o1 : Cone
i2 : isPointed C
o2 = false
i3 : C = intersection{C, coneFromHData(matrix{{1,-1,-1}})}
o3 = C
o3 : Cone
i4 : isPointed C
o4 = true
## Ways to use isPointed :
• "isPointed(Cone)"
• "isPointed(Fan)"
## For the programmer
The object isPointed is a method function.
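For intuition outside Macaulay2 (my addition, not part of this documentation page): a cone given in H-representation {x : Ax >= 0} has lineality space ker(A), so it is pointed exactly when A has full column rank. A quick check of the two cones from the example session, sketched in R:

```r
# Pointed iff the lineality space ker(A) is {0}, i.e. A has full column rank.
is_pointed <- function(A) qr(A)$rank == ncol(A)

A <- matrix(c( 1,  1, -1,
              -1, -1, -1), nrow = 2, byrow = TRUE)
is_pointed(A)                        # FALSE, matching o2 (rank 2 < 3 columns)
is_pointed(rbind(A, c(1, -1, -1)))   # TRUE, matching o4 (rank 3)
```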
https://www.clutchprep.com/chemistry/practice-problems/103744/carbon-disulfide-a-poisonous-flammable-liquid-is-an-excellent-solvent-for-phosph-1 | # Problem: Carbon disulfide, a poisonous, flammable liquid, is an excellent solvent for phosphorus, sulfur, and some other nonmetals. A kinetic study of its gaseous decomposition gave these data: Calculate the average value of the rate constant.
###### Problem Details
Carbon disulfide, a poisonous, flammable liquid, is an excellent solvent for phosphorus, sulfur, and some other nonmetals. A kinetic study of its gaseous decomposition gave these data:
Calculate the average value of the rate constant. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9393371343612671, "perplexity": 4875.477317293462}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655906214.53/warc/CC-MAIN-20200710050953-20200710080953-00326.warc.gz"} |
http://nrich.maths.org/public/leg.php?code=-99&cl=2&cldcmpid=1235 | # Search by Topic
#### Resources tagged with Working systematically similar to Estimating Angles:
### There are 325 results
Broad Topics > Using, Applying and Reasoning about Mathematics > Working systematically
### First Connect Three
##### Stage: 2 and 3 Challenge Level:
The idea of this game is to add or subtract the two numbers on the dice and cover the result on the grid, trying to get a line of three. Are there some numbers that are good to aim for?
### Magic Potting Sheds
##### Stage: 3 Challenge Level:
Mr McGregor has a magic potting shed. Overnight, the number of plants in it doubles. He'd like to put the same number of plants in each of three gardens, planting one garden each day. Can he do it?
### First Connect Three for Two
##### Stage: 2 and 3 Challenge Level:
First Connect Three game for an adult and child. Use the dice numbers and either addition or subtraction to get three numbers in a straight line.
### Colour Islands Sudoku
##### Stage: 3 Challenge Level:
An extra constraint means this Sudoku requires you to think in diagonals as well as horizontal and vertical lines and boxes of nine.
### Medal Muddle
##### Stage: 3 Challenge Level:
Countries from across the world competed in a sports tournament. Can you devise an efficient strategy to work out the order in which they finished?
### Football Sum
##### Stage: 3 Challenge Level:
Find the values of the nine letters in the sum: FOOT + BALL = GAME
### Multiples Grid
##### Stage: 2 Challenge Level:
What do the numbers shaded in blue on this hundred square have in common? What do you notice about the pink numbers? How about the shaded numbers in the other squares?
### Inky Cube
##### Stage: 2 and 3 Challenge Level:
This cube has ink on each face which leaves marks on paper as it is rolled. Can you work out what is on each face and the route it has taken?
### Button-up Some More
##### Stage: 2 Challenge Level:
How many ways can you find to do up all four buttons on my coat? How about if I had five buttons? Six ...?
### Seven Flipped
##### Stage: 2 Challenge Level:
Investigate the smallest number of moves it takes to turn these mats upside-down if you can only turn exactly three at a time.
### Fault-free Rectangles
##### Stage: 2 Challenge Level:
Find out what a "fault-free" rectangle is and try to make some of your own.
### Factor Lines
##### Stage: 2 Challenge Level:
Arrange the four number cards on the grid, according to the rules, to make a diagonal, vertical or horizontal line.
##### Stage: 3 Challenge Level:
Rather than using the numbers 1-9, this sudoku uses the nine different letters used to make the words "Advent Calendar".
### Isosceles Triangles
##### Stage: 3 Challenge Level:
Draw some isosceles triangles with an area of $9$cm$^2$ and a vertex at (20,20). If all the vertices must have whole number coordinates, how many is it possible to draw?
### Twinkle Twinkle
##### Stage: 2 Challenge Level:
A game for 2 people. Take turns placing a counter on the star. You win when you have completed a line of 3 in your colour.
### Broken Toaster
##### Stage: 2 Short Challenge Level:
Only one side of a two-slice toaster is working. What is the quickest way to toast both sides of three slices of bread?
### Ben's Game
##### Stage: 3 Challenge Level:
Ben passed a third of his counters to Jack, Jack passed a quarter of his counters to Emma and Emma passed a fifth of her counters to Ben. After this they all had the same number of counters.
### Masterclass Ideas: Working Systematically
##### Stage: 2 and 3 Challenge Level:
A package contains a set of resources designed to develop students’ mathematical thinking. This package places a particular emphasis on “being systematic” and is designed to meet…
### American Billions
##### Stage: 3 Challenge Level:
Play the divisibility game to create numbers in which the first two digits make a number divisible by 2, the first three digits make a number divisible by 3...
### Triangles to Tetrahedra
##### Stage: 3 Challenge Level:
Starting with four different triangles, imagine you have an unlimited number of each type. How many different tetrahedra can you make? Convince us you have found them all.
### Cayley
##### Stage: 3 Challenge Level:
The letters in the following addition sum represent the digits 1 ... 9. If A=3 and D=2, what number is represented by "CAYLEY"?
### More on Mazes
##### Stage: 2 and 3
There is a long tradition of creating mazes throughout history and across the world. This article gives details of mazes you can visit and those that you can tackle on paper.
### Number Daisy
##### Stage: 3 Challenge Level:
Can you find six numbers to go in the Daisy from which you can make all the numbers from 1 to a number bigger than 25?
### Ones Only
##### Stage: 3 Challenge Level:
Find the smallest whole number which, when multiplied by 7, gives a product consisting entirely of ones.
### Triangles All Around
##### Stage: 2 Challenge Level:
Can you find all the different triangles on these peg boards, and find their angles?
### Counting on Letters
##### Stage: 3 Challenge Level:
The letters of the word ABACUS have been arranged in the shape of a triangle. How many different ways can you find to read the word ABACUS from this triangular pattern?
### Sums and Differences 1
##### Stage: 2 Challenge Level:
This challenge focuses on finding the sum and difference of pairs of two-digit numbers.
### Tetrahedra Tester
##### Stage: 3 Challenge Level:
An irregular tetrahedron is composed of four different triangles. Can such a tetrahedron be constructed where the side lengths are 4, 5, 6, 7, 8 and 9 units of length?
### Sums and Differences 2
##### Stage: 2 Challenge Level:
Find the sum and difference between a pair of two-digit numbers. Now find the sum and difference between the sum and difference! What happens?
### Crossing the Town Square
##### Stage: 2 and 3 Challenge Level:
This tricky challenge asks you to find ways of going across rectangles, going through exactly ten squares.
### How Long Does it Take?
##### Stage: 2 Challenge Level:
In this matching game, you have to decide how long different events take.
##### Stage: 3 Challenge Level:
A few extra challenges set by some young NRICH members.
### Family Tree
##### Stage: 2 Challenge Level:
Use the clues to find out who's who in the family, to fill in the family tree and to find out which of the family members are mathematicians and which are not.
### Wonky Watches
##### Stage: 2 Challenge Level:
Stuart's watch loses two minutes every hour. Adam's watch gains one minute every hour. Use the information to work out what time (the real time) they arrived at the airport.
### A First Product Sudoku
##### Stage: 3 Challenge Level:
Given the products of adjacent cells, can you complete this Sudoku?
### A-magical Number Maze
##### Stage: 2 Challenge Level:
This magic square has operations written in it, to make it into a maze. Start wherever you like, go through every cell and go out a total of 15!
### Plate Spotting
##### Stage: 2 Challenge Level:
I was in my car when I noticed a line of four cars on the lane next to me with number plates starting and ending with J, K, L and M. What order were they in?
### Bean Bags for Bernard's Bag
##### Stage: 2 Challenge Level:
How could you put eight beanbags in the hoops so that there are four in the blue hoop, five in the red and six in the yellow? Can you find all the ways of doing this?
### Fake Gold
##### Stage: 2 Challenge Level:
A merchant brings four bars of gold to a jeweller. How can the jeweller use the scales just twice to identify the lighter, fake bar?
### Pasta Timing
##### Stage: 2 Challenge Level:
Nina must cook some pasta for 15 minutes but she only has a 7-minute sand-timer and an 11-minute sand-timer. How can she use these timers to measure exactly 15 minutes?
### It Figures
##### Stage: 2 Challenge Level:
Suppose we allow ourselves to use three numbers less than 10 and multiply them together. How many different products can you find? How do you know you've got them all?
### Room Doubling
##### Stage: 2 Challenge Level:
Investigate the different ways you could split up these rooms so that you have double the number.
### 1 to 8
##### Stage: 2 Challenge Level:
Place the numbers 1 to 8 in the circles so that no consecutive numbers are joined by a line.
### All Seated
##### Stage: 2 Challenge Level:
Look carefully at the numbers. What do you notice? Can you make another square using the numbers 1 to 16, that displays the same properties?
### Counters
##### Stage: 2 Challenge Level:
Hover your mouse over the counters to see which ones will be removed. Click to remove them. The winner is the last one to remove a counter. How can you make sure you win?
### Uncanny Triangles
##### Stage: 2 Challenge Level:
Can you help the children find the two triangles which have the lengths of two sides numerically equal to their areas?
### How Much Did it Cost?
##### Stage: 2 Challenge Level:
Use your logical-thinking skills to deduce how much Dan's crisps and ice-cream cost altogether.
### Chocoholics
##### Stage: 2 Challenge Level:
George and Jim want to buy a chocolate bar. George needs 2p more and Jim needs 50p more to buy it. How much is the chocolate bar?
### Ice Cream
##### Stage: 2 Challenge Level:
You cannot choose a selection of ice cream flavours that completely includes what someone has already chosen. Have a go and find all the different ways in which seven children can have ice cream.
https://www.usgs.gov/media/images/another-look-margin-kahauale-a-2-flow-small-vegetat | # Another look at the margin of the Kahauale‘a 2 flow. Small vegetat...
## Detailed Description
Another look at the margin of the Kahauale‘a 2 flow. Small vegetation fires triggered by the active lava spread a short distance out from the flow margin.
## Details
Image Dimensions: 4608 x 3456
Date Taken: | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9402595162391663, "perplexity": 13517.419391948375}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540518882.71/warc/CC-MAIN-20191209121316-20191209145316-00300.warc.gz"} |
https://chem.libretexts.org/Under_Construction/Purgatory/Book%3A_Analytical_Chemistry_2.0_(Harvey)/12_Chromatographic_and_Electrophoretic_Methods/12.7%3A_Electrophoresis | # 12.7: Electrophoresis
Electrophoresis is a class of separation techniques in which we separate analytes by their ability to move through a conductive medium—usually an aqueous buffer—in response to an applied electric field. In the absence of other effects, cations migrate toward the electric field’s negatively charged cathode. Cations with larger charge-to-size ratios—which favors ions of larger charge and of smaller size—migrate at a faster rate than larger cations with smaller charges. Anions migrate toward the positively charged anode and neutral species do not experience the electrical field and remain stationary.
There are several forms of electrophoresis. In slab gel electrophoresis the conducting buffer is retained within a porous gel of agarose or polyacrylamide. Slabs are formed by pouring the gel between two glass plates separated by spacers. Typical thicknesses are 0.25–1 mm. Gel electrophoresis is an important technique in biochemistry where it is frequently used for separating DNA fragments and proteins. Although it is a powerful tool for the qualitative analysis of complex mixtures, it is less useful for quantitative work.
In capillary electrophoresis, the conducting buffer is retained within a capillary tube whose inner diameter is typically 25–75 μm. Samples are injected into one end of the capillary tube. As the sample migrates through the capillary its components separate and elute from the column at different times. The resulting electropherogram looks similar to a GC or an HPLC chromatogram, providing both qualitative and quantitative information. Only capillary electrophoretic methods receive further consideration in this section.
As we will see shortly, under normal conditions even neutral species and anions migrate toward the cathode.
## 12.7.1 Theory of Capillary Electrophoresis
In capillary electrophoresis we inject the sample into a buffered solution retained within a capillary tube. When an electric field is applied across the capillary tube, the sample’s components migrate as the result of two types of action: electrophoretic mobility and electroosmotic mobility. Electrophoretic mobility is the solute’s response to the applied electrical field. As described earlier, cations move toward the negatively charged cathode, anions move toward the positively charged anode, and neutral species remain stationary. The other contribution to a solute’s migration is electroosmotic flow, which occurs when the buffer moves through the capillary in response to the applied electrical field. Under normal conditions the buffer moves toward the cathode, sweeping most solutes, including the anions and neutral species, toward the negatively charged cathode.
### Electrophoretic Mobility
The velocity with which a solute moves in response to the applied electric field is called its electrophoretic velocity, νep; it is defined as
$ν_\ce{ep}= \mu_\ce{ep}E \label{12.34}$
where μep is the solute’s electrophoretic mobility, and E is the magnitude of the applied electrical field. A solute’s electrophoretic mobility is defined as
$\mu_\ce{ep} = \dfrac{q}{6\pi ηr } \label{12.35}$
where
• q is the solute’s charge,
• η is the buffer viscosity, and
• r is the solute’s radius.
Using Equation \ref{12.34} and Equation \ref{12.35} we can make several important conclusions about a solute’s electrophoretic velocity. Electrophoretic mobility and, therefore, electrophoretic velocity, increases for more highly charged solutes and for solutes of smaller size. Because q is positive for a cation and negative for an anion, these species migrate in opposite directions. Neutral species, for which q is zero, have an electrophoretic velocity of zero.
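Equations \ref{12.34} and \ref{12.35} are easy to explore numerically. The short Python sketch below is only illustrative: the +1 charge, 0.2 nm hydrated radius, water-like viscosity, and the 40 kV potential across a 0.75 m capillary are assumed values, not data from this section.

```python
import math

ELEMENTARY_CHARGE = 1.602e-19  # coulombs

def electrophoretic_mobility(z, eta, r):
    """Equation 12.35: mu_ep = q / (6 pi eta r), with q = z * e.

    z is the charge number (negative for anions), eta the buffer viscosity
    in kg m^-1 s^-1, and r the solute's (hydrated) radius in m.
    """
    return (z * ELEMENTARY_CHARGE) / (6 * math.pi * eta * r)

def electrophoretic_velocity(mu_ep, E):
    """Equation 12.34: v_ep = mu_ep * E, with E in V m^-1."""
    return mu_ep * E

# Assumed example: a +1 cation with a 0.2 nm hydrated radius in a buffer
# with water-like viscosity, under 40 kV applied across a 0.75 m capillary.
mu = electrophoretic_mobility(z=1, eta=1.0e-3, r=0.2e-9)
E = 40e3 / 0.75  # V/m
print(f"mu_ep = {mu:.2e} m^2 V^-1 s^-1; v_ep = {electrophoretic_velocity(mu, E):.2e} m/s")
```

Note how the sign of z carries through: an anion's electrophoretic velocity is negative, i.e. directed toward the anode.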
### Electroosmotic Mobility
When an electrical field is applied to a capillary filled with an aqueous buffer we expect the buffer’s ions to migrate in response to their electrophoretic mobility. Because the solvent, H2O, is neutral we might reasonably expect it to remain stationary. What we observe under normal conditions, however, is that the buffer solution moves towards the cathode. This phenomenon is called the electroosmotic flow.
Electroosmotic flow occurs because the walls of the capillary tubing are electrically charged. The surface of a silica capillary contains large numbers of silanol groups (–SiOH). At pH levels greater than approximately 2 or 3, the silanol groups ionize to form negatively charged silanate ions (–SiO⁻). Cations from the buffer are attracted to the silanate ions. As shown in Figure 12.56, some of these cations bind tightly to the silanate ions, forming a fixed layer. Because the cations in the fixed layer only partially neutralize the negative charge on the capillary walls, the solution adjacent to the fixed layer—what we call the diffuse layer—contains more cations than anions. Together these two layers are known as the double layer. Cations in the diffuse layer migrate toward the cathode. Because these cations are solvated, the solution is also pulled along, producing the electroosmotic flow.
The anions in the diffuse layer, which also are solvated, try to move toward the anode. Because there are more cations than anions, however, the cations win out and the electroosmotic flow moves in the direction of the cathode.
Figure 12.56: Schematic diagram showing the origin of the double layer within a capillary tube. Although the net charge within the capillary is zero, the distribution of charge is not. The walls of the capillary have an excess of negative charge, which decreases across the fixed layer and the diffuse layer, reaching a value of zero in bulk solution.
The rate at which the buffer moves through the capillary, what we call its electroosmotic flow velocity, νeof, is a function of the applied electric field, E, and the buffer’s electroosmotic mobility, μeof.
$\nu_\ce{eof} = \mu_\ce{eof}E \label{12.36}$
Electroosmotic mobility is defined as
$\mu_\ce{eof} = \dfrac{εζ}{4πη} \label{12.37}$
where
• ε is the buffer dielectric constant,
• ζ is the zeta potential, and
• η is the buffer viscosity.
The zeta potential—the potential of the diffuse layer at a finite distance from the capillary wall—plays an important role in determining the electroosmotic flow velocity. Two factors determine the zeta potential’s value. First, the zeta potential is directly proportional to the charge on the capillary walls, with a greater density of silanate ions corresponding to a larger zeta potential. Below a pH of 2 there are few silanate ions, and the zeta potential and electroosmotic flow velocity are zero. As the pH increases, both the zeta potential and the electroosmotic flow velocity increase. Second, the zeta potential is directly proportional to the thickness of the double layer. Increasing the buffer’s ionic strength provides a higher concentration of cations, decreasing the thickness of the double layer and decreasing the electroosmotic flow.
Zeta Potential
The definition of zeta potential given here is admittedly a bit fuzzy. For a much more technical explanation see Delgado, A. V.; González-Caballero, F.; Hunter, R. J.; Koopal, L. K.; Lyklema, J. “Measurement and Interpretation of Electrokinetic Phenomena,” Pure. Appl. Chem. 2005, 77, 1753–1805. Although this a very technical report, Sections 1.3–1.5 provide a good introduction to the difficulty of defining the zeta potential and measuring its value.
The electroosmotic flow profile is very different from that of a fluid moving under forced pressure. Figure 12.57 compares the electroosmotic flow profile with that the hydrodynamic flow profile in gas chromatography and liquid chromatography. The uniform, flat profile for electroosmosis helps minimize band broadening in capillary electrophoresis, improving separation efficiency.
Figure 12.57: Comparison of hydrodynamic flow and electroosmotic flow. The nearly uniform electroosmotic flow profile means that the electroosmotic flow velocity is nearly constant across the capillary.
### Total Mobility
A solute’s total velocity, $$v_{tot}$$, as it moves through the capillary is the sum of its electrophoretic velocity and the electroosmotic flow velocity.
$ν_\ce{tot} =ν_\ce{ep} + ν_\ce{eof}$
As shown in Figure 12.58, under normal conditions the following general relationships hold true.
$(ν_\ce{tot})_\ce{cations} > ν_\ce{eof}$
$(ν_\ce{tot})_\ce{neutrals} = ν_\ce{eof}$
$(ν_\ce{tot})_\ce{anions} < ν_\ce{eof}$
Cations elute first in an order corresponding to their electrophoretic mobilities, with small, highly charged cations eluting before larger cations of lower charge. Neutral species elute as a single band with an elution rate equal to the electroosmotic flow velocity. Finally, anions are the last components to elute, with smaller, highly charged anions having the longest elution time.
Figure 12.58: Visual explanation for the general elution order in capillary electrophoresis. Each species has the same electroosmotic flow, νeof. Cations elute first because they have a positive electrophoretic velocity, νep. Anions elute last because their negative electrophoretic velocity partially offsets the electroosmotic flow velocity. Neutrals elute with a velocity equal to the electroosmotic flow.
### Migration Time
Another way to express a solute’s velocity is to divide the distance it travels by the elapsed time
$\nu_\ce{tot} = \dfrac{l}{t_\ce{m}} \label{12.38}$
where l is the distance between the point of injection and the detector, and tm is the solute’s migration time. To understand the experimental variables affecting migration time, we begin by noting that
$ν_\ce{tot} = \mu_\ce{tot}E= (\mu_\ce{ep} + \mu_\ce{eof})E\label{12.39}$
Combining Equations \ref{12.38} and \ref{12.39} and solving for tm leaves us with
$t_\ce{m} = \dfrac{l}{(\mu_\ce{ep} + \mu_\ce{eof})E}\label{12.40}$
Finally, the magnitude of the electrical field is
$E = \dfrac{V}{L}\label{12.41}$
where V is the applied potential and L is the length of the capillary tube. Finally, substituting Equation \ref{12.41} into Equation \ref{12.40} leaves us with the following equation for a solute’s migration time.
$t_\ce{m}= \dfrac{lL}{(\mu_\ce{ep} + \mu_\ce{eof})V}\label{12.42}$
To decrease a solute’s migration time—and shorten the analysis time—we can apply a higher voltage or use a shorter capillary tube. We can also shorten the migration time by increasing the electroosmotic flow, although this decreases resolution.
### Efficiency
As we learned in Section 12.2.4, the efficiency of a separation is given by the number of theoretical plates, N. In capillary electrophoresis the number of theoretic plates is
$N = \dfrac{l^2}{2Dt_\ce{m}} = \dfrac{(\mu_\ce{ep} + \mu_\ce{eof})Vl}{2DL} \label{12.43}$
where $$D$$ is the solute’s diffusion coefficient.
From Equations \ref{12.10} and \ref{12.11}, we know that the number of theoretical plates for a solute is
$N = \dfrac{l^2}{\sigma ^2}$
where l is the distance the solute travels and σ is the standard deviation for the solute’s band broadening. For capillary electrophoresis band broadening is due to longitudinal diffusion and is equivalent to 2Dtm, where tm is the migration time.
From Equation \ref{12.43}, the efficiency of a capillary electrophoretic separation increases with higher voltages. Increasing the electroosmotic flow velocity improves efficiency, but at the expense of resolution. Two additional observations deserve comment. First, solutes with larger electrophoretic mobilities—in the same direction as the electroosmotic flow—have greater efficiencies; thus, smaller, more highly charged cations are not only the first solutes to elute, but do so with greater efficiency. Second, efficiency in capillary electrophoresis is independent of the capillary’s length. Theoretical plate counts of approximately 100,000–200,000 are not unusual.
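Under the same assumed conditions as the migration-time sketch above, equation \ref{12.43} gives a plate count of the expected order of magnitude; the diffusion coefficient of 1 × 10⁻⁹ m² s⁻¹ is an assumed value typical of a small ion.

```python
def theoretical_plates(mu_ep, mu_eof, V, l, L, D):
    """Equation 12.43: N = (mu_ep + mu_eof) V l / (2 D L)."""
    return ((mu_ep + mu_eof) * V * l) / (2 * D * L)

# Assumed values (same as the migration-time example above).
N = theoretical_plates(mu_ep=2.0e-8, mu_eof=6.0e-8, V=25e3,
                       l=0.50, L=0.57, D=1.0e-9)
print(f"N ≈ {N:,.0f} theoretical plates")
```

Doubling the applied voltage doubles N, consistent with the observation that efficiency improves at higher voltages.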
It is possible to design an electrophoretic experiment so that anions elute before cations—more about this later—in which smaller, more highly charged anions elute with greater efficiencies.
### Selectivity
In chromatography we defined the selectivity between two solutes as the ratio of their retention factors (Equation \ref{12.9}). In capillary electrophoresis the analogous expression for selectivity is
$α = \dfrac{\mu_\textrm{ep,1}}{\mu_\textrm{ep,2}}$
where μep,1 and μep,2 are the electrophoretic mobilities for the two solutes, chosen such that α ≥ 1. We can often improve selectivity by adjusting the pH of the buffer solution. For example, NH4+ is a weak acid with a pKa of 9.75. At a pH of 9.75 the concentrations of NH4+ and NH3 are equal. Decreasing the pH below 9.75 increases its electrophoretic mobility because a greater fraction of the solute is present as the cation NH4+. On the other hand, raising the pH above 9.75 increases the proportion of the neutral NH3, decreasing its electrophoretic mobility.
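The NH4+/NH3 example can be made quantitative with the Henderson–Hasselbalch relationship. A minimal sketch, using the pKa of 9.75 from the text:

```python
def cation_fraction(pH, pKa):
    """Fraction of a weak acid (here NH4+) present in its protonated, charged form."""
    return 1.0 / (1.0 + 10 ** (pH - pKa))

for pH in (8.75, 9.75, 10.75):
    print(f"pH {pH:5.2f}: {cation_fraction(pH, 9.75):4.0%} present as NH4+")
```

One pH unit below the pKa about 91% of the solute carries a positive charge; one unit above, only about 9% does, so the effective electrophoretic mobility falls accordingly.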
### Resolution
The resolution between two solutes is
$R = \dfrac{0.177(\mu_\textrm{ep,1} - \mu_\textrm{ep,2})\sqrt{V}}{\sqrt{D(\mu_\textrm{avg} + \mu_\textrm{eof})}}\label{12.44}$
where μavg is the average electrophoretic mobility for the two solutes. Increasing the applied voltage and decreasing the electroosmotic flow velocity improves resolution. The latter effect is particularly important. Although increasing electroosmotic flow improves analysis time and efficiency, it decreases resolution.
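A numeric reading of equation \ref{12.44}, with assumed mobilities and diffusion coefficient, shows why a slower electroosmotic flow helps resolution:

```python
import math

def resolution(mu1, mu2, mu_eof, V, D):
    """Equation 12.44: R = 0.177 (mu1 - mu2) sqrt(V) / sqrt(D (mu_avg + mu_eof))."""
    mu_avg = (mu1 + mu2) / 2
    return 0.177 * (mu1 - mu2) * math.sqrt(V) / math.sqrt(D * (mu_avg + mu_eof))

# Assumed values: two cations of similar mobility, 25 kV, D = 1e-9 m^2/s.
for mu_eof in (6.0e-8, 3.0e-8):
    R = resolution(3.0e-8, 2.8e-8, mu_eof, V=25e3, D=1.0e-9)
    print(f"mu_eof = {mu_eof:.1e}: R = {R:.2f}")
```

Halving the electroosmotic mobility improves R even though it lengthens the analysis.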
## 12.7.2 Instrumentation
The basic instrumentation for capillary electrophoresis is shown in Figure 12.59 and includes a power supply for applying the electric field, anode and cathode compartments containing reservoirs of the buffer solution, a sample vial containing the sample, the capillary tube, and a detector. Each part of the instrument receives further consideration in this section.
Figure 12.59: Schematic diagram of the basic instrumentation for capillary electrophoresis. The sample and the source reservoir are switched when making injections.
### Capillary Tubes
Figure 12.60 shows a cross-section of a typical capillary tube. Most capillary tubes are made from fused silica coated with a 15–35 μm layer of polyimide to give it mechanical strength. The inner diameter is typically 25–75 μm—smaller than the internal diameter of a capillary GC column—with an outer diameter of 200–375 μm.
Figure 12.60: Cross section of a capillary column for capillary electrophoresis. The dimensions shown here are typical and are scaled proportionally.
The capillary column’s narrow opening and the thickness of its walls are important. When an electric field is applied to the buffer solution within the capillary, current flows through the capillary. This current leads to the release of heat—what we call Joule heating. The amount of heat released is proportional to the capillary’s radius and the magnitude of the electrical field. Joule heating is a problem because it changes the buffer solution’s viscosity, with the solution at the center of the capillary being less viscous than that near the capillary walls. Because a solute’s electrophoretic mobility depends on viscosity (Equation \ref{12.35}), solute species in the center of the capillary migrate at a faster rate than those near the capillary walls. The result is an additional source of band broadening that degrades the separation. Capillaries with smaller inner diameters generate less Joule heating, and capillaries with larger outer diameters are more effective at dissipating the heat. Placing the capillary tube inside a thermostated jacket is another method for minimizing the effect of Joule heating; in this case a smaller outer diameter allows for a more rapid dissipation of thermal energy.
### Injecting the Sample
There are two commonly used method for injecting a sample into a capillary electrophoresis column: hydrodynamic injection and electrokinetic injection. In both methods the capillary tube is filled with the buffer solution. One end of the capillary tube is placed in the destination reservoir and the other end is placed in the sample vial.
Hydrodynamic injection uses pressure to force a small portion of sample into the capillary tubing. A difference in pressure is applied across the capillary by either pressurizing the sample vial or by applying a vacuum to the destination reservoir. The volume of sample injected, in liters, is given by the following equation
$V_\ce{inj}= \dfrac{ΔPd^4πt}{128ηL} \times 10^3\label{12.45}$

where ΔP is the difference in pressure across the capillary in pascals, d is the capillary's inner diameter in meters, t is the amount of time that the pressure is applied in seconds, η is the buffer's viscosity in kg m⁻¹ s⁻¹, and L is the length of the capillary tubing in meters. The factor of 10³ changes the units from cubic meters to liters.
For a hydrodynamic injection we move the capillary from the source reservoir to the sample. The anode remains in the source reservoir.
A hydrodynamic injection is also possible by raising the sample vial above the destination reservoir and briefly inserting the filled capillary.
If you want to verify the units in Equation \ref{12.45}, recall from Table 2.2 that 1 Pa is equivalent to 1 kg m⁻¹ s⁻².
Example 12.9
In a hydrodynamic injection we apply a pressure difference of 2.5 × 10³ Pa (a ∆P ≈ 0.02 atm) for 2 s to a 75-cm long capillary tube with an internal diameter of 50 μm. Assuming that the buffer's viscosity is 10⁻³ kg m⁻¹ s⁻¹, what volume and length of sample did we inject?
Solution
Making appropriate substitutions into equation 12.45 gives the sample’s volume as
\begin{align} V_\ce{inj} &= \mathrm{\dfrac{(2.5×10^3\: kg\: m^{−1}\: s^{−2})(50×10^{−6}\: m)^4(3.14)(2\: s)}{(128)(0.001\: kg\: m^{−1}\: s^{−1})(0.75\:m)} × 10^3\: L/m^3}\\ V_\ce{inj} &= \mathrm{1×10^{−9}\: L = 1\: nL} \end{align}
Because the interior of the capillary is cylindrical, the length of the sample, l, is easy to calculate using the equation for the volume of a cylinder; thus
$l = \dfrac{V_\ce{inj}}{πr^2} = \mathrm{\dfrac{(1.0×10^{−9}\: L)(10^{−3}\: m^3/L)}{(3.14)(25×10^{−6}\: m)^2} = 5×10^{−4}\: m = 0.5\: mm}$
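The arithmetic in Example 12.9 is easy to check in a few lines of Python; all of the numbers below come from the example itself.

```python
import math

def hydrodynamic_injection_volume(dP, d, t, eta, L):
    """Equation 12.45: V_inj = dP * d**4 * pi * t / (128 * eta * L) * 1e3, in liters."""
    return (dP * d**4 * math.pi * t) / (128 * eta * L) * 1e3

V_inj = hydrodynamic_injection_volume(dP=2.5e3, d=50e-6, t=2.0, eta=1.0e-3, L=0.75)
plug_length = (V_inj * 1e-3) / (math.pi * (25e-6) ** 2)  # l = V / (pi r^2), in m
print(f"V_inj ≈ {V_inj:.1e} L; plug length ≈ {plug_length * 1e3:.2f} mm")
```

Running this reproduces the values above: roughly 1 nL injected, occupying about 0.5 mm of the capillary.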
Exercise 12.9
Suppose that you need to limit your injection to less than 0.20% of the capillary’s length. Using the information from Example 12.9, what is the maximum injection time for a hydrodynamic injection?
In an electrokinetic injection we place both the capillary and the anode into the sample and briefly apply a potential. The volume of injected sample is the product of the capillary's cross sectional area and the length of the capillary occupied by the sample. In turn, this length is the product of the solute's velocity (see equation 12.39) and time; thus

$V_\ce{inj} = (\mu_\ce{ep} + \mu_\ce{eof})E′tπr^2 × 10^3 \label{12.46}$
where
• $$r$$ is the capillary’s radius,
• $$L$$ is the length of the capillary, and
• $$E′$$ is the effective electric field in the sample.
An important consequence of equation 12.46 is that an electrokinetic injection is inherently biased toward solutes with larger electrophoretic mobilities. If two solutes have equal concentrations in a sample, we inject a larger volume—and thus more moles—of the solute with the larger μep.
The electric field in the sample is different from the electric field in the rest of the capillary because the sample and the buffer have different ionic compositions. In general, the sample's ionic strength is smaller, which makes its conductivity smaller. The effective electric field is
$E′ = E \times \dfrac{κ_\ce{buf}}{κ_\ce{sam}}$
where κbuf and κsam are the conductivities of the buffer and the sample, respectively.
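The injection bias follows directly from equation \ref{12.46}. A minimal sketch, assuming a 5 kV injection potential, a sample whose conductivity is half that of the buffer, and two cations of equal concentration but different mobility:

```python
import math

def electrokinetic_injection_volume(mu_ep, mu_eof, E_prime, t, r):
    """Equation 12.46: V_inj = (mu_ep + mu_eof) * E' * t * pi * r**2 * 1e3, in liters."""
    return (mu_ep + mu_eof) * E_prime * t * math.pi * r**2 * 1e3

E = 5e3 / 0.75            # assumed: 5 kV across a 0.75 m capillary, in V/m
E_prime = E * 2.0         # assumed: kappa_buf / kappa_sam = 2
for mu_ep in (3.0e-8, 1.0e-8):   # two cations at equal concentration
    V = electrokinetic_injection_volume(mu_ep, mu_eof=6.0e-8, E_prime=E_prime,
                                        t=2.0, r=25e-6)
    print(f"mu_ep = {mu_ep:.1e}: V_inj ≈ {V:.2e} L")
```

The faster cation is injected in a volume about 30% larger, so equal sample concentrations do not produce equal injected amounts.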
When an analyte’s concentration is too small to detect reliably, it may be possible to inject it in a manner that increases its concentration in the capillary tube. This method of injection is called stacking. Stacking is accomplished by placing the sample in a solution whose ionic strength is significantly less than that of the buffer in the capillary tube. Because the sample plug has a lower concentration of buffer ions, the effective field strength across the sample plug, E′ is larger than that in the rest of the capillary.
We know from equation 12.34 that electrophoretic velocity is directly proportional to the electrical field. As a result, the cations in the sample plug migrate toward the cathode with a greater velocity, and the anions migrate more slowly—neutral species are unaffected and move with the electroosmotic flow. When the ions reach their respective boundaries between the sample plug and the buffering solution, the electrical field decreases and the electrophoretic velocity of cations decreases and that for anions increases. As shown in Figure 12.61, the result is a stacking of cations and anions into separate, smaller sampling zones. Over time, the buffer within the capillary becomes more homogeneous and the separation proceeds without additional stacking.
Figure 12.61 The stacking of cations and anions. The top diagram shows the initial sample plug and the bottom diagram shows how the cations and anions become concentrated at opposite sides of the sample plug.
### Applying the Electrical Field
Migration in electrophoresis occurs in response to an applied electrical field. The ability to apply a large electrical field is important because higher voltages lead to shorter analysis times (see equation 12.42), more efficient separations (equation 12.43), and better resolution (equation 12.44). Because narrow bored capillary tubes dissipate Joule heating so efficiently, voltages of up to 40 kV are possible.
Because of the high voltages, be sure to follow your instrument’s safety guidelines.
### Detectors
Most of the detectors used in HPLC also find use in capillary electrophoresis. Among the more common detectors are those based on the absorption of UV/Vis radiation, fluorescence, conductivity, amperometry, and mass spectrometry. Whenever possible, detection is done “on-column” before the solutes elute from the capillary tube and additional band broadening occurs.
UV/Vis detectors are among the most popular. Because absorbance is directly proportional to path length, the capillary tubing's small diameter leads to signals that are smaller than those obtained in HPLC. Several approaches have been used to increase the pathlength, including a Z-shaped sample cell and multiple reflections (see Figure 12.62). Detection limits are about 10⁻⁷ M.
Figure 12.62: Two approaches to on-column detection in capillary electrophoresis using a UV/Vis diode array spectrometer: (a) Z-shaped bend in capillary, and (b) multiple reflections.
Better detection limits are obtained using fluorescence, particularly when using a laser as an excitation source. When using fluorescence detection a small portion of the capillary's protective coating is removed and the laser beam is focused on the inner portion of the capillary tubing. Emission is measured at an angle of 90° to the laser. Because the laser provides an intense source of radiation that can be focused to a narrow spot, detection limits are as low as 10⁻¹⁶ M.
Solutes that do not absorb UV/Vis radiation or that do not undergo fluorescence can be detected by other detectors. Table 12.10 provides a list of detectors for capillary electrophoresis along with some of their important characteristics.
Table 12.10: Characteristics of Detectors for Capillary Electrophoresis

| detector | selectivity (universal or analyte must ...) | detection limit (moles injected) | detection limit (molarity) | on-column detection? |
| --- | --- | --- | --- | --- |
| UV/Vis absorbance | have a UV/Vis chromophore | 10⁻¹³–10⁻¹⁶ | 10⁻⁵–10⁻⁷ | yes |
| indirect absorbance | universal | 10⁻¹²–10⁻¹⁵ | 10⁻⁴–10⁻⁶ | yes |
| fluorescence | have a favorable quantum yield | 10⁻¹⁵–10⁻¹⁷ | 10⁻⁷–10⁻⁹ | yes |
| laser fluorescence | have a favorable quantum yield | 10⁻¹⁸–10⁻²⁰ | 10⁻¹³–10⁻¹⁶ | yes |
| mass spectrometer | universal (total ion); selective (single ion) | 10⁻¹⁶–10⁻¹⁷ | 10⁻⁸–10⁻¹⁰ | no |
| amperometry | undergo oxidation or reduction | 10⁻¹⁸–10⁻¹⁹ | 10⁻⁷–10⁻¹⁰ | no |
| conductivity | universal | 10⁻¹⁵–10⁻¹⁶ | 10⁻⁷–10⁻⁹ | no |

Source: Baker, D. R. Capillary Electrophoresis, Wiley-Interscience: New York, 1995.
## 12.7.3 Capillary Electrophoresis Methods
There are several different forms of capillary electrophoresis, each of which has its particular advantages. Four of these methods are briefly described in this section.
### Capillary Zone Electrophoresis (CZE)
The simplest form of capillary electrophoresis is capillary zone electrophoresis. In CZE we fill the capillary tube with a buffer solution and, after loading the sample, place the ends of the capillary tube in reservoirs containing additional buffer solution. Usually the end of the capillary containing the sample is the anode and solutes migrate toward the cathode at a velocity determined by their electrophoretic mobility and the electroosmotic flow. Cations elute first, with smaller, more highly charged cations eluting before larger cations with smaller charges. Neutral species elute as a single band. Anions are the last species to elute, with smaller, more negatively charged anions having the longest elution times.
We can reverse the direction of electroosmotic flow by adding an alkylammonium salt to the buffer solution. As shown in Figure 12.63, the positively charged end of the alkyl ammonium ions bind to the negatively charged silanate ions on the capillary’s walls. The tail of the alkyl ammonium ion is hydrophobic and associates with the tail of another alkyl ammonium ion. The result is a layer of positive charges that attract anions in the buffer solution. The migration of these solvated anions toward the anode reverses the electroosmotic flow’s direction. The order of elution is exactly opposite of that observed under normal conditions.
Figure 12.63 Two modes of capillary zone electrophoresis showing (a) normal migration with electroosmotic flow toward the cathode and (b) reversed migration in which the electroosmotic flow is toward the anode.
Coating the capillary’s walls with a nonionic reagent eliminates the electroosmotic flow. In this form of CZE the cations migrate from the anode to the cathode. Anions elute into the source reservoir and neutral species remain stationary.
Capillary zone electrophoresis provides effective separations of charged species, including inorganic anions and cations, organic acids and amines, and large biomolecules such as proteins. For example, CZE has been used to separate a mixture of 36 inorganic and organic ions in less than three minutes [15]. A mixture of neutral species, of course, cannot be resolved.
### Micellar Electrokinetic Capillary Chromatography (MEKC)
One limitation to CZE is its inability to separate neutral species. Micellar electrokinetic capillary chromatography overcomes this limitation by adding a surfactant, such as sodium dodecylsulfate (Figure 12.64a), to the buffer solution. Sodium dodecylsulfate, or SDS, has a long-chain hydrophobic tail and a negatively charged ionic functional group at its head. When the concentration of SDS is sufficiently large a micelle forms. A micelle consists of a spherical agglomeration of 40–100 surfactant molecules in which the hydrocarbon tails point inward and the negatively charged heads point outward (Figure 12.64b).
Figure 12.64: (a) Structure of sodium dodecylsulfate and its representation, and (b) cross section through a micelle showing its hydrophobic interior and its hydrophilic exterior.
Because micelles have a negative charge, they migrate toward the cathode with a velocity less than the electroosmotic flow velocity. Neutral species partition themselves between the micelles and the buffer solution in a manner similar to the partitioning of solutes between the two liquid phases in HPLC. Because there is a partitioning between two phases, we include the descriptive term chromatography in the technique's name. Note that in MEKC both phases are mobile.
The elution order for neutral species in MEKC depends on the extent to which each partitions into the micelles. Hydrophilic neutrals are insoluble in the micelle’s hydrophobic inner environment and elute as a single band, as they would in CZE. Neutral solutes that are extremely hydrophobic are completely soluble in the micelle, eluting with the micelles as a single band. Those neutral species that exist in a partition equilibrium between the buffer solution and the micelles elute between the completely hydrophilic and completely hydrophobic neutral species. Those neutral species favoring the buffer solution elute before those favoring the micelles. Micellar electrokinetic chromatography has been used to separate a wide variety of samples, including mixtures of pharmaceutical compounds, vitamins, and explosives.
### Capillary Gel Electrophoresis (CGE)
In capillary gel electrophoresis the capillary tubing is filled with a polymeric gel. Because the gel is porous, a solute migrates through the gel with a velocity determined both by its electrophoretic mobility and by its size. The ability to effect a separation using size is helpful when the solutes have similar electrophoretic mobilities. For example, fragments of DNA of varying length have similar charge-to-size ratios, making their separation by CZE difficult. Because the DNA fragments are of different size, a CGE separation is possible.
The capillary used for CGE is usually treated to eliminate electroosmotic flow, preventing the gel’s extrusion from the capillary tubing. Samples are injected electrokinetically because the gel provides too much resistance for hydrodynamic sampling. The primary application of CGE is the separation of large biomolecules, including DNA fragments, proteins, and oligonucleotides.
### Capillary Electrochromatography (CEC)
Another approach to separating neutral species is capillary electrochromatography. In CEC the capillary tubing is packed with 1.5–3 μm particles coated with a bonded stationary phase. Neutral species separate based on their ability to partition between the stationary phase and the buffer, which is moving as a result of the electroosmotic flow; Figure 12.65 provides a representative example for the separation of a mixture of hydrocarbons. A CEC separation is similar to the analogous HPLC separation, but without the need for high pressure pumps. Efficiency in CEC is better than in HPLC, and analysis times are shorter.
Figure 12.65: Capillary electrochromatographic separation of a mixture of hydrocarbons in DMSO. The column contains a porous polymer of butyl methacrylate and lauryl acrylate (25%:75% mol:mol) with butanediol diacrylate as a crosslinker. Data provided by Zoe LaPier and Michelle Bushey, Department of Chemistry, Trinity University.
The best way to appreciate the theoretical and practical details discussed in this section is to carefully examine a typical analytical method. Although each method is unique, the following description of the determination of a vitamin B complex by capillary zone electrophoresis or by micellar electrokinetic capillary chromatography provides an instructive example of a typical procedure. The description here is based on Smyth, W. F. Analytical Chemistry of Complex Matrices, Wiley Teubner: Chichester, England, 1996, pp. 154–156.
Representative Method 12.3: Determination of a Vitamin B Complex by CZE or MEKC
Description of Method
The water soluble vitamins B1 (thiamine hydrochloride), B2 (riboflavin), B3 (niacinamide), and B6 (pyridoxine hydrochloride) are determined by CZE using a pH 9 sodium tetraborate-sodium dihydrogen phosphate buffer or by MEKC using the same buffer with the addition of sodium dodecyl sulfate. Detection is by UV absorption at 200 nm. An internal standard of o-ethoxybenzamide is used to standardize the method.
Procedure
Crush a vitamin B complex tablet and place it in a beaker with 20.00 mL of a 50% v/v methanol solution that is 20 mM in sodium tetraborate and 100.0 ppm in o-ethoxybenzamide. After mixing for 2 min to ensure that the B vitamins are dissolved, pass a 5.00-mL portion through a 0.45-μm filter to remove insoluble binders. Load an approximately 4 nL sample into a capillary column with an inner diameter of 50 μm. For CZE the capillary column contains a 20 mM pH 9 sodium tetraborate-sodium dihydrogen phosphate buffer. For MEKC the buffer is also 150 mM in sodium dodecyl sulfate. Apply a 40 kV/m electrical field to effect both the CZE and MEKC separations.
Questions
1. Methanol, which elutes at 4.69 min, is included as a neutral species to indicate the electroosmotic flow. When using standard solutions of each vitamin, CZE peaks are found at 3.41 min, 4.69 min, 6.31 min, and 8.31 min. Examine the structures and pKa information in Figure 12.66 and identify the order in which the four B vitamins elute.
Vitamin B1 is a cation and elutes before the neutral species methanol; thus it is the compound that elutes at 3.41 min. Vitamin B3 is a neutral species and elutes with methanol at 4.69 min. The remaining two B vitamins are weak acids that partially ionize to weak base anions in the pH 9 buffer. Of the two, vitamin B6 is the stronger acid (a pKa of 9.0 versus a pKa of 9.7) and is present to a greater extent in its anionic form. Vitamin B6, therefore, is the last of the vitamins to elute.
2. The order of elution when using MEKC is vitamin B3 (5.58 min), vitamin B6 (6.59 min), vitamin B2 (8.81 min), and vitamin B1 (11.21 min). What conclusions can you make about the solubility of the B vitamins in the sodium dodecylsulfate micelles? The micelles elute at 17.7 min.
The elution time for vitamin B1 shows the greatest change, increasing from 3.41 min to 11.21 minutes. Clearly vitamin B1 has the greatest solubility in the micelles. Vitamin B2 and vitamin B3 have a more limited solubility in the micelles, showing only slightly longer elution times in the presence of the micelles. Interestingly, the elution time for vitamin B6 decreases in the presence of the micelles.
3. For quantitative work an internal standard of o-ethoxybenzamide is added to all samples and standards. Why is an internal standard necessary?
Although the method of injection is not specified, neither a hydrodynamic injection nor an electrokinetic injection is particularly reproducible. The use of an internal standard compensates for this limitation.
(You can read more about the use of internal standards in capillary electrophoresis in the following paper: Altria, K. D. “Improved Performance in Capillary Electrophoresis Using Internal Standards,” LC.GC Europe, September 2002.)
Figure 12.66: Structures of the four water soluble B vitamins in their predominate forms at a pH of 9; pKa values are shown in red.
## 12.7.4 Evaluation
When compared to GC and HPLC, capillary electrophoresis provides similar levels of accuracy, precision, and sensitivity, and a comparable degree of selectivity. The amount of material injected into a capillary electrophoretic column is significantly smaller than that for GC and HPLC—typically 1 nL versus 0.1 μL for capillary GC and 1–100 μL for HPLC. Detection limits for capillary electrophoresis, however, are 100–1000 times poorer than those for GC and HPLC. The most significant advantages of capillary electrophoresis are improvements in separation efficiency, time, and cost. Capillary electrophoretic columns contain substantially more theoretical plates (≈10⁶ plates/m) than are found in HPLC (≈10⁵ plates/m) and capillary GC columns (≈10³ plates/m), providing unparalleled resolution and peak capacity. Separations in capillary electrophoresis are fast and efficient. Furthermore, the capillary column's small volume means that a capillary electrophoresis separation requires only a few microliters of buffer solution, compared to 20–30 mL of mobile phase for a typical HPLC separation.
Note
See Section 12.4.8 for an evaluation of gas chromatography, and Section 12.5.6 for an evaluation of high-performance liquid chromatography. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8237160444259644, "perplexity": 3464.7260757506037}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991648.10/warc/CC-MAIN-20210511153555-20210511183555-00153.warc.gz"} |